Thursday, February 28, 2019

calculus - Why Cauchy's definition of infinitesimal is not widely used?

Cauchy defined infinitesimal as a variable or a function tending to zero, or as a null sequence.


However, I found that this definition is not popular and has been nearly discarded in mathematics, according to the following statements.



(1). Infinitesimal entry in Wikipedia:



Some older textbooks use the term "infinitesimal" to refer to a variable or a function tending to zero



Why are textbooks that use this definition said to be old?


(2). Robert Goldblatt, Lectures on the Hyperreals: An Introduction to Nonstandard Analysis, p. 15 (his = Cauchy's) [quoted passage shown in an image] Why does it say 'Even'?


(3). Abraham Robinson, Non-standard Analysis, p. 276 [quoted passage shown in an image] Why were Cauchy's definition of infinitesimal, along with his 'basic approach', superseded?


Besides, I found that most real analysis and calculus textbooks, such as Principles of Mathematical Analysis (Rudin) and Introduction to Calculus and Analysis (Richard Courant, Fritz John), do not introduce Cauchy's definition of infinitesimal. Why? Why was Cauchy's definition of infinitesimal unpopular, not widely used, and nearly discarded?


P.S. I referred to some papers but still cannot find the answer.

real analysis - Prove $\lim_{a \to \infty} \frac{3a+2}{5a+4}= \frac35$ using definition of limit

Prove the limit using the definition of the limit: $$\lim_{a \to \infty}\ \frac{3a+2}{5a+4}= \frac{3}{5}$$



Answer: Let $\varepsilon >0 $. We want to obtain the inequality $$\left|\frac{3a+2}{5a+4}- \frac{3}{5}\right|< {\varepsilon}.$$
$$\left|\frac{3a+2}{5a+4} - \frac{3}{5}\right|\ =\left|\frac{5(3a+2)-3(5a+4)}{5(5a+4)}\right|= \left|\frac{-2}{5(5a+4)}\right|\le\frac{1}{a}$$
(valid for all $a>0$, since $2a\le 5(5a+4)$). Therefore we choose $K \in \mathbb{N}$ such that $K> \frac{1}{\varepsilon}$; then for all $a\ge K$,
$$\left|\frac{3a+2}{5a+4}- \frac{3}{5}\right| \le\frac{1}{a}\le\frac{1}{K} < \varepsilon. $$ Is this correct?

Find limit of this decreasing sequence



$$a_n=\left(1-\frac{1}{2^2}\right)\left(1-\frac{1}{3^2}\right)\left(1-\frac{1}{4^2}\right) \cdots
\left(1-\frac{1}{n^2}\right)
$$



I have proved that this sequence is decreasing. However I am trying to figure out how to find its limit.


Answer



Hint: Rewrite each $1-\frac{1}{k^2}$ as $\frac{(k-1)(k+1)}{k^2}$ and observe the mass cancellations. It will be useful to do this explicitly for say the product of the first $5$ terms.
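For illustration, here is a sketch of the cancellation the hint points to:
$$a_n=\prod_{k=2}^{n}\frac{(k-1)(k+1)}{k^2}=\frac{1\cdot3}{2\cdot2}\cdot\frac{2\cdot4}{3\cdot3}\cdot\frac{3\cdot5}{4\cdot4}\cdots\frac{(n-1)(n+1)}{n\cdot n}=\frac{n+1}{2n}\xrightarrow[n\to\infty]{}\frac12.$$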



Wednesday, February 27, 2019

limits - $f(x+y) = f(x)+f(y)$ continuous at $x_0=0 \implies f(x)$ is continuous over $\mathbb{R}$?


Let $x,y \in R$


$f(x+y) = f(x)+f(y)$


is it true that if $f$ is continuous at $x_0=0$, then $f$ is continuous on $\mathbb{R}$?


Answer




At any arbitrary $x_1\in\mathbb{R}$ and any $\Delta\neq 0$, we have $$ f(x_1+\Delta)-f(x_1)=f(\Delta)=f(\Delta+x_0)-f(x_0). $$ As $\Delta\to 0$, the rightmost expression above goes to $0$ due to continuity at $x_0$, so the leftmost expression also goes to $0$. This implies continuity at $x_1$ and therefore in $\mathbb{R}$. Note we don't need the fact that $x_0=0$.


calculus - Formula for $r+2r^2+3r^3+...+nr^n$





Is there a formula to get $r+2r^2+3r^3+\dots+nr^n$ provided that $|r|<1$? This seems like the geometric "sum" $r+r^2+\dots+r^n$ so I guess that we have to use some kind of trick to get it, but I cannot think of a single one. Can you please help me with this problem?


Answer



We have
$$1+r+r^2 + \cdots + r^n = \dfrac{r^{n+1}-1}{r-1}$$
Differentiating this once, we obtain
$$1+2r + 3r^2 + \cdots + nr^{n-1}= \dfrac{nr^{n+1}-(n+1)r^n + 1}{(r-1)^2}$$

Multiply the above by $r$ to obtain what you want.
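Explicitly, for $r \ne 1$ this gives the closed form
$$r+2r^2+3r^3+\dots+nr^n=\frac{r\left(nr^{n+1}-(n+1)r^n+1\right)}{(r-1)^2}.$$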


Tuesday, February 26, 2019

calculus - Evaluating $\int_{-\infty}^\infty 1-e^{-\frac{1}{x^2}}\,{\rm d}x$



I am interested in the improper integral: $$I=\int_{-\infty}^\infty 1-e^{-\frac{1}{x^2}}{\rm d}x=2\int_{0}^\infty 1-e^{-\frac{1}{x^2}}{\rm d}x$$ which I am fairly sure converges.




I broke the integral into one over $1$ and one over the exponential and then tried to evaluate this through a change to polar coordinates similar to the way to evaluate the Gaussian Integral.



However, when I attempt to introduce the variable change $\frac{1}{x}=u$ to apply the polar coordinates, I am left with the bounds being $\lim_{\epsilon\to0}[\epsilon,-\epsilon]$. I am not sure of how to convert these bounds into polar form in terms of $r$ and $\theta$. If anyone can give me a hint of where to continue, what I am doing wrong, or if there is a better way of evaluating this integral it would be greatly appreciated.


Answer



As I said I would, I'll add my two cents (it's more or less self-explanatory, I hope):
\begin{align}\int^\infty_0(1-e^{-1/x^2})\,dx&=\int^\infty_0\frac{1-e^{-x^2}}{x^2}\,dx
\\&=\int^\infty_0\int^1_0e^{-ax^2}\,da\,dx
\\&=\int^1_0\int^\infty_0e^{-ax^2}\,dx\,da
\\&=\int^1_0\frac{\sqrt{\pi}}{2\sqrt{a}}\,da
\\&=\sqrt{\pi}
\end{align}
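As a quick numerical sanity check (a sketch using scipy, not part of the original argument):

import numpy as np
from scipy.integrate import quad

# integrate 1 - exp(-1/x^2) over (0, infinity); the computation above gives sqrt(pi)
val, err = quad(lambda x: 1.0 - np.exp(-1.0 / x**2), 0.0, np.inf)
print(val, np.sqrt(np.pi))  # both print ~1.7724538509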


calculus - Big-$O$ inside a log operation



I would appreciate help in understanding how:



$$\log \left(\frac{1}{s - 1} - O(1)\right) = \log \left(\frac{1}{s - 1}\right) + O(1)\text{ as }s \rightarrow 1^+$$



I thought of perhaps a Taylor series for $\log(x - 1)$, but that would have an $O(x^2)$ term.



Also I would appreciate any reference or tutorial recommendations to get a good understanding of big-$O$ notation in general.




Thanks very much.


Answer



$f(s)=O(1)$ as $s\rightarrow1^+$ means that there is a constant $C$ such that $|f(s)|\le C$ for all $s$ sufficiently close to $1$ from the right.

One thing to always keep in mind is that equalities between things with $O$'s or $o$'s are not really symmetric equalities. This means that it is probably safe to prove this 'in both directions'.



The left hand side is talking about a function $f(s)$ such that there is a constant $C$ such that $|e^{f(s)}-\frac{1}{s-1}|\le C$ for all $s$ sufficiently close to $1$ from the right.

The right hand side is talking about a function $g(s)$ such that there is a constant $D$ such that $|g(s)-\log(\frac{1}{s-1})|\le D$ for all $s$ sufficiently close to $1$ from the right.


Let us check that the function on the left $f$ satisfies the description on the right. According to its description, we can write $f(s)$ as $\log(\frac{1}{s-1}+h(s))$, where $h$ is bounded as $s\rightarrow 1^+$. We subtract $\log(\frac{1}{s-1})$ and get $\log(1+(s-1)h(s))$. Since $h$ is bounded and $(s-1)\rightarrow 0$ then this subtraction is bounded. This was the description on the right.



Now, we do the other direction. Let us check that $g$ satisfies the description on the left. From what we know of $g$ we can write it as $\log(\frac{1}{s-1})+a(s)$ with $a$ bounded. We now compute $e^{\log(1/(s-1))+a(s)}-\frac{1}{s-1}=\frac{e^{a(s)}}{(s-1)}-\frac{1}{(s-1)}=\frac{e^{a(s)}-1}{(s-1)}$ ... oops.



I guess that is why this is not really a symmetric equality.



So, reading from left to right the equality is true. Reading from right to left, it is not.


Monday, February 25, 2019

complex analysis - Euler's formula with base 10?



Does Euler's formula ($e^{ix} = \cos x + i \sin x$) work in base $10$? If it does, how could I express it?



Answer



We can try converting the formula to base $10$.



Let $\log$ be the natural (i.e., base $e$) logarithm. Then $10 = e^{\log10}$, so:



$$
10^{ix} = e^{ix\log10}=\cos(x\log10) + i\sin(x\log10)
$$
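A quick numerical check of this identity in Python (a sketch; the test point $x=0.7$ is arbitrary):

import math

x = 0.7
lhs = 10 ** (1j * x)
rhs = complex(math.cos(x * math.log(10)), math.sin(x * math.log(10)))
print(abs(lhs - rhs))  # ~0, up to floating-point error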


improper integrals - Evaluating $\int_{-\infty}^{\infty}e^{-x^2}dx$ using polar coordinates.


I need to express the following improper integral as a double integral of $x$ and $y$ and then, using polar coordinates, evaluate it.


$$I=\int_{-\infty}^{\infty}e^{-x^2}dx$$


Plotting it, we find a Gaussian centered at $x=0$ which tends to infinity to both sides. We can easily express it as a double integral :


$$I=\int_{0}^{1}\int_{-\infty}^{\infty}e^{-x^2}dxdy$$


Evaluating both using Wolfram Alpha gives $\sqrt{\pi}$, so it converges.


I know that $x=r\cos(\theta)$ and that $dx\,dy=r\,dr\,d\theta$, but substituting this in the above integral and evaluating $\theta$ from $0$ to $2\pi$ and $r$ from $0$ to $\infty$ doesn't yield the correct answer. What's wrong here?


Thanks a lot !


Answer



You could try: $$I^2=\left ( \int_{-\infty}^{\infty}e^{-x^2}\mathrm{d}x \right ) \left( \int_{-\infty}^{\infty}e^{-y^2}\mathrm{d}y \right) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+y^2)}\mathrm{d}x\ \mathrm{d}y$$ then use the polar coordinates to compute the double integral.
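Carrying out the suggested computation: with $x=r\cos\theta$, $y=r\sin\theta$ and $dx\,dy=r\,dr\,d\theta$,
$$I^2=\int_0^{2\pi}\int_0^{\infty}e^{-r^2}\,r\,dr\,d\theta=2\pi\left[-\frac{1}{2}e^{-r^2}\right]_0^{\infty}=\pi,\qquad\text{so } I=\sqrt{\pi}.$$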


real analysis - How can I find $\lim_{n\rightarrow\infty}(1+\frac{x}n)^{\sqrt{n}}$?




How can I find $$\lim_{n\rightarrow\infty}\left(1+\frac{x}n\right)^{\sqrt{n}}\;?$$




I know $\lim_{n\rightarrow\infty}\left(1+\frac{x}n\right)^{n} = \exp (x)$, but I don't know how I can apply the definition to this particular limit.




I know then, that $\lim_{n\rightarrow\infty}\big(1+\frac{x}n\big)=1$, but I don't think this is right to consider.


Answer



$$\lim_{n\rightarrow\infty}\left(1+\frac{x}n\right)^{\sqrt{n}} = \lim_{n\rightarrow\infty}\left[\left(1+\frac{x}n\right)^{{\frac{n}{x}}}\right]^{{\frac{x}{n}}{\sqrt{n}}}$$
From
$$\lim_{n\rightarrow\infty}\left[\left(1+\frac{x}n\right)^{{\frac{n}{x}}}\right]=e \quad \text{and} \quad \lim_{n\rightarrow\infty}{{\frac{x}{n}}{\sqrt{n}}}=0,$$ ** (see the note below), we get
$$\lim_{n\rightarrow\infty}\left(1+\frac{x}n\right)^{\sqrt{n}} = e^0=1$$



EDIT
I add the note below, as my calculation was considered insufficiently justified.




** and because the terms are positive, we do not have an indeterminate case $0^0$, $1^{\infty}$, or $\infty^0$.


real analysis - Integral and measure theory question

Let $E_n$ be a sequence of Lebesgue measurable sets in $[0,1]$. Suppose that for some $0 \leq k \leq 1$ we have that
$$m(E_n \cap [0,r])= kr$$
for any $r$ such that $0 \leq r \leq 1$.



Prove that the
$$\lim_{ n \rightarrow \infty} \int_{E_n} f(x) dx= k \int_{[0,1]} f(x) dx,$$
where $f \in L^1([0,1])$.



I have attempted the following.
$$k \int_{[0,1]} f(x) dx = \lim_{n \rightarrow \infty} \frac{m(E_n \cap [0,r])}{r} \int_{[0,1]} f(x)dx = \lim_{n \rightarrow \infty} \int_{[0,r]} \frac{\chi_{E_n} (t)}{r} dt \int_{[0,1]} f(x) dx= \int_{[0,1]} \int_{[0,r]}\frac{\chi_{E_n} (t) f(x)}{r} dt dx.$$



I want to somehow change the order of integration to change $\chi_{E_n}(t)$ to $\chi_{E_n}(x)$ (perhaps applying Fubini–Tonelli), but I don't think it's possible.



******* Applying the comments' suggestions *******



Since step functions are dense in $L^1$ there exist $\phi_l \nearrow f$. We note that we can apply DCT because $\phi_l \leq f$ and $f \in L^1$.



$$\lim_{n \rightarrow \infty} \int_{[0,1]} \chi_{E_n}(x) \lim_{l \rightarrow \infty} \phi_l(x)dx=
\lim_{n \rightarrow \infty} \lim_{l \rightarrow \infty} \int_{[0,1]} \chi_{E_n}(x) \phi_l(x)dx.$$




I want the change the order of the limits to say that
$$\lim_{n \rightarrow \infty} \lim_{l \rightarrow \infty} \int_{[0,1]} \chi_{E_n}(x) \phi_l(x)dx=
\lim_{l \rightarrow \infty} \lim_{n \rightarrow \infty} \int_{[0,1]} \chi_{E_n}(x) \phi_l(x)dx=
\lim_{l \rightarrow \infty} k \int_{[0,1]} \phi_l (x) dx =
k \int_{[0,1]} f(x)dx.$$



I don't know how to justify that I can indeed change the order of the limits.

Sunday, February 24, 2019

calculus - Differential Notation Magic in Integration by u-Substitution





I'm really confused now. I always thought that the differential notation $\frac{df}{dx}$ was just that, a notation.



But somehow when doing integration by u-substitution I'm told that you can turn something like this $\frac{du}{dx} = 2x\;$ into this $\;du = 2x\ dx$.



But how is that even possible? I understand that the notation comes from the fact that $\frac{du}{dx}$ actually means the limit of the difference in $u$ over the difference in $x$, with $\Delta x$ approaching $0$.




$$u'(x) = \frac{du}{dx} = \frac{du(x)}{dx} = \lim_{\Delta x\to 0} \frac{u(x+\Delta x)\ -\ u(x)}{(x+\Delta x) - x} = \lim_{\Delta x\to 0} \frac{u(x+\Delta x)\ -\ u(x)}{\Delta x}$$



So if $\frac{df}{dx}$ is just a notation for the limit mentioned above, then what is the underlying argument to say that you can treat $\frac{du}{dx}$ as if it were an actual fraction?



Appreciate the help =)


Answer



It is really just a notation. And the trick with the substitution e.g. $du = 2xdx$ does not have any mathematical meaning, it is just a convenient way of memorizing the integration by substitution rule/law/theorem:



$$\int_a^b f(\phi(t)) \phi'(t) dt = \int_{\phi(a)}^{\phi(b)} f(x)dx $$




Going from left to right you might want to make the substitution $x=\phi(t)$. Our mnemonic tells us that $\frac{dx}{dt} = \phi'(t)$, or in other words that you have to replace $\phi'(t)dt$ with $dx$ if you replace $\phi(t)$ with $x$. If you look again at the equation above you see that this mnemonic does a nice job, so we do not have to memorize this whole equation.



I do use the mnemonic but still I always keep this equation in mind when doing so.
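For a concrete instance of the mnemonic: to evaluate $\int_0^1 2x\cos(x^2)\,dx$, substitute $u=x^2$, so that $du=2x\,dx$ and the bounds $0,1$ map to $0,1$:
$$\int_0^1 \cos(x^2)\,2x\,dx=\int_0^1\cos u\,du=\sin 1.$$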


trigonometry - Simplify $\frac{\cos x+\cos y}{\sin(x+y)}$

I have tried all kinds of different methods and have spent more than an hour on this problem; I also looked on the internet for similar problems, but none worked. All I got in the end was $\dfrac{\csc y+\csc x}{\tan x+\tan y}$, but I'm pretty sure that doesn't answer the problem. I also tried using identities, but in the end they always became more complicated than I could clear up, so I pretty much did the opposite of simplifying.



Thank you for your comment. How do you use the sum-to-product rule on $\sin(x+y)$? I only found the identity $\sin(x+y)= \sin x\cos y+\cos x\sin y$.
Does $\sin(x+y)= \sin x+\sin y$? Or did you use the sum rule and then the product rule on the $\sin x\cos y$ identity? That didn't work for me.

real analysis - Compute $\sum\limits_{n=1}^\infty\frac{1}{(n(n+1))^p}$ where $p\geq 1$

I was recently told to compute some integral, and the result turned out to be a scalar multiple of the series $$\sum\limits_{n=1}^\infty\frac{1}{(n(n+1))^p},$$ where $p\geq 1$. I know it converges by comparison, since $$\dfrac{1}{(n(n+1))^p}\leq\dfrac{1}{n(n+1)}<\dfrac{1}{n^2},$$ and we know thanks to Euler that $$\sum\limits_{n=1}^\infty\frac1{n^2}=\frac{\pi^2}6.$$ I managed to work out the cases $p=1$ and $p=2$: the case $p=1$ is a telescoping sum, and my solution for $p=2$ is $$\frac13\pi^2-3,$$ which I obtained based on Euler's solution to the Basel Problem. I see no way to generalize the results to arbitrary values of $p$, however. Any advice on where to start would be much appreciated.


Also, in the absence of another formula, is the series itself a valid answer? Given that it converges, of course.

elementary number theory - Subsets of the Integers that Cover All Residues Modulo $n$



Call a subset of the integers special if, for every integer $n$, every residue from $0$ to $n-1$ is obtained when all elements from the subset are taken modulo $n$. Is the set of all non-decreasing integers special? Non-decreasing integers never have digits decrease from left to right in base $10$. Examples are 11134, 44489, and 567.




It seems as if it would be true. Since I saw no method for a direct proof, I first looked for counterexamples. Then I thought that perhaps no such integer is congruent to 10 modulo 11. I searched hard and found no such example. I'm not sure how to proceed with either route (i.e. either proving the statement directly or disproving it by proving that 10 modulo 11 is never obtained).


Answer



Dunno about $10$ mod $11$. But it's obvious that the set of non-decreasing integers is not special: consider $n=100$. The residue of an integer mod $100$ is given by its last two digits, and the last two digits of a non-decreasing integer can never form a decreasing pair such as $10$.


Saturday, February 23, 2019

calculus - Is a function $f$ integrable iff it satisfies the IVT?



I was having a conversation with another MSE user today and he mentioned that he had heard that "IVT was sufficient for a function to be integrable". When I asked if he meant it was "if-and-only-if" or just "if", he said "if-and-only-if".



This made no sense to me - I imagined a step-function which was 0 for $x<1$ and was 1 for $x\geq1$. This seemed to me to be integrable (with definite integral 1 between 0 and 2) and yet clearly did not satisfy the IVT.




He countered with "step functions are NOT integrable" and further said that he had found the place in the textbook which mentioned this and had verified that it WAS if-and-only-if... Before I could say anything, a movie we wanted to see started.



Where did I go wrong?



I assume he's correct because:
a) He's usually correct and b) I just started calculus and he's been doing it for years.



However, I thought that my step function was integrable, by the definition of integration that I learnt from Spivak's Calculus a la lower and upper sums.


Answer




You are correct: step functions don't satisfy the intermediate value property (unless there's only one step…), but they are always integrable. On the other hand, there are differentiable functions $f$ such that $f'$ is not Riemann-integrable. But, since it is a derivative, $f'$ has the intermediate value property (by Darboux's theorem).


real analysis - Provided $f$ is continuous at $x_0$, and $f(x+y) = f(x) + f(y)$ prove $f$ is continuous everywhere.

My attempt...



By definition, whenever $|x- x_0| < \delta$ we have $|f(x) - f(x_0)| < \epsilon$. Observing that


\begin{align} |f(x) - f(y)| &= |f(x -x_0 + x_0) + f(y)| = |f(x-x_0) + f(x_0) - f(y)| \newline \newline &\leq \epsilon + |f(y) - f(x_0)| = \epsilon + |f(y-x_0)|... \end{align}



Here I need to choose a delta that can depend on $\epsilon$ and $y$ such that whenever $0<|x-y|< \delta$ the above inequality is bounded by any $\epsilon$.


I'm also, in general, having trouble understanding this concept of continuity on an interval. I believe the structure of the definition is: for any $\epsilon> 0$ and any number $y$ in the interval, there exists a $\delta$ that depends on $\epsilon$ and $y$ such that for all $x$ in the interval and $|x - y | < \delta$ then $|f(x) - f(y)| < \epsilon$.


This definition makes me tempted to just choose y to be in the same delta neighborhood as $x$ in the given statement, but that constricts continuity to a small interval.



Edit: This question assumes no knowledge of Lebesgue measure theory.

real analysis - For Lebesgue integrable function $f$, show that there exists a sequence $\{x_n\} \rightarrow \infty$ such that $x_n| f(x_n) | \rightarrow 0$


As in the title, $f$ is any integrable function, and we want to show that there exists $x_n \rightarrow \infty$ as $n \rightarrow \infty$ with the desired property. So far I imagine that it's easy to find $x_n$ increasing slowly enough that the tails of $|f(x_n)|$ will go to zero faster than $x_n$ diverges as $n \rightarrow \infty$. However, it's not true that all integrable functions decay to zero, as I've come across some counterexamples of that. I want to use some consequences of the integrability of $f$ but haven't found the right way to apply them.


I also thought of choosing some slowly diverging $x_n$ like $\log(n)$ and supposing that $x_n |f(x_n)|$ does not converge to zero in order to obtain some contradiction of $f$ being integrable.


Any hints or directions are greatly appreciated!


Answer



The assertion is equivalent to


$$\liminf_{x \to \infty} \big( x |f(x)|\big) =0. \tag{1}$$



We prove by contradiction that $(1)$ holds for any Lebesgue integrable function $f$. Suppose that $(1)$ does not hold true, then we can find $c>0$ and $R>0$ such that


$$x |f(x)| \geq c \quad \text{for all $x \geq R$}.$$


Thus,


$$\int_{x \geq R} |f(x)| \, \lambda(dx) \geq c \int_{x \geq R} \frac{1}{x} \, \lambda(dx) = \infty$$


in contradiction to our assumption that $f$ is integrable.


Remark: The proof actually shows that


$$\liminf_{x \to \infty}\big( h(x) |f(x)| \big) = 0$$


for any Lebesgue integrable function $f$ and any function $h: \mathbb{R} \to (0,\infty)$ such that


$$\int_{x \geq 1} \frac{1}{h(x)} \, \lambda(dx)=\infty,$$


e.g. $h(x) := x |\log x|$.



Friday, February 22, 2019

real analysis - Sum involving $\cosh$ and $\sinh$



I would like to prove the equation




$$\frac{\sinh\left(\left (1-\frac{1}{2m} \right)x\right)}{\sinh(x/2m)}=1+ \sum\limits_{n=1}^{m-1}2\cdot \cosh\left(\left( 1-n/m \right)x\right),\quad \forall x > 0, m\in\mathbb{N}.$$



where the sum is evaluated as $0$ for $m=1$.



But I do not see how to approach this problem. Proof by induction does not seem appropriate, and it does not seem to be a simple application of the standard addition formulas. Does anyone have an idea?



Best wishes


Answer



Substitute $x$ by $mx$. Write $\frac{1}{2}(e^x-e^{-x})$ instead of $\sinh(x)$ and $\frac{1}{2}(e^x+e^{-x})$ instead of $\cosh(x)$. Then you can calculate the right sum by using $\sum\limits_{n=1}^{m-1}z^n=z\frac{z^{m-1}-1}{z-1}$. At the end substitute $x$ by $\frac{x}{m}$.
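Carrying out that sketch: after the substitution $x\mapsto mx$, the right-hand side is a two-sided geometric sum,
$$1+ \sum\limits_{n=1}^{m-1}2\cosh\big((m-n)x\big)=\sum_{k=-(m-1)}^{m-1}e^{kx}=\frac{e^{(m-\frac12)x}-e^{-(m-\frac12)x}}{e^{x/2}-e^{-x/2}}=\frac{\sinh\big((m-\frac12)x\big)}{\sinh(x/2)},$$
which is exactly the left-hand side after the same substitution; replacing $x$ by $\frac{x}{m}$ recovers the original identity.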


linear algebra - Find the eigenvalues of matrix $A$ without using the characteristic polynomial





Let $A$ be the following matrix:



$A=\begin{bmatrix}
4&1&-1\\
2&5& -2\\
1&1&2\\
\end{bmatrix}$



Find the eigenvalues of $A$ if you know that the algebraic multiplicity of one eigenvalue is $2$, but you must not use the characteristic polynomial.





I have no idea how to solve this, because if I use the trace and determinant I still get a third-degree polynomial, which is just the characteristic polynomial again. If I add $A^T$ to $A$ I get a symmetric matrix which is positive definite, so its eigenvalues are positive; maybe I can use the spectral theorem because $A+A^T$ is symmetric, but I still need eigenvectors, so that leads nowhere. Do you know something?


Answer



Using the usual dodge of trying out a few simple linear combinations of the columns, one can quickly discover that by luck or by design, $(1,0,1)^T$ is an eigenvector with eigenvalue $3$. (I checked that combination first since the $2$ and $-2$ in the second row cancel.) Comparing this to $\operatorname{tr}A=11$, there are two possibilities: if $3$ is the double eigenvalue, then the other one must be 5; if the other eigenvalue is the double, then it must be $4$. Test these against $\det A$ to find the correct one.
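For concreteness, checking the eigenvector and settling the two cases against the determinant:
$$A\begin{pmatrix}1\\0\\1\end{pmatrix}=\begin{pmatrix}3\\0\\3\end{pmatrix}=3\begin{pmatrix}1\\0\\1\end{pmatrix},\qquad \det A=45=3\cdot3\cdot5\ne 3\cdot4\cdot4,$$
so $3$ is the double eigenvalue and the spectrum is $\{3,3,5\}$.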


Thursday, February 21, 2019

limits - Avoid L'hopital's rule




$$\lim_{x\to 0} {\ln(\cos x)\over \sin^2x} = ?$$



I can solve this by using L'Hopital's rule but how would I do this without this?


Answer



$$\frac{\log\left(\cos\left(x\right)\right)}{\sin^{2}\left(x\right)}=\frac{1}{2}\frac{\log\left(1-\sin^{2}\left(x\right)\right)}{\sin^{2}\left(x\right)}=-\frac{\sin^{2}\left(x\right)+O\left(\sin^{4}\left(x\right)\right)}{2\sin^{2}\left(x\right)}\stackrel{x\rightarrow0}{\rightarrow}-\frac{1}{2}.$$


reference request - Is this divisibility test for 4 well-known?


It has just occurred to me that there is a very simple test to check if an integer is divisible by 4: take twice its tens place and add it to its ones place. If that number is divisible by 4, so is the original number.



This result seems like something that anybody with an elementary knowledge of modular arithmetic could realize, but I have noticed that it is conspicuously missing on many lists of divisibility tests (for example, see here, here, here, or here). Is this divisibility test well-known?


Answer



Yes, it is well known; do you know modular arithmetic? Assuming you do, we have a number $abc=a\cdot 10^2+b\cdot 10^1+c\cdot 10^0$. Now $$a\cdot 10^2+b\cdot 10^1+c\cdot 10^0\equiv 2\cdot b+c\pmod{4}.$$ Many people know the multiples of $4$ for numbers less than $100$, so it is commonly just said if the last two digits (as a number) is divisible by $4$, then the number is divisible by $4$.
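A quick exhaustive check of the test in Python (a sketch):

def div4(n):
    # twice the tens digit plus the ones digit
    tens, ones = (n // 10) % 10, n % 10
    return (2 * tens + ones) % 4 == 0

# agrees with the usual definition on the first 100000 integers
assert all(div4(n) == (n % 4 == 0) for n in range(100000))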


sequences and series - How to derive the formula for the sum of the first $n$ perfect squares?

How do you derive the formula for the sum of the series when $S_n = \sum_{j=1}^n j^2$?



The relationship $(n, S_n)$ can be determined by a polynomial in $n$. You are supposed to use finite differences to determine the degree and then use a calculator to derive the function. But how?
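For instance, tabulating the first few values and their successive differences (a sketch of the finite-difference step):
$$\begin{array}{c|ccccc} S_n & 1 & 5 & 14 & 30 & 55\\ \Delta & & 4 & 9 & 16 & 25\\ \Delta^2 & & & 5 & 7 & 9\\ \Delta^3 & & & & 2 & 2 \end{array}$$
The third differences are constant, so $S_n$ is a cubic in $n$; fitting $an^3+bn^2+cn+d$ to the first four values gives $S_n=\frac{n(n+1)(2n+1)}{6}$.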

algebra precalculus - Why does $\frac{1}{x\sqrt{x}}= \frac{\sqrt{x}}{x^2}$?



A homework question recently asked for me to simplify:



$\frac{1}{\sqrt{7}} \div {7}$



It's easy to see that this becomes




$\frac{1}{7\sqrt{7}}$



But according to wolfram alpha this is also equal to $\frac{\sqrt{7}}{49}$.



What sequence of steps can I use to get the second representation of this quantity from the first?


Answer



$\dfrac{1}{\sqrt{7}}\div7=\dfrac{1}{7\sqrt{7}}=\dfrac{1}{7\sqrt{7}}\cdot\dfrac{\sqrt{7}}{\sqrt{7}}=\dots$


Wednesday, February 20, 2019

sequences and series - Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)

As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$ However, Euler was Euler and he gave other proofs.


I believe many of you know some nice proofs of this, can you please share it with us?

integration - A 'complicated' integral: $\int\limits_{-\infty}^{\infty}\frac{\sin(x)}{x}$




I am calculating the integral $\displaystyle \int \limits_{-\infty}^{\infty}\dfrac{\sin(x)}{x}\,dx$ and I don't seem to be getting an answer.




When I integrate by parts twice, I get:
$$\displaystyle \int \limits _{-\infty}^{\infty}\frac{\sin(x)}{x}dx = \left[\frac{\sin(x)\ln(x) - \frac{\cos(x)}{x}}{2}\right ]_{-\infty}^{+\infty}$$



What will be the answer to that ?


Answer



Hint: From the viewpoint of improper integrals, or in the sense of the Cauchy principal value, the integral is legitimate. Integrate by parts.
\begin{align}
\int \limits_{-\infty}^{\infty}\dfrac{\sin(x)}{x} \,\mbox{d} x
&=\lim_{t\to\infty}\int \limits_{-t}^{\frac{1}{t}}\dfrac{\sin(x)}{x} \,\mbox{d} x
+\lim_{t\to\infty}\int \limits_{\frac{1}{t}}^{t}\dfrac{\sin(x)}{x} \,\mbox{d} x
\\
&=\lim_{t\to\infty}\int \limits_{-t}^{\frac{1}{t}}\sin(x)(\log x)^\prime \,\mbox{d} x
+\lim_{t\to\infty}\int \limits_{\frac{1}{t}}^{t}\sin(x)(\log x)^\prime\,\mbox{d} x
\end{align}


Theory of removing alternate digits



It has been many years since I noticed something bizarre in our number system. It has caught my eye since then. In the beginning, I discarded it as something irrelevant, obvious maybe and something that was a mere illusion.




Start with a number; any four-digit number will explain it best and easiest. For example $6173$ (any random number). First remove the alternate digits from the right side, i.e. the number becomes $67$ (removing $3$ and $1$). Write $6173$ in front of it again. It becomes $617367$. Repeat the steps.
$$1.\ 6173 \to 67$$
$$2.\ 617367 \to 676$$
$$3.\ 6173676 \to 137$$
$$4.\ 6173137 \to 133$$
$$5.\ 6173133 \to 133$$



In four steps, we got a number 133 which will be repeated forever. I like to call this 'purest form of a number'. This is applicable to any digit number, any number till infinity.



Further, I observed that single digit numbers cannot be made any more 'pure'. They reach their purest form in one step. 2 digit numbers reach their purest form in 2 steps, 3 digit numbers in 4 steps, 4 digit numbers also in 4 steps. 5 digit ones in 5, 6 in 6, 7 in 6...



Writing them up in a series:
$$1, 2, 4, 4, 5, 6, 6...$$




Alternate odd digit numbers don't purify in as many number of steps.



Now,1 digit numbers have 1 digit pure number, 2 have 1, 3 have 2, 4 have 3, 5 have 4, 6 have 5, 7 have 6 and so on.



This pattern becomes:
$$1, 1, 2, 3, 4, 5, 6...$$



I studied removing alternate digits from the left side too. It is almost the same. Also, in the second last step of reaching the pure forms, the numbers get very very close in any instance. In the above example, 137 and 133.




My question is:



(a) Is this actually a good observation or just an obvious fact?
(b) In any case, why are we sticking to a number which doesn't change after a few steps?


Answer



For the number of digits, suppose your original number has $n$ digits and the pure number has $p$ digits. When you prepend the original number you get $n+p$ digits. If $p=n-1$ or $p=n$ you will then strike out $n$ digits, leaving you again with $p$. On the first strike you bring the number of digits below $n$ and can never get back. That justifies (with $n=1$ a special case) your number of digits observation.



Once you get to $p$ digits there are only $10^p$ numbers to choose from, so you will eventually cycle. We can easily find the only 1-cycle. We will show an example for a starting seven digit number. The starting number might as well be $0123456$. We know the pure number will have six digits. Let them be $abcdef$. When you prepend the original number you get $0123456abcdef$. After you strike out the digits you have $135ace$, which we can equate to $abcdef$. This gives $a=1, b=3, c=5, d=a=1, e=c=5, f=e=5$, for a final answer of $135155$ for seven digit numbers. You can do the same for any length you like.



Each number goes through two phases in getting to the pure number. The first is to grow the prepended version in length up to the required amount. That takes about $\log_2 n$ iterations. In the seven digit example it goes from $7$ to $10$ because you strike four the first time, then to $12$, then to $13$. After that the length stays constant and the numbers shift in from the left, again taking about $\log_2 n$ iterations. The first $n$ shift in the first time, then the next $n/2$ are shifted in, then the next $n/4$. The total number of iterations is about $2 \log_2 n$. I haven't thought about how the rounding affects this, which is where "about" comes from.




The pure number will start with the digits from the odd positions of the starting number, followed by the digits that are $1$ or $3 \pmod 4$ depending on whether $n$ is even or odd, and so on.
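A small Python sketch of the process (the digit-striking convention is as in the question: strike alternate digits starting from the rightmost):

def strike(s):
    # keep every other digit counting from the right, dropping the rightmost
    return s[::-1][1::2][::-1]

def purify(n):
    s = str(n)
    cur, seen = strike(s), []
    while cur not in seen:
        seen.append(cur)
        cur = strike(s + cur)
    return cur  # first repeated value; a fixed point when the cycle has length 1

print(purify(6173))  # prints 133, as in the example above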


Tuesday, February 19, 2019

calculus - Show that $\lim_{n\to\infty}\frac{\ln(n!)}{n} = +\infty$




Show that
$$\lim_{n\to\infty}\frac{\ln(n!)}{n} = +\infty$$




The only way I've been able to show that is using Stirling's approximation:
$$
n! \sim\sqrt{2\pi n}\left(\frac{n}{e}\right)^n
$$



Let:
$$\begin{cases}
x_n = \frac{\ln(n!)}{n}\\
n \in \Bbb N
\end{cases}$$



So we may rewrite $x_n$ as:
$$
x_n \sim \frac{\ln(2\pi n)}{2n} + \frac{n\ln(\frac{n}{e})}{n}
$$




Now using the fact that $\lim(x_n + y_n) = \lim x_n + \lim y_n$ :
$$
\lim_{n\to\infty}x_n = \lim_{n\to\infty}\frac{\ln(2\pi n)}{2n} + \lim_{n\to\infty}\frac{n\ln(\frac{n}{e})}{n} = 0 + \infty
$$



I'm looking for another way to show this, since Stirling's approximation had not yet been introduced at the point in the book where I took the exercise from.


Answer



Another way to show $$\lim_{n\to\infty}\frac{\ln(n!)}{n}=\infty$$ is to consider the following property of logarithms: $$\log(n!)=\log(n)+\log(n-1)+\cdots+\log(2)>\frac{n}{2}\log\left(\frac n2\right),$$ which holds because at least $n/2$ of the factors are each at least $n/2$. Now $$\frac{\log (n!)}{n}>\frac{\log(n/2)}{2}.$$ As $n\to\infty$, this clearly diverges to $+\infty$.


combinatorics - Choosing $n$ and $r$ so that $\binom{n}{r}$ approximates $\pi$



I came across a curious formula, while trying out different numbers in $n \choose r$.



$${7.5 \choose 7} \approx \pi $$



The occurrence of $\pi$ with factorials has been discussed before, such as is in Why is $\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$ ?



Using the gamma function or some other method, can we prove this approximate formula for $7.5 \choose 7$ ?




Also, is there any choice of $n$ and $r$ that yields $\pi$ exactly?


Answer



Not a coincidence!
$$\binom{7.5}{7}=\binom{7.5}{0.5} = \frac{\Gamma\left(\frac{17}{2}\right)}{\Gamma\left(\frac{3}{2}\right)\Gamma(8)}=\pi\cdot\left(\frac{16}{\pi\cdot 4^8}\binom{16}{8}\right)
$$
hence $\binom{7.5}{7}\approx \pi$ is equivalent to
$$ \frac{16}{\pi\cdot 4^8}\binom{16}{8}\approx 1$$
that is a consequence of
$$ \frac{1}{4^n}\binom{2n}{n}\approx\frac{1}{\sqrt{\pi n}},\qquad \frac{16}{\pi\sqrt{8\pi}}\approx 1 $$
so our approximation is essentially equivalent to $\pi^3\approx 32$, that follows from

$$ \frac{\pi^3}{32}=\sum_{n\geq 0}\frac{(-1)^n}{(2n+1)^3} $$
proved here. Actually, the last identity implies the tighter (and somewhat nicer) approximation
$$ \pi \approx 31^{1/3}. $$
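One can confirm the size of the approximation numerically via the gamma function (a quick sketch):

import math

# binom(7.5, 7) = Gamma(8.5) / (Gamma(1.5) * Gamma(8))
print(math.gamma(8.5) / (math.gamma(1.5) * math.gamma(8)))  # 3.14209...
print(math.pi)                                              # 3.14159...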


Sunday, February 17, 2019

cardinals - Finding a bijection between set of infinite sequences of $0,1,2$ and the open interval $(0,1)$

I've been asked to find a bijection between $\{0,1,2\}^\mathbb{N}$, the set of infinite sequences of $0,1$ and $2$, and the open interval $(0,1)$, and am told that it follows a similar logic to the proof creating a bijection between $10^\mathbb{N}=\{0,1,2,3,4,5,6,7,8,9\}^\mathbb{N}$ and the open interval $(0,1)$.



I can find the bijection between $10^\mathbb{N}=\{0,1,2,3,4,5,6,7,8,9\}^\mathbb{N}$ and $(0,1)$ by picking some $a \in 10^\mathbb{N}$ and showing that it converges to some number $b \in (0,1)$. Noticing that it is possible for two sequences to converge to the same number $b \in (0,1)$, such as $0.5$ and $0.4\bar{9}$, we create $A=$ "set of sequences with two representations" and $B=$ "numbers with 2 representations". I can then show that these two sets have a bijection between them without much difficulty.



What I do not understand is how to transfer this framework once we no longer have the ability to create an infinite sequence corresponding to any number in $(0,1)$ and I am struggling to find a function that allows me to do so. Any tips would be appreciated.

Induction proof for a summation: $\sum_{i=1}^n i^3 = \left[\sum_{i=1}^n i\right]^2$




Prove by induction: $\sum_{i=1}^n i^3 = \left[\sum_{i=1}^n i\right]^2$. Hint: Use $k(k+1)^2 = 2(k+1)\sum i$.



Basis: $n = 1$ $\sum_{i=1}^1 i^3 = \left[\sum_{i=1}^1 i\right]^2 \to 1^3 = 1^2 \to 1 = 1$.



Hypothesis: Assume true for all $n \le k$.



So far I have the following:




$$\sum_{i=1}^{k+1} i^3 = (k+1)^3 + \sum_{i=1}^k i^3$$



$$(k+1)^3 + \left[\sum_{i=1}^k i\right]^2$$


Answer



For $n=k+1$, $$\sum_{i=1}^{k+1}i^3 = \sum_{i=1}^{k}i^3+(k+1)^3=(\sum_{i=1}^{k}i)^2+(k+1)^3=(\sum_{i=1}^{k}i)^2+k(k+1)^2+(k+1)^2$$



Now using the Hint: $k(k+1)^2 = 2(k+1)\sum i$.



$$=(\sum_{i=1}^{k}i)^2+2(k+1)\sum_{i=1}^k i+(k+1)^2=(\sum_{i=1}^{k+1}i)^2$$


real analysis - If a function is continuous and differentiable everywhere is the derivative continuous?



Suppose $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$. Does it follow that $f'$ is continuous on $(a,b)$?


Answer



The function,




$$f(x)=\begin{cases}
x^2\sin\frac{1}{x} & \text{ if } x\neq 0 \\
0 & \text{ if } x= 0
\end{cases}$$



is differentiable on $\mathbb{R}$ (at $0$, $\left|\frac{f(h)-f(0)}{h}\right|\le|h|\to 0$, so $f'(0)=0$).



But,



$$f'(x)=\begin{cases}
2x\sin\frac{1}{x}-\cos\frac{1}{x} & \text{ if } x\neq 0 \\
0 & \text{ if } x= 0
\end{cases}$$



is not continuous at $x=0$, since $\lim_{x\to 0}\cos\frac{1}{x}$ does not exist.


combinatorics - x tickets, y prizes, can only win once - what is probability given distribution of tickets

Given this problem:




There is a draw for $x$ prizes with $y$ tickets sold. Each person can own one or more tickets but can only win one prize (i.e. once one of their tickets wins, all of their other tickets are void).



I have seen a similar question asked before but have not really seen a good answer.



If I know the number of people and the number of tickets that each person has how would I calculate the probability of winning a prize for each person?



For example:



6 prizes to be won
25 tickets sold
2 people have 4 tickets
2 people have 2 tickets
13 people have 1 ticket each




How do I calculate the probability that 1 of the people with 4 tickets will win a prize, given that each person can only win once.



My starting point was:



$1 - \frac{\binom{21}{6}}{\binom{25}{6}}$



which would tell me the probability of the person who has 4 tickets winning. However this is naive and is only correct when there are 21 people each with 1 ticket (the numerator in this fraction) whereas I have 13 people with 1, 2 people with 2 and 1 other person with 4 .



Because there are other people who have more than 1 ticket I need to calculate how many of the $\binom{21}{6}$ selections are invalid (i.e. contain more than 1 ticket for a player)




To do this I found all of the selections that didn't have more than 1 ticket for a player:



$\binom{13}{6} + \binom{13}{5}\binom{2}{1} + \binom{13}{5}\binom{2}{1} + \binom{13}{5}\binom{4}{1} + \binom{13}{4}\binom{2}{1}\binom{2}{1} + \binom{13}{4}\binom{4}{1}\binom{2}{1} + \binom{13}{4}\binom{4}{1}\binom{2}{1} + \binom{13}{3}\binom{4}{1}\binom{2}{1}\binom{2}{1} =30,888$



Then I found the selections which did have multiple tickets for the same player:



Selections with multiple player tickets = $\binom{21}{6} - 30,888 = 23,376$



and then I used this in my initial example:




$1 - \frac{30,888}{(\binom{25}{6} - 23,376)}$



which gives me a probability of 79.99% that one of the players with 4 tickets wins one of the 6 prizes.



I am not 100% sure I haven't made a mistake, and I'm also sure there must be a better way to calculate this.



I would be interested to know people's ideas.



EDIT
My suggested solution is incorrect, as it assumes that a selection containing more than one ticket for a person results in no prizes awarded (i.e. is invalid). In fact, if four of the tickets drawn are owned by different people and the final two tickets belong to the same person, five prizes are still awarded, and an extra ticket (or tickets) would be selected to award the sixth prize. As a comment below states, the order is therefore important.

elementary number theory - Bad Fraction Reduction That Actually Works



$$\frac{16}{64}=\frac{1\rlap{/}6}{\rlap{/}64}=\frac{1}{4}$$




This is certainly not a correct technique for reducing fractions to lowest terms, but it happens to work in this case, and I believe there are other such examples. Is there a systematic way to generate examples of this kind of bad fraction reduction?


Answer



It's easy to find them all. Suppose $\rm\: (10\ a + n)/(10\ n + b) = a/b\:.\:$ Thus $\rm\ (10\ a-b)\ n = 9\:a\:b\:.$



Case 1: $\rm\:(9,n) = 1\::\ \: 9\ |\ 10\:a-b\ \Rightarrow\ 9\ |\ a-b\ \Rightarrow a=b\ \Rightarrow\ 9\:a\:n = 9\:a^2\ \Rightarrow\ n=a=b\: $ (trivial)



Case 2: $\rm\:(9,n) = 9\::\ 10\:a-b = a\:b\ \Rightarrow\ a|b,\ 10 = (b/a)\:(a+1)\:$ so $\rm\ a,b = 1,5\:$ or $\: 4,8\:$



which yields the solutions: $\:\ 19/95 = 1/5\:,\:$ and $ 49/98 = 1/2\:.\ $ Similar analysis of the remaining




Case 3: $\rm\: (9,n) = 3\::\: $ yields $\:16/64 = 1/4\:,\:$ and $\: 26/65 = 2/5\:.$
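A brute-force confirmation that these four are the only nontrivial two-digit cases (a sketch):

# search (10a+n)/(10n+b) = a/b with the shared digit n cancelled
for a in range(1, 10):
    for b in range(1, 10):
        for n in range(1, 10):
            if a != b and (10 * a + n) * b == (10 * n + b) * a:
                print(f"{10 * a + n}/{10 * n + b} = {a}/{b}")
# prints 16/64 = 1/4, 19/95 = 1/5, 26/65 = 2/5, 49/98 = 4/8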


Saturday, February 16, 2019

Why does this pattern occur when using modular arithmetic against set of prime numbers?



I have been recently playing around with number theory and going through the project Euler problems. So I am very new to a lot of these things. I apologize for not knowing how to look up my answer. This is kinda a two part question I think.



First I created a list of prime numbers from 1 to ten million. Then I looped through this list up to about 350,000. In this loop I created a variable X for the prime number I was on, then another variable Y for the prime at index x + primeList[x]. Each time I did this I calculated Y mod X and saved that into a new list. I graphed that and got a strange pattern that I'm sure has to do with some basic concept of modular arithmetic that I just don't understand. I have included a screenshot of my graph below.



My python code:




# primeList: precomputed list of primes up to ten million (built earlier)
primeModList = []
for x in range(1, 100000):
    start = primeList[x]
    # the prime sitting primeList[x] positions further along the list
    mod = primeList[x + primeList[x]]
    result = mod % start
    primeModList.append(result)


As the second part of my question. I tried creating a pseudorandom list of numbers to kind of simulate the distributions of primes (I do understand this does not really work, but not sure of any other way to do it). I ran that same list through the same process and did not achieve the same results. Although if I increased my randomList[value] by the primeList numbers I did achieve the same result.




My code for this was:



from random import randint

randomList = []
for x in xrange(10000000):
    n = randint(0, 10000000)
    randomList.append(n)

randomList.sort()

ranModList = []
for x in range(1, 100000):
    start = randomList[x]
    mod = randomList[x + randomList[x]]
    result = mod % start
    ranModList.append(result)


To reiterate, I want to know why this pattern shows up, and what major difference causes it to show up only when indexing by numbers from my prime list and not my random list.



My graph of the prime number set in blue, and my set of my remainders in green.


Answer



Let $p_n$ be the $n$'th prime, and let $y_n = p_{n + p_{n}} \bmod p_n$, i.e. $p_{n+p_{n}} = x_n p_n + y_n$ where $0 \le y_n < p_n$ and $x_n = \lfloor p_{n+p_n}/p_n \rfloor$. Now the point is that $r_n = p_{n+p_n}/p_n$ will tend to grow, but slowly: $p_n \sim n \log n$, $p_{n+p_n} \sim (n + p_n) \log(n + p_n) \sim n (\log n)^2$, so $r_n \sim \log n$. $x_n = \lfloor r_n \rfloor$ will typically be constant for long intervals. For example, it is $12$ for $3070 \le n \le 7073$. On an interval where $x_n = c$, $y_n = (r_n - c) p_n$ will tend to be increasing, from near $0$ at the beginning of the interval to near $p_n$ at the end.


linear algebra - Prove that the matrix A is diagonalizable if $-4bc < (a-d)^2$



I'm trying to prove that matrix A is diagonalizable
if $-4bc < (a - d)^2$ and is not diagonalizable if $-4bc > (a - d)^2$



$A = \begin{bmatrix}
a & b\\
c & d
\end{bmatrix}$



I'm not sure how to start. I was thinking of taking the inverse of $A$, but I'm not sure if that's the right approach.


Answer



I am assuming that you want to work in the field of real numbers. In this case you have the following



1) If the matrix is diagonalizable, then it must have both the roots real



2) If the matrix has eigenvalues that are not repeated, then it can be diagonalized with the following additional comment: If the eigenvalues are not real, the diagonalization would require complex numbers.




3) The only real matrices with repeated eigenvalues that are also diagonalizable are multiples of the identity matrix, i.e. diagonal matrices with equal values on the diagonal.



To OP:
In one of your comments you have
$$
\lambda^2-(a+b)\lambda+ad+bc$$
It should be
$$
\lambda^2-(a+d)\lambda+ad-bc$$

So for real roots that are distinct, you need
$$
(a+d)^2 -4(ad - bc) = a^2 + 2 a d + d^2 - 4 a d + 4bc = (a-d)^2 + 4 bc \gt 0$$
or
$$ (a-d)^2 \gt -4bc$$



Note $a=d, b=1, c=0$ is not diagonalizable. So you cannot change the $\gt$ to $\ge$.
Also, $a=d, b=c=0$ is diagonalizable. So
$$
\text{If }(a-d)^2 = -4bc, \text{ the matrix is diagonalizable if and only if $b=0$ and $c=0$}$$




** Added in reply to OP **



Let $\Delta$ be the discriminant



There are 3 cases:

1. $\Delta < 0$. Both eigenvalues are complex, so the matrix cannot be diagonalized (over the reals).

2. $\Delta > 0$. Both eigenvalues are real and distinct (different). The matrix can be diagonalized.

3. $\Delta = 0$. Both eigenvalues are equal. Then there are two cases: the matrix is diagonal, i.e. $b=c=0$, or the matrix is not diagonal, i.e. $b \ne 0$ or $c \ne 0$. In the first case the matrix is diagonalizable (it is already diagonal!). In the second case the matrix cannot be diagonalized.


Friday, February 15, 2019

abstract algebra - Why is $n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7}$ never zero?

Here the $n_i$ are integers, and not all of them are zero.


It is natural to conjecture that similar statement holds for even more prime numbers. Namely,


$$ n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7} + n_5 \sqrt{11} +n_6 \sqrt{13} $$ is never zero either.


I am asking because this is used in some numerical algorithm in physics.

combinatorics - A generalized combinatorial identity for a sum of products of binomial coefficients


I have the following question.


For given natural numbers $n$ and $d$, let $a_1,a_2,..., a_r$ be fixed integers such that $a_1+\cdots+a_r=d$. Let $A=\{(i_1,..,i_r)~|~0\le i_j\le n~ \text{and}~ i_1+\cdots+i_r=n\}$.


$$\underset {(i_1,...,i_r)\in A} \sum {i_1\choose a_1}{i_2\choose a_2}\cdots{i_r\choose a_r}= {n+r-1\choose d+r-1}$$


Is the above combinatorial identity true? I know that it is true when $r$ is $2$.


Answer



Yes, it is true. To show this, use induction on $r$. (As you say, it's true for $r=2$ and obviously, it's also true for $r=1$). Then


$$\sum_{(i_1,\ldots,i_r)\in A}\binom{i_1}{a_1}\cdots \binom{i_r}{a_r}=\sum_{i_r=0}^n\sum_{\stackrel{i_1+\cdots+i_r=n}{0\le i_1,\ldots,i_{r-1}\le n}}\binom{i_1}{a_1}\cdots \binom{i_{r-1}}{a_{r-1}}\binom{i_r}{a_r}=\sum_{i_r=0}^n\binom{i_r}{a_r}\sum_{\stackrel{i_1+\cdots+i_{r-1}=n-i_r}{0\le i_1,\ldots,i_{r-1}\le n}}\binom{i_1}{a_1}\cdots\binom{i_{r-1}}{a_{r-1}}.$$



Using the induction hypothesis, this equals


$$\sum_{i_r=0}^n\binom{i_r}{a_r}\binom{n-i_r+r-2}{d-a_r+r-2},$$


and then applying the case $r=2$ (i.e., when you have two summands $j_1+j_2=m$ and $b_1+b_2=e$) yields that the latter is equal to


$$\binom{n-i_r+r-2+i_r+(2-1)}{d-a_r+r-2+a_r+(2-1)}=\binom{n+r-1}{d+r-1}$$


as requested.



EDIT To clarify, the case $r=2$ is the statement


$$\sum_{\stackrel{a+b=m}{0\le a,b\le m}}\binom{a}{x}\binom{b}{y}=\binom{m+1}{x+y+1},$$


the LHS of which is equal to


$$\sum_{b=0}^m\binom{b}{x}\binom{m-b}{y}.$$


calculus - Does L'Hôpital's work the other way?




Hello fellows,



As referred in Wikipedia (see the specified criteria there), L'Hôpital's rule says,



$$
\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}
$$



As




$$
\lim_{x\to c}\frac{f'(x)}{g'(x)}=
\lim_{x\to c}\frac{\int f'(x)\ dx}{\int g'(x)\ dx}
$$



Just out of curiosity, can you integrate instead of taking a derivative?
Does



$$

\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int f(x)\ dx}{\int g(x)\ dx}
$$



work? (given the specifications in Wikipedia only the other way around: the function must be integrable by some method, etc.) When? Would it have any practical use? I hope this doesn't sound stupid, it just occurred to me, and I can't find the answer myself.





Edit




(In response to the comments and answers.)



Take 2 functions $f$ and $g$. When is



$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int_x^c f(a)\ da}{\int_x^c g(a)\ da}
$$






true?



I am not saying that it always works; however, it sometimes may help. Sometimes one can apply l'Hôpital's even when an indeterminate form isn't reached. Maybe this only works in exceptional cases.



Most functions are simplified by taking their derivative, but it may happen by integration as well (say $\int \frac1{x^2}\ dx=-\frac1x+C$, which is simpler). In a few of those cases, integrating the functions in both the numerator and denominator may simplify things.



What would those (hypothetical) functions need to make it work? And even in those cases, is it ever useful? How? Why/why not?


Answer



I recently came across a situation where it was useful to go through exactly this process, so (although I'm certainly late to the party) here's an application of L'Hôpital's rule in reverse:




We have a list of distinct real numbers $\{x_0,\dots, x_n\}$.
We define the $(n+1)$th nodal polynomial as
$$
\omega_{n+1}(x) = (x-x_0)(x-x_1)\cdots(x-x_n)
$$
Similarly, the $n$th nodal polynomial is
$$
\omega_n(x) = (x-x_0)\cdots (x-x_{n-1})
$$

Now, suppose we wanted to calculate $\omega_{n+1}'(x_i)/\omega_{n}'(x_i)$ when $0 \leq i \leq n-1$. Now, we could calculate $\omega_{n}'(x_i)$ and $\omega_{n+1}'(x_i)$ explicitly and go through some tedious algebra, or we could note that because these derivatives are non-zero, we have
$$
\frac{\omega_{n+1}'(x_i)}{\omega_{n}'(x_i)} =
\lim_{x\to x_i} \frac{\omega_{n+1}'(x)}{\omega_{n}'(x)} =
\lim_{x\to x_i} \frac{\omega_{n+1}(x)}{\omega_{n}(x)} =
\lim_{x\to x_i} (x-x_{n}) = x_i-x_{n}
$$
It is important that both $\omega_{n+1}$ and $\omega_n$ are zero at $x_i$, so that in applying L'Hôpital's rule, we intentionally produce an indeterminate form. It should be clear though that doing so allowed us to cancel factors and thus (perhaps surprisingly) saved us some work in the end.



So would this method have practical use? It certainly did for me!







PS: If anyone is wondering, this was a handy step in proving a recursive formula involving Newton's divided differences.


Thursday, February 14, 2019

divisibility - Simpler ways to show that $n^2$ divides a polynomial?



I want to show that $n^2 \mid P(n)$, where $$P(n) = \frac{n^2(n+1)^2(n+2)(n+3)}{48}$$ for every odd positive integer $n$. The approach I took involved showing that $\cfrac{P(n)}{n^2}$ is always an integer (for such $n$), but then I had to create a polynomial even more complex and then prove nine different cases. While it did provide a valid proof (as far as I know), I have a feeling it was more work than I needed.



So my question is: are there "simpler" proofs to this problem, and what are their approaches/methods? By simpler I roughly mean: prove less cases, reduce the problem to a simpler form, etc; basically a solution that takes up less "space" on paper. (I know that's not the best explanation, sorry!)




Thank you very much!


Answer



I'm not sure what you did, but showing that $$\frac{P(n)}{n^2}=\frac{(n+1)^2(n+2)(n+3)}{48}$$ is always an integer if $n$ is odd is quite straightforward. You just have to show that $(n+1)^2(n+2)(n+3)$ is always divisible by $48=2^4\cdot 3$. It's always divisible by $3$ since one of $n+1,n+2,$ and $n+3$ is a multiple of $3$.



The factors of $2$ are a little more complicated but not bad. Since $n$ is odd, $n+1$ and $n+3$ are even, so $(n+1)^2(n+3)$ gives at least $3$ factors of $2$. Moreover, one of $n+1$ and $n+3$ is a multiple of $4$, which gives one extra factor of $2$. So in total there are at least $4$ factors of $2$.



The moral here is that when thinking about divisibility questions, factor. We keep the numerator of $\frac{P(n)}{n^2}$ in its factored form, so we can identify the contributions from each individual factor. And to test divisibility by $48$, we split it into its prime factorization so we can look for each prime factor separately.


linear algebra - Characteristic polynomial for A



Prove that the characteristic polynomial for $A = \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix}$, where $B$ and $C$ are square matrices, is the product of the characteristic polynomials of $B$ and $C$.



My attempt is the following. Let $A \in M_{n\times n}(\mathbb{F})$; the polynomial $f(t)=\det(A-\lambda I_n)$ is the characteristic polynomial of $A$. I'm not sure if what should be done is to prove that the determinant of a matrix $A$ is equal to the product of its eigenvalues?




$$ A=\begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix}, $$



$$ P_A(\lambda)=(A-\lambda I), $$
$$ P_A(\lambda)=\det(\begin{bmatrix}B-\lambda & 0 \\ 0 & C - \lambda \end{bmatrix}), $$
$$ P_A(\lambda)= (B-\lambda)(C-\lambda)-0, $$
$$ =BC-B\lambda -C\lambda +\lambda^2 $$
$$ P_A(\lambda)=\lambda^2 -\lambda(B+C) + BC. $$



Now, if I have $B \in M_{n\times n}(\mathbb{F})$ the polynomial $f(t)=\det(B-\lambda I_n)$ is the characteristic polynomial for $B$ and the same for $C$, $C \in M_{n\times n}(\mathbb{F})$ where its characteristic polynomial is $f(t)=\det(C-\lambda I_n)$.




The product of its characteristic polynomials is:



$$ (B-\lambda I_n)(C-\lambda I_n) $$
$$ (B-\lambda)(C-\lambda) $$
$$ =BC-B\lambda -C\lambda +\lambda^2 $$



But what do I do with the identity?



Please let me know if what I did is correct for the problem.



Answer



There's an error at the 3rd line of your computation: $\;\begin{bmatrix}B-\lambda & 0 \\ 0 & C - \lambda \end{bmatrix}$ is inconsistent: you cannot subtract a number from a matrix.
Also, if $A$ is a square matrix of size $n$, $B$ and $C$ certainly are not: they'll have sizes, say $p$ and $q$, such that $p+q=n$.



Hint:



A block diagonal square matrix (with block square matrices):
$$M =\begin{bmatrix}P & 0 \\ 0 & Q \end{bmatrix}$$
has determinant:

$$\det M=\det P\cdot\det Q.$$
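Applying the hint to $A-\lambda I_n$, which is itself block diagonal, gives the claim directly (a sketch):
$$\det(A-\lambda I_n)=\det\begin{bmatrix}B-\lambda I_p & 0\\ 0 & C-\lambda I_q\end{bmatrix}=\det(B-\lambda I_p)\cdot\det(C-\lambda I_q),$$
i.e. the characteristic polynomial of $A$ is the product of the characteristic polynomials of $B$ and $C$.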


Wednesday, February 13, 2019

calculus - I don't know how to solve this math sequence

I'm new here and I need help with these sequences. I'm in my first calculus class and everything up to now is great except for the part with all the sequences and series, especially these two. I tried doing them myself, but I couldn't find the answer. My teacher gave us a document with 20 questions, and I did everything else, but I'm stuck on these questions about the convergence, limits, and sum. Other than that, I can manage to do everything else by myself, so it would be very appreciated if you could help me out.


FYI I translated these math questions from French, so it might not be perfectly translated. Thanks.


Question 1 :




$$a _ { 1 } = 2 , \quad a _ { n + 1 } = \frac { 1 } { 3 - a _ { n } }$$


a) Prove that the sequence is decreasing


b) find $\lim _ { n \rightarrow \infty } a _ { n }$


c) Prove $0 < a _ { n } \leq 2$


d) Does $\sum _ { n = 1 } ^ { \infty } a _ { n + 1 }$ converge?



Thank you.

elementary number theory - What is the correct way of expressing remainder in Mathematics



$45/7 \quad$ remainder $=3$




What is the correct way of representing this mathematically? I am asking this question because on this site experts often use different ways to denote remainders. I list them below:



(a) $45$ mod $7$ $=3$



(b) $45$ mod $7$ $\equiv3$



(c) $45\%7$ $=3$ (I believe this is mostly for programming and cannot generally use for mathematics. there is a thread for it)



(d) $45\equiv 3\pmod 7$




It is true that we can easily understand from the last expression that $45$ divided by $7$ gives $3$ as remainder. But this relation is actually used to say that $45$ and $3$ give the same remainder when divided by $7$.



So, my understanding is that we can only use (a). Please tell me if I am right or wrong.


Answer



To capture the nature of division of a number $a$ by another number $b$ (which seems to be what you're trying to convey in (a)), we can write $$a = qb + r$$ where $q$ represents the unique quotient, and $r$ ($0\leq r\lt b$) represents the unique remainder.



We can also write $$a \equiv r \pmod b$$



The notation of the second form does not necessarily require that the '$r$' be such that $0\leq r \lt b$.



linear algebra - Finding characteristic polynomial of adjacency matrix




Short question I'm having a tad of difficulty with.



I'm trying to find the characteristic polynomial of a graph that is just a circle with n vertices and n edges.



I think the adjacency matrix should look something like this:



$\begin{pmatrix} 0 & 1 & 0 & 0 & \cdots & 1 \\[2mm]
1 & 0 & 1 & 0 & \cdots & 0 \\[2mm]
0 & 1 & 0 & 1 & \cdots & 0 \\

0 & 0 & 1 & 0 & \ddots & \vdots\\
\vdots & \vdots & \vdots& \ddots& \ddots& 1 \\[2mm]
1 & 0 & 0 & \cdots& 1 & 0\end{pmatrix}$



How do I find the characteristic polynomial of this matrix? The determinant is very difficult to calculate.


Answer



Your question is answered here (proposition 4):
http://www.math.cornell.edu/~levine/18.312/alg-comb-lecture-18.pdf



Instead of finding the determinant of the adjacency matrix of the cycle graph, we try to find the eigenvalues of the square matrix. To that end, we turn the problem into solving a linear recurrence.




Edit: Thanks to Marc's helpful comments: the linked notes consider the adjacency matrix of a path graph, whereas the OP is looking at the adjacency matrix of a cycle graph. The method of using linear recurrences to find the eigenvalues, however, is the same in both cases.
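For reference, here is a sketch of where that leads for the cycle graph, using its circulant structure: with $\omega=e^{2\pi i/n}$, each vector $v_k=(1,\omega^k,\omega^{2k},\dots,\omega^{(n-1)k})$ satisfies $(Av_k)_j=\omega^{(j-1)k}+\omega^{(j+1)k}=(\omega^k+\omega^{-k})(v_k)_j$ (indices mod $n$), so
$$\det(\lambda I-A)=\prod_{k=0}^{n-1}\left(\lambda-2\cos\frac{2\pi k}{n}\right).$$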



Tuesday, February 12, 2019

notation - How do you write a division of a division when it can't be simplified?



I have an equation that cannot be simplified, but looks ugly written out.



$N^y_j = \alpha_{y_j} + \Bigl\lfloor\frac{P^y_i}{\Bigl\lfloor{\frac{P^y_i}{N^y_j}\Bigr\rfloor}}\Bigr\rfloor$



I thought to use a one-line divide in the divisor which looks nicer aesthetically, but I'm not convinced:




$N^y_j = \alpha_{y_j} + \Bigl\lfloor\frac{P^y_i}{\lfloor{P^y_i \div N^y_j\rfloor}}\Bigr\rfloor$



What is the recommended way to write such an equation, please? Or can the equation be restructured somehow?


Answer



If $P$ and $N$ are natural number sequences, then you can use this instead:



$$N^y_j=\alpha_{y_j}+\Bigl\lfloor\frac{P^y_iN^y_j}{P^y_i-P^y_i\bmod{N^y_j}}\Bigr\rfloor$$
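A quick randomized check that the two expressions agree (a sketch, assuming $P$ and $N$ are integers with $P \ge N \ge 1$ so the inner floor is nonzero):

from random import randint

for _ in range(10000):
    N = randint(1, 1000)
    P = randint(N, 10**6)
    # floor(P / floor(P/N)) == floor(P*N / (P - P mod N)), since P - P % N = N * (P // N)
    assert P // (P // N) == (P * N) // (P - P % N)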


Last Value of an Arithmetic Sequence of a Particular Sum


I was able to derive the formula for summing consecutive integers:


sum $= \dfrac{n(n + 1)}{2}$; for example, $n = 4$ gives sum $= 10$.


Nothing difficult there, but then I would like a formula giving me the length/end point of an arithmetic sequence, starting with $1$, for a known sum of that sequence. I have tried, but my algebra skills are very rusty.


I have so far, for a sum of $10$, this:


$n^2 + n = 20$


What is the formula for finding the length of, or end point of, an arithmetic sequence, given the sum? If the sum is $10$, I would like the formula to return $4$. If the sum is $561$, then it should return $33$.



Thank you.


Answer



The equation has to be rearranged. Let $X$ be the sum. The equation is


$X=\frac{n\cdot (n+1)}{2} \quad | \cdot 2$


$2X=n^2+n \quad | -2X$


$ \color{red}1n^2+\color{red}1n\color{red}{-2X}=0$


Solve for $n$ by applying the quadratic formula.


The values for the parameters are: $a=1$, $b=1$ and $c=-2X$


$n_{1/2}=\frac{-1\pm \sqrt{1-4\cdot (-2X)}}{2}=\frac{-1\pm \sqrt{1+4\cdot 2X}}{2}$


You need only the positive value.



Thus $n_1=\boxed{n=\frac{-1+ \sqrt{1+8X}}{2}}$
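
In code, the boxed formula looks as follows (a small sketch of mine; math.isqrt keeps the arithmetic exact when $X$ is a triangular number):

    import math

    def sequence_length(X: int) -> int:
        """Return n with 1 + 2 + ... + n = X, assuming X is a triangular number."""
        return (-1 + math.isqrt(1 + 8 * X)) // 2

    print(sequence_length(10))   # 4
    print(sequence_length(561))  # 33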


calculus - If the function is continuous at some point, then is it necessary that limit should exist at that point?

If a function is continuous at some point, is it necessary that the limit exist at that point?




For example, $\sqrt{x}$ is continuous at $x=0$, but the (two-sided) limit doesn't exist there, since the left-hand limit is not defined.



But it seems odd to say that a function is continuous at a point where the limit doesn't exist.



Any inputs?

algebra precalculus - Pre Calculus Question

The problem is a precalculus problem.



$$\frac{\large\frac{1}{1+x+h} - \frac{1}{1+x}}{h}$$



I was wondering if I can use the distributive property by dividing out the denominators in the numerator. My plan was to combine the fractions in the numerator and then multiply by the reciprocal of $h$. However, since each denominator in the numerator is a sum, I am unsure how to do it.




I know I could simplify / reduce the expression by dividing the polynomial by a monomial, but I am confused about the steps and the concept. Can someone clarify?



Regards,



Math student
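
For what it's worth, the plan sketched above can be carried out as follows: combine the two fractions over the common denominator $(1+x+h)(1+x)$, then multiply by $\frac1h$:

$$\frac{\frac{1}{1+x+h} - \frac{1}{1+x}}{h}
=\frac{1}{h}\cdot\frac{(1+x)-(1+x+h)}{(1+x+h)(1+x)}
=\frac{1}{h}\cdot\frac{-h}{(1+x+h)(1+x)}
=\frac{-1}{(1+x+h)(1+x)}.$$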

Monday, February 11, 2019

complex analysis - Definite integral calculation with poles at 0 and $pm isqrt{3}$

$$\int_0^\infty \frac{\sin(2\pi x)}{x(x^{2}+3)}\,dx$$


I looked at $\frac{e^{2\pi i z}}{z^{3}+3z}$ and calculated the residues, but they don't get me the right answer. I used that $\int_{-\infty}^\infty f(z)\,dz = 2\pi i \sum_r \operatorname{Res}_{z_r} f + \pi i \operatorname{Res}_{0} f$, but my answer turns out wrong when I check with WolframAlpha.


Residue for $0$ is $1$, for $z=\sqrt{3}i$ it's $-\frac{e^{-2\pi}}{2}$ . . .


In a worse attempt I forgot $2\pi$ and used $z$ only (i.e. $\frac{e^{iz}}{z^{3}+3z}$), and the result was a little closer, but missing a factor of $2$ and an $i$.


Can anyone see the right way? Please do tell.
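
If the contour is closed in the upper half-plane with a small semicircle around the pole at $0$ (whose residue works out to $\frac13$ rather than $1$), the half-residue formula quoted above should give the value $\frac{\pi}{6}\bigl(1-e^{-2\sqrt{3}\pi}\bigr)$. A scipy sanity check of that value (my own sketch; the tail beyond $x=200$ is below $2\cdot10^{-5}$ since the integrand decays like $1/x^3$):

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.sin(2 * np.pi * x) / (x * (x**2 + 3))
    val, _ = quad(f, 0, 200, limit=2000)   # oscillatory, so allow many subintervals
    expected = (np.pi / 6) * (1 - np.exp(-2 * np.sqrt(3) * np.pi))
    print(val, expected)                   # both ≈ 0.5236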

combinatorics - Another Hockey Stick Identity


I know this question has been asked before and has been answered here and here.


I have a slightly different formulation of the Hockey Stick Identity and would like some help with a combinatorial argument to prove it. First I have this statement to prove: $$ \sum_{i=0}^r\binom{n+i-1}{i}=\binom{n+r}{r}. $$ I already have an algebraic solution here using the Pascal Identity: $$ \begin{align*} \binom{n+r}{r}&=\binom{n+r-1}{r}+\binom{n+r-1}{r-1}\\ &=\binom{n+r-1}{r}+\left[\binom{n+(r-1)-1}{r-1}+\binom{n+(r-1)-1}{r-2}\right]\\ &=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{r-1}+\left[\binom{n+(r-2)-1}{r-2}+\binom{n+(r-2)-1}{r-3}\right]\\ &\,\,\,\vdots\\ &=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{r-1}+\binom{n+(r-2)-1}{r-2}+\binom{n+(r-3)-1}{r-3}+\cdots+\left[\binom{n+1-1}{1}+\binom{n+1-1}{0}\right]\\ &=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{r-1}+\binom{n+(r-2)-1}{r-2}+\binom{n+(r-3)-1}{r-3}+\cdots+\binom{n+1-1}{1}+\binom{n-1}{0}\\ &=\sum_{i=0}^r\binom{n+i-1}{i}. \end{align*} $$


I have read both combinatorial proofs in the referenced answers above, but I cannot figure out how to alter the combinatorial arguments to suit my formulation of the Hockey Stick Identity. Basically, this formulation gives the "other" hockey stick. Any ideas out there?



Answer



Note that $\binom{n+r}{r}=\binom{n+r}{n}$ is the number of subsets of $\{1,2,\ldots,n+r\}$ of size $n$. On the other hand, for $i=0,1,2,\ldots,r$, $\binom{n+i-1}{i}=\binom{n+i-1}{n-1}$ is the number of subsets of $\{1,2,\ldots,n+r\}$ of size $n$ whose largest element is $n+i$.
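
As a sanity check, the identity itself is easy to verify by brute force (a small Python sketch, my own addition):

    from math import comb

    for n in range(1, 8):
        for r in range(8):
            assert sum(comb(n + i - 1, i) for i in range(r + 1)) == comb(n + r, r)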


calculus - Evaluating $limlimits_{ntoinfty} e^{-n} sumlimits_{k=0}^{n} frac{n^k}{k!}$



I'm supposed to calculate:




$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$



By using W|A, i may guess that the limit is $\frac{1}{2}$ that is a pretty interesting and nice result. I wonder in which ways we may approach it.


Answer



Edited. I justified the application of the dominated convergence theorem.



By a simple calculation,



$$ \begin{align*}
e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!}
&= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\
(1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\
&= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\
(2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\
&= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\
(3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du.
\end{align*}$$



We remark that





  1. In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.

  2. In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.

  3. In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.



Then, in view of Stirling's formula, it suffices to show that



$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$




The idea is to introduce the function



$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$



and pass to the pointwise limit of the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then



$$ \log g_n (u)
= n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u
= -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$




From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,



$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
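
Numerically, the quantity in question is exactly the probability that a Poisson random variable with mean $n$ is at most $n$, so its convergence to $\tfrac12$ can be watched directly (a scipy sketch, my own addition):

    from scipy.stats import poisson

    for n in (10, 100, 1000, 10_000):
        # e^{-n} * sum_{k=0}^n n^k / k!  ==  P(Poisson(n) <= n)
        print(n, poisson.cdf(n, n))   # ≈ 0.583, 0.527, 0.508, 0.503 -> 1/2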


calculus - Convergence of $sumfrac{|cos n|}{nlog n}$



I wonder if the series $$\sum_{n=1}^\infty\frac{|\cos n|}{n\log n}$$ converges.



I tried applying the condensation test, getting $$\sum\frac{2^n|\cos 2^n|}{2^n\log{2^n}}=\sum\frac{|\cos 2^n|}{n\log 2}$$ but I don't know how to show whether it converges.




Am I on the right track?


Answer



Note that



$$|\cos n| \geqslant \cos^2 n = \frac1{2} + \frac1{2}\cos(2n).$$



Hence



$$\sum_{k=2}^n \frac{|\cos k|}{k \log k}\geqslant \frac1{2}\sum_{k=2}^n \frac{1}{k \log k}+\frac1{2}\sum_{k=2}^n \frac{\cos(2k)}{k \log k}.$$




The series on the LHS diverges because the first sum on the RHS diverges by the integral test and the second sum converges by Dirichlet's test.


Intuition behind logarithm inequality: $1 - frac1x leq log x leq x-1$


One of fundamental inequalities on logarithm is: $$ 1 - \frac1x \leq \log x \leq x-1 \quad\text{for all $x > 0$},$$ which you may prefer write in the form of $$ \frac{x}{1+x} \leq \log{(1+x)} \leq x \quad\text{for all $x > -1$}.$$



The upper bound is very intuitive; it's easy to derive from the Taylor series as follows: $$ \log(1+x) = \sum_{n=1}^\infty (-1)^{n+1}\frac{x^n}{n} \leq (-1)^{1+1}\frac{x^1}{1} = x.$$


My question is: "what is the intuition behind the lower bound?" I know how to prove the lower bound of $\log (1+x)$ (maybe by checking the derivative of the function $f(x) = \frac{x}{1+x}-\log(1+x)$ and showing it's decreasing) but I'm curious how one can obtain this kind of lower bound. My ultimate goal is to come up with a new lower bound on some logarithm-related function, and I'd like to apply the intuition behind the standard logarithm lower-bound to my setting.


Answer



Take the upper bound: $$ \ln {x} \leq x-1 $$ Apply it to $1/x$: $$ \ln \frac{1}{x} \leq \frac{1}{x} - 1 $$ This is the same as $$ \ln x \geq 1 - \frac{1}{x}. $$


Sunday, February 10, 2019

integration - Show that $intlimits_2^{+infty}frac{sin{x}}{xln{x}}, rm dx$ is conditionally convergent



The integral $\displaystyle\int_{2}^{+\infty}\frac{\sin{x}}{x\ln{x}}\,dx$ is conditionally convergent.



I know that $\displaystyle\int_{2}^{+\infty}\frac{\sin{x}}{x}\, dx$
is conditionally convergent and $ {\forall}p > 1$,
$\displaystyle\int_{2}^{+\infty}\frac{\sin{x}}{x^p}\, dx$ is absolute convergent,
but $x\ln{x}$ lies between $x$ and $x^p$ in growth, so how can one prove that $\displaystyle\int_{2}^{+\infty}\frac{\sin{x}}{x\ln{x}}\, dx$ is conditionally convergent?



Answer



This is roughly the integral analogue of the alternating series test. Since proving its generalization causes little harm, let me actually show




Proposition 1. Suppose that $f : [a, \infty) \to \mathbb{R}$ satisfies the following two conditions:




  1. $f$ is monotone-decreasing, i.e., $f(x) \geq f(y)$ for all $a \leq x \leq y$.

  2. $\lim_{x\to\infty} f(x) = 0$.




Then



$$ \int_{a}^{\infty} f(x)\sin(x) \, \mathrm{d}x = \lim_{b\to\infty} \int_{a}^{b} f(x)\sin(x) \, \mathrm{d}x $$



converges. Moreover, this integral is absolutely convergent if and only if $\int_{a}^{\infty} f(x) \, \mathrm{d}x < \infty$.




The proof is quite simple. We first prove that the integral converges. Let $n$ be an integer so that $\pi n \geq a$. Then for $ b \geq \pi n$,




\begin{align*}
\int_{a}^{b} f(x)\sin(x) \, \mathrm{d}x
&= \int_{a}^{\pi n} f(x)\sin(x) \, \mathrm{d}x + \sum_{k=n}^{\lfloor b/\pi\rfloor - 1} \int_{\pi k}^{\pi(k+1)} f(x)\sin(x) \, \mathrm{d}x \\
&\quad + \int_{\pi\lfloor b/\pi\rfloor}^{b} f(x)\sin(x) \, \mathrm{d}x.
\end{align*}



Writing $N = \lfloor b/\pi \rfloor$ and defining $a_k$ by $a_k = \int_{0}^{\pi} f(x+\pi k)\sin(x) \, \mathrm{d}x$, we find that




  1. $a_k \geq 0$, since $f(x+\pi k) \geq 0$ for all $x \in [0, \pi]$.



  2. $a_{k+1} \leq a_k$, since $f(x+\pi k) \geq f(x+\pi(k+1))$ for all $x \in [0, \pi]$.


  3. $a_k \to 0$ as $k\to\infty$, since $a_k \leq \int_{0}^{\pi} f(\pi k) \sin (x) \, \mathrm{d}x = 2f(\pi k) \to 0$ as $k \to \infty$.


  4. By a similar computation as in step 3, we check that $\left| \int_{\pi N}^{b} f(x) \sin (x) \, \mathrm{d}x \right| \leq 2f(\pi N)$, and so, $\int_{\pi N}^{b} f(x) \sin (x) \, \mathrm{d}x \to 0$ as $b\to\infty$.


  5. We have



    $$ \sum_{k=n}^{N - 1} \int_{\pi k}^{\pi(k+1)} f(x)\sin(x) \, \mathrm{d}x = \sum_{k=n}^{N-1} (-1)^k a_k. $$



    So, by the alternating series test, this converges as $N\to\infty$, hence as $b \to \infty$.





Combining all of the above, it follows that $\int_{a}^{b} f(x)\sin(x) \, \mathrm{d}x $ converges as $b\to\infty$.



To show the second assertion, let $n$ still be an integer with $\pi n \geq a$. Then for $k \geq n$, integrating each side of the inequality $f(\pi(k+1))|\sin x| \leq f(x)|\sin x| \leq f(\pi k)|\sin x|$ for $x \in [\pi k, \pi(k+1)]$ gives



$$ 2f(\pi(k+1))
\leq \int_{\pi k}^{\pi(k+1)} f(x)|\sin(x)| \, \mathrm{d}x
\leq 2f(\pi k) $$



and similar argument shows




$$ \pi f(\pi(k+1))
\leq \int_{\pi k}^{\pi(k+1)} f(x) \, \mathrm{d}x
\leq \pi f(\pi k). $$



From this, we easily check that



$$ \frac{2}{\pi} \int_{\pi(n+1)}^{\infty} f(x) \, \mathrm{d}x
\leq \int_{\pi n}^{\infty} f(x)|\sin x| \, \mathrm{d}x
\leq 2f(\pi n) + \frac{2}{\pi} \int_{\pi(n+1)}^{\infty} f(x) \, \mathrm{d}x. $$




Therefore the second assertion follows.
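
To see Proposition 1 in action on the original integrand $f(x)=\frac{1}{x\ln x}$, one can tabulate the $a_k$ and watch the alternating partial sums converge (a rough numerical sketch of mine; it starts the integral at $\pi$ rather than $2$, which only shifts the value by a constant):

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: 1.0 / (x * np.log(x))

    # a_k = ∫_0^π f(x + πk) sin(x) dx, so ∫_{πk}^{π(k+1)} f(x) sin(x) dx = (-1)^k a_k
    a = [quad(lambda x, k=k: f(x + np.pi * k) * np.sin(x), 0, np.pi)[0]
         for k in range(1, 300)]
    partials = np.cumsum([(-1) ** k * ak for k, ak in enumerate(a, start=1)])
    print(partials[-2], partials[-1])   # consecutive partial sums bracket the limit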


real analysis - How do I prove that $f(x)f(y)=f(x+y)$ implies that $f(x)=e^{cx}$, assuming f is continuous and not zero?



This is part of a homework assignment for a real analysis course taught out of "Baby Rudin." Just looking for a push in the right direction, not a full-blown solution. We are to suppose that $f(x)f(y)=f(x+y)$ for all real x and y, and that f is continuous and not zero. The first part of this question let me assume differentiability as well, and I was able to compose it with the natural log and take the derivative to prove that $f(x)=e^{cx}$ where c is a real constant. I'm having a little more trouble only assuming continuity; I'm currently trying to prove that f is differentiable at zero, and hence all real numbers. Is this an approach worth taking?


Answer




First note that $f(x) > 0$ for all $x \in \mathbb{R}$: from
$$f(x) = f\left(\dfrac{x}2 + \dfrac{x}2\right) = f \left(\dfrac{x}2\right)^2$$ we get $f \geq 0$, and if $f(x_0) = 0$ for some $x_0$, then $f(x) = f(x_0)f(x - x_0) = 0$ for every $x$, contradicting the assumption that $f$ is not zero.



One way to go about is as follows.



$1$. Prove that $f(m) = f(1)^m$ for $m \in \mathbb{Z}^+$.



$2$. Now prove that $f(m) = f(1)^m$ for $m \in \mathbb{Z}$.



$3$. Now prove that $f(p/q) = f(1)^{p/q}$ for $p \in \mathbb{Z}$ and $q \in \mathbb{Z} \backslash \{0\}$ (see the sketch below the list).




$4$. Now make use of the fact that rationals are dense in $\mathbb{R}$ and hence you can find a sequence of rationals $r_n \in \mathbb{Q}$ such that $r_n \to r \in \mathbb{R} \backslash \mathbb{Q}$. Now use continuity to conclude that $f(x) = f(1)^x$ for all $x \in \mathbb{R}$. You will see that you need only continuity at one point to conclude that $f(x) = f(1)^x$.
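
For instance, the key computation in step $3$ (sketched here for positive $p$ and $q$; negative values follow via step $2$) uses the positivity of $f$ to take $q$-th roots:

$$f(1)=f\Bigl(\underbrace{\tfrac1q+\cdots+\tfrac1q}_{q\text{ times}}\Bigr)=f\bigl(\tfrac1q\bigr)^q
\;\Longrightarrow\; f\bigl(\tfrac1q\bigr)=f(1)^{1/q},
\qquad f\bigl(\tfrac pq\bigr)=f\bigl(\tfrac1q\bigr)^p=f(1)^{p/q}.$$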


Saturday, February 9, 2019

proof writing - How do we know all numbers alternate between odd and even numbers?



Basically, I tried to prove to a friend of mine that multiplying an odd and an even number together gives you an even number. This is what I said.



Let's first take a random number $k$. It doesn't matter if it's odd or even. If we then multiply this by $2$, we get an even number (from the definition of an even number, which is basically a number that can be divided by $2$). Now we know that the number line goes odd, even, odd, even, ..., and so if we add $1$ to this, then we get an odd number. We now have an odd number and an even number, so let's multiply them.



$$2k \cdot (2k + 1) = 4k^2 + 2k = 2(2k^2 + k)$$



We now have a random number $(2k^2 + k)$ multiplied by $2$ and so this is an even number (by definition of an even number).




My friend then said that we make the assumption that $2k \pm 1$ is odd because the number line alternates between odd and even numbers; how do we know that this is true?



I was thinking how to prove this. Would it be by some form of induction?



EDIT: I think I need to change "number" to "integer" in this post don't I?


Answer



I wouldn't appeal to the fact that the integers alternate between even and odd.



I would say this:




An even number is defined as any integer of the form $2k$ for $k$ any integer (maybe excluding $k=0$).



An odd number is defined as any integer of the form $2k +1$ for any integer $k$.



You can just take this as the definition. And then (as you have) the proof that even times odd is even is just
$$
2k(2k'+1) = 2[k(2k'+1)]
$$
where of course $k(2k'+1)$ is just an integer.



real analysis - Prove that a function whose derivative is bounded is uniformly continuous.



Suppose that $f$ is a real-valued function on $\Bbb R$ whose derivative exists at each point and is bounded. Prove that $f$ is uniformly continuous.


Answer



Since $f'$ is bounded, there is $M>0$ such that
$$|f'(x)|\leq M\quad \forall x\in\mathbb{R},$$
hence by the mean value theorem we find
$$|f(x)-f(y)|\leq M|x-y|\quad \forall x,y\in\mathbb{R},$$
so $f$ is a Lipschitz function on $\mathbb{R}$ and therefore uniformly continuous on $\mathbb{R}$: given $\varepsilon>0$, the choice $\delta=\varepsilon/M$ works for every pair $x,y$.


How to work out this easy fraction?



I need help working out this fraction. I know it seems quite easy, but I'm a bit stuck.



The question is:



$$\frac{\frac {5}{2}}{\frac{5}{9}}$$




My attempt was rewriting both fractions over the common denominator $18$, scaling the numerators the same way:



$$\frac{\frac {45}{18}}{\frac{10}{18}}$$



The thing is, I'm not sure how to simplify it, as no number divides all of them. Thank you; I know this is quite easy.



Question: Why is it considered a division if the easiest solution is by multiplication?


Answer



Remember that dividing by a fraction is the same as multiplying by its inverse.




In this case



$$
\frac52 \Big/ \frac59 = \frac52 \times \frac95 = \frac92
$$
because the $5$ cancels.


Friday, February 8, 2019

linear algebra - How are these cross-product summations equivalent?




Trying to determine how the $X_{i+1}$ term disappears when the summation bounds change:



$$\sum_{i=0}^{n-1} (X_{i} + X_{i+1})(Y_{i+1} - Y_{i}) = \sum_{i=1}^{n} X_{i}(Y_{i+1} - Y_{i - 1})$$



Can somebody explain algebraically how this is derived from the bounds?



The source is found here; scroll down to "Polygons, 2D Polygons".


Answer



There was a previously deleted answer which was almost correct, which I have fixed and reproduced below:




\begin{eqnarray}
&&\sum_{i=0}^{n-1} (X_{i} + X_{i+1})(Y_{i+1} - Y_{i})\\
&=&\sum_{i=0}^{n-1}X_{i}(Y_{i+1} - Y_{i})+\sum_{i=0}^{n-1} X_{i+1}(Y_{i+1} - Y_{i})\\
&=&\sum_{i=0}^{n-1}X_{i}(Y_{i+1} - Y_{i})+\sum_{i=1}^{n} X_{i}(Y_{i} - Y_{i-1})\\
&=&\sum_{i=1}^{n}X_{i}(Y_{i+1} - Y_{i})+\sum_{i=1}^{n} X_{i}(Y_{i} - Y_{i-1})\\
&=&\sum_{i=1}^{n}X_{i}(Y_{i+1} - Y_{i-1})
\end{eqnarray}
The third equality follows since $X_0(Y_1-Y_0)=X_n(Y_{n+1}-Y_n)$ (as the indices are modulo $n$), so removing the $i=0$ term and adding in the $i=n$ term does not change the sum.
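
A quick numerical confirmation on a random closed polygon (my own sketch; the helper lambdas implement the mod-$n$ indexing used above):

    import random

    n = 7
    X = [random.random() for _ in range(n)]
    Y = [random.random() for _ in range(n)]
    x = lambda i: X[i % n]   # indices taken modulo n
    y = lambda i: Y[i % n]

    lhs = sum((x(i) + x(i + 1)) * (y(i + 1) - y(i)) for i in range(n))
    rhs = sum(x(i) * (y(i + 1) - y(i - 1)) for i in range(1, n + 1))
    assert abs(lhs - rhs) < 1e-12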


decimal expansion - Is $0.9999...$ an integer?





Just out of curiosity: since
$$\sum_{i>0}\frac{9}{10^i}=0.999\ldots=1,$$
does that mean that $0.999\ldots$ is an integer, by applying the transitive property?



Ty.


Answer




$0.999999\ldots$ is indeed $1$, which indeed is a natural number and therefore an integer.


Thursday, February 7, 2019

set theory - About a paper of Zermelo


This about the famous article


Zermelo, E., Beweis, daß jede Menge wohlgeordnet werden kann, Math. Ann. 59 (4), 514–516 (1904),



available here. Edit: Springer link to the original (OCR'ed, may be behind a paywall)


An English translation can be found in the book Collected Works/Gesammelte Werke by Ernst Zermelo. Alternative source: the book From Frege to Gödel: a source book in mathematical logic, 1879-1931, by Jean Van Heijenoort.


[See also this interesting text by Dan Grayson.]


I don't understand the paragraph on the last page whose English translation is



Accordingly, to every covering $\gamma$ there corresponds a definite well-ordering of the set $M$, even if the well-orderings that correspond to two distinct coverings are not always themselves distinct. There must at any rate exist at least one such well-ordering, and every set for which the totality of subsets, and so on, is meaningful may be regarded as well-ordered and its cardinality as an "aleph". It therefore follows that, for every transfinite cardinality, $$\mathfrak m=2\mathfrak m=\aleph_0\,\mathfrak m=\mathfrak m^2,\mbox{and so forth;}$$ and any two sets are "comparable"; that is, one of them can always be mapped one-to-one onto the other or one of its parts.



It seems to me Zermelo says that the fact that any set can be well-ordered immediately implies that any infinite cardinal equals its square. Is this interpretation correct? If it is, what is the argument?


Side question: What is, in a nutshell, the history of the statement that any infinite cardinal equals its square? Where was it stated for the first time? Where was it proved for the first time? (In a similar vein: where was the comparability of any two cardinal numbers proved for the first time?)



Edit: German original of the passage in question:




Somit entspricht jeder Belegung $\gamma$ eine ganz bestimmte Wohlordnung der Menge $M$, wenn auch nicht zwei verschiedenen Belegungen immer verschiedene. Jedenfalls muß es mindestens eine solche Wohlordnung geben, und jede Menge, für welche die Gesamtheit der Teilmengen usw. einen Sinn hat, darf als eine wohlgeordnete, ihre Mächtigkeit als ein „Alef“ betrachtet werden. So folgt also für jede transfinite Mächtigkeit $$\mathfrak m=2\mathfrak m=\aleph_0\,\mathfrak m=\mathfrak m^2\text{ usw.,}$$ und je zwei Mengen sind miteinander „vergleichbar“, d. h. es ist immer die eine ein-eindeutig abbildbar auf die andere oder einen ihrer Teile.



Answer



The following is a theorem of $ZF$: 


The axiom of choice holds if and only if for every infinite set $A$, there exists a bijection of $A$ with $A\times A$. (i.e. $|A|=|A|^2$) 



Let us first go over Zermelo's direction of the theorem: if the axiom of choice holds, then $\kappa=\kappa^2$ for every infinite cardinal $\kappa$.


This is fairly simple, by the canonical well ordering of pairs.


Consider $\alpha\times\beta$. This can be well ordered as in ordinal multiplication (that is, $\beta$ copies of $\alpha$, i.e. the lexicographic ordering), or it can be ordered as follows:


$$(x,y)<(w,z)\iff\begin{cases} \max\{x,y\}<\max\{w,z\}, & \text{or}\\ \max\{x,y\}=\max\{w,z\}\ \text{and}\ x<w, & \text{or}\\ \max\{x,y\}=\max\{w,z\},\ x=w,\ \text{and}\ y<z.\end{cases}$$

This is a well-ordering (can you see why?). Now we will prove that $\kappa\times\kappa$ has the same order type as $\kappa$; this proves that the two sets have the same cardinality, since order-isomorphic sets are in bijection.
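
To illustrate (my own example): on $\omega\times\omega$ this ordering begins
$$(0,0)\;<\;(0,1)<(1,0)<(1,1)\;<\;(0,2)<(1,2)<(2,0)<(2,1)<(2,2)\;<\;(0,3)<\cdots,$$
grouping the pairs by their maximum. Every pair therefore has only finitely many predecessors, which is exactly why the order type of $\omega\times\omega$ comes out to $\omega$ below.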


Firstly, the order type of $\kappa\times\kappa$ is at least $\kappa$, since $\alpha\mapsto (\alpha,\alpha)$ embeds $\kappa$ into $\kappa\times\kappa$ in an order-preserving way. For the other direction, we prove by induction on $\alpha$ that for the initial ordinal $\omega_\alpha$ the order type of $\omega_\alpha\times\omega_\alpha$ is $\omega_\alpha$.


Fact: If $\delta<\omega_\alpha$ (where $\omega_\alpha$ is the $\alpha$-th initial ordinal) then $|\delta|<\aleph_\alpha$.


The claim is true for $\omega_0=\omega$, since for any $k$ the set $\{(n,m)\mid (n,m)<(k,k)\}$ is finite. Therefore every element of $\omega\times\omega$ has only finitely many predecessors in this ordering, so the order type is $\omega$.


Now assume (by contradiction) $\alpha$ was the least ordinal such that $\omega_\alpha$ was a counterexample to this claim, i.e. $\omega_\alpha$ is strictly less than the order type of $\omega_\alpha\times\omega_\alpha$.


Let $(\gamma,\beta)\in\omega_\alpha\times\omega_\alpha$ be the pair of ordinals such that the order type of $\{(\xi,\zeta)\mid (\xi,\zeta)<(\gamma,\beta)\}$ is $\omega_\alpha$.


Take $\delta$ such that $\omega_\alpha>\delta>\max\{\gamma,\beta\}$; then $(\gamma,\beta)<(\delta,\delta)$, and in particular $\{(\xi,\zeta)\mid (\xi,\zeta)<(\delta,\delta)\}$ has cardinality at least $\aleph_\alpha$, as it extends a well order of type $\omega_\alpha$.


However, $\delta<\omega_\alpha$, so by the fact above $\delta$ is of smaller cardinality, and thus that set has cardinality $|\delta|\cdot|\delta|=|\delta|<\aleph_\alpha$ by our induction assumption. Hence, a contradiction.



The other direction, also known as Tarski's theorem (I managed to find that it was published around 1923, but I could not find a proper reference), is as follows:


Suppose that for every infinite $A$ there exists a bijection of $A$ with $A\times A$; then the axiom of choice holds.



The proof (which I will not bring here, as it would require a few more notations and definitions - I did give it here) uses the concept of Hartogs number (the least ordinal which cannot be injected into $A$). The proof in its essence is:


If $\aleph(A)$ is the Hartogs number of $A$, $$A+\aleph(A)=(A+\aleph(A))^2=A^2+2\cdot A\cdot\aleph(A)+\aleph(A)^2\ge A\cdot\aleph(A)\ge A+\aleph(A)$$


We then use (or prove) a theorem that if $A+\aleph(A)=A\cdot\aleph(A)$ then $A$ can be well ordered.


Historically, Tarski tried to publish this theorem, but it was rejected at first. The Polish-American mathematician Jan Mycielski relates this in his article A System of Axioms of Set Theory for the Rationalists, Notices AMS, February 2006, p. 209:



Tarski told me the following story. He tried to publish his theorem (stated above) in the Comptes Rendus Acad. Sci. Paris but Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest. And Tarski said that after this misadventure he never tried to publish in the Comptes Rendus.



Found via Wikipedia article on the axiom of choice.


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...