Friday, May 31, 2019

radicals - What rational numbers have rational square roots?

Every rational number can be written in the fraction form $$\frac a b,$$ where $a$ and $b$ are integers and $b\neq0$.



My question is: for what $a$ and $b$ does the fraction have a rational square root? The simple answer would be: when both are perfect squares. But if the numerator and denominator of such a fraction are both multiplied by a common integer $n$, the result need not be a pair of perfect squares, for example: $$\frac49 \to \frac 8 {18}$$



And intuitively, even without factoring, $a=8$ and $b=18$ must satisfy some criterion that guarantees a rational square root.




Once this is solved, can this be extended to any degree of roots? Like for what $a$ and $b$ does the fraction have rational $n$th root?

trigonometry - How to prove theorem using Euler's formula?

I'm having a great deal of trouble with this proof.



"Prove $\cos θ + \cos 3θ + \cos 5θ + \cdots + \cos [(2n-1)θ] = \dfrac{\sin 2nθ}{2 \sin θ}$.
Prove $\sin θ + \sin 3θ + \sin 5θ + \cdots + \sin [(2n-1)θ] = \dfrac{(\sin nθ)^2}{\sin θ}$."



Relevant Equations




Euler's formula is $e^{iθ} = \cos θ + i \sin θ$.
The geometric progression formula is $S_n = a\left(\dfrac{1-r^n}{1-r}\right)$, where $a$ is the first term and $r$ is the constant that each term is multiplied by to get the next term.



My Attempts to Complete the Proof



Obviously, I need to use the geometric progression formula to prove this. With Euler's formula, the initial term $a$ is $e^{iθ}$. The $r$ term is $e^{2iθ}$. I believe these two values for $a$ and $r$ are correct, as the first term will simply be $\cos θ + i \sin θ$, the second will be $e^{iθ + 2iθ} = \cos 3θ + i \sin 3θ$, and so on.



When I plug this into the formula, I get $S_{2n-1} = e^{iθ}\left(\dfrac{1-e^{2iθ (2n-1)}}{1-e^{2iθ}}\right)$.



This is fine, but I can't for the life of me simplify this to anything meaningful. Did I make a mistake somewhere? Are these the correct values for $a$ and $r$? Is there some special trick that I'm missing?




ANY help would be appreciated. I've been stuck on this for days.



Thanks in advance,
Leo



EDIT: how do I get the last step? I end up with $\dfrac{1}{2} \dfrac{\sin (4n-2)θ}{2 \sin θ}$. It seems that the $2n-1$ is causing problems. If it was just $n$, then this would all work out perfectly. How can I fix this?
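A quick numeric check of both target identities (a sketch, not from the original post; `theta` and `n` are arbitrary test values). Note that each sum has $n$ terms, so the progression formula should be applied with $n$ terms, giving $S_n = e^{i\theta}\left(\dfrac{1-e^{2in\theta}}{1-e^{2i\theta}}\right)$ — this is precisely the issue raised in the edit.

```python
# Numeric check: real part gives the cosine sum, imaginary part the sine sum.
import cmath, math

theta, n = 0.7, 5
S = sum(cmath.exp(1j * (2 * k - 1) * theta) for k in range(1, n + 1))
print(S.real, math.sin(2 * n * theta) / (2 * math.sin(theta)))  # cosine identity
print(S.imag, math.sin(n * theta) ** 2 / math.sin(theta))       # sine identity
```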

real analysis - A discontinuous function such that $f(x + y) = f(x) + f(y)$




Is it possible to construct a function $f \colon \mathbb{R} \to \mathbb{R}$ such that $$f(x + y) = f(x) + f(y)$$ and $f$ is not continuous?


Answer



"Fix a basis for $\mathbb R$ as a $\mathbb Q$-vector space" (which exists under the axiom of choice, but under weaker axioms I have no idea what happens). The condition $f(x+y) = f(x) + f(y)$ is equivalent to $f$ being $\mathbb Q$-linear, so you're asking if there exists a non-trivial discontinuous map of $\mathbb Q$-vector spaces between $\mathbb R$ and itself. If you map the basis elements to other basis elements in a discontinuous way, you will obtain such a map.



Added : A quick way to see that "a discontinuous way of doing it" exists, is that the set of $\mathbb Q$-linear maps that are also $\mathbb R$-linear has cardinality of the reals, where as the set of all $\mathbb Q$-linear maps (or in other words, the number of maps between the basis elements) has cardinality $|\mathbb R|^{|\mathbb R|}$. To understand why, well of course the basis for $\mathbb R$ as a $\mathbb Q$-vector space has cardinality $\le |\mathbb R|$, but if it was countable, then $\mathbb R$ would be countable because all of its elements would be a linear combination of elements in a countable set with countably many possible coefficients. Since the basis for $\mathbb R$ as a $\mathbb Q$-vector space is uncountable, the set of all maps from a basis to another also is. Therefore there must be at least one map between the two bases which generates a discontinuous linear map.



Hope that helps,



Thursday, May 30, 2019

number theory - Evaluating Prime counting function using Riemann's R-function and zeros of Zeta function

Prime counting function can be expressed as follows:
$\pi(x) = \operatorname{R}(x) - \sum_{\rho}\operatorname{R}(x^{\rho}) \tag{1}$
where $\operatorname{R}(x)$ is Riemann's R-function and the $\rho$ range over the zeros of the Riemann zeta function. I am able to evaluate $\operatorname{R}(x)$ for any $x$. I can also evaluate the sum over the trivial zeros of the zeta function, which converges rapidly. The problem I have is evaluating the sum over the non-trivial zeros. It seems to diverge. I have always read that including more non-trivial zeros should give a more accurate approximation of $\pi(x)$. But for me the best approximation uses no zeros at all: $\pi(x) \sim \operatorname{R}(x)$.




When I use no zeros $\pi(x) \sim \operatorname{R}(x)$ I get blue line (see image).



When I use first pair of nontrivial zeros $\pi(x) \sim \operatorname{R}(x)-(\operatorname{R}(x^{\rho_1})+\operatorname{R}(x^{\rho_{-1}}))$ I get yellow line.



When I use first two pairs of nontrivial zeros $\pi(x) \sim \operatorname{R}(x)-(\operatorname{R}(x^{\rho_1})+\operatorname{R}(x^{\rho_{-1}})+\operatorname{R}(x^{\rho_2})+\operatorname{R}(x^{\rho_{-2}}))$ I get green line (same image).
(Red line is $\pi(x)$)



With more zeros it gets worse and worse, but the opposite should be true. What am I doing wrong?
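For reference, here is a minimal sketch (assuming the mpmath library) of how the evaluation can be organized: $\operatorname{R}$ is computed from its Gram series, which also accepts the complex arguments $x^{\rho}$, and each non-trivial zero is paired with its conjugate so the imaginary parts cancel.

```python
# Gram series: R(x) = 1 + sum_{k>=1} (ln x)^k / (k * k! * zeta(k+1)).
# For the complex arguments x^rho the terms first grow to roughly exp(|rho| ln x)
# before the factorial wins, so high working precision is essential --
# in ordinary double precision the sum merely looks divergent.
from mpmath import mp, log, zeta, factorial, zetazero, power

mp.dps = 100

def riemann_r(x, terms=600):
    lx = log(x)
    return 1 + sum(lx**k / (k * factorial(k) * zeta(k + 1))
                   for k in range(1, terms + 1))

x = 1000
approx = riemann_r(x)
for j in range(1, 4):                            # first three pairs of zeros
    rho = zetazero(j)                            # 1/2 + i*t_j
    approx -= 2 * riemann_r(power(x, rho)).real  # rho paired with conj(rho)
print(approx)                                    # compare with pi(1000) = 168
```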

algebra precalculus - Why does $1+2+3+\cdots+p = \frac{1}{2}\,p\,(p+1)$

I saw this from Project Euler, problem #1:



If we now also note that $ 1+2+3+\cdots+p = {(1/2)} \cdot p\cdot(p+1) $



What is the intuitive explanation for this? How would I go about deriving the latter from the former? It involves a summation, which I am not sure how to deal with, so unfortunately I am not even sure how I would begin to show these are equivalent.

fourier transform - Solution to this integral?



could anyone solve this integral ?



$$\int_0^\infty \frac{e^{-x}\sin(x)\cos(ax)}x~\mathrm dx$$




Well, I have tried opening up the $\sin\cdot\cos$ product using trigonometric identities, but that didn't help much.


Answer



By the sine addition formulas, it is enough to compute
$$ f(m)=\int_{0}^{+\infty}\frac{\sin(mx)}{x}e^{-x}\,dx \tag{1}$$
where $f(0)=0$ and by the dominated convergence theorem
$$ f'(m) = \int_{0}^{+\infty}\cos(mx)e^{-x}\,dx\stackrel{IBP}{=}\frac{1}{1+m^2}\tag{2}$$
implying:
$$ f(m)=\int_{0}^{+\infty}\frac{\sin(mx)}{x}e^{-x}\,dx = \arctan(m)\tag{3} $$
and

$$\begin{eqnarray*} \int_{0}^{+\infty}\frac{\sin(x)\cos(ax)}{x}e^{-x}\,dx &=& \frac{\arctan(1-a)+\arctan(1+a)}{2}\\&=&\color{red}{\frac{1}{2}\,\arctan\frac{2}{a^2}}.\tag{4} \end{eqnarray*}$$
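For what it's worth, a quick numerical sanity check of $(4)$ (assuming SciPy; $a=0.7$ is an arbitrary test value):

```python
import numpy as np
from scipy.integrate import quad

a = 0.7
val, err = quad(lambda x: np.exp(-x) * np.sin(x) * np.cos(a * x) / x, 0, np.inf)
print(val, 0.5 * np.arctan(2 / a**2))  # both ~ 0.665
```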


Wednesday, May 29, 2019

real analysis - A nice Combinatorial Identity

I am trying to show that $\forall N\in\mathbb{N}$,




$$\sum\limits_{n=0}^{N}\sum\limits_{k=0}^{N}\frac{\left(-1\right)^{n+k}}{n+k+1}{N\choose n}{N\choose k}{N+n\choose n}{N+k\choose k}=\frac{1}{2N+1}$$



It's backed by numerical verifications, but I can't come up with a proof.



So far, I tried using the generating function of $\left(\frac{1}{2N+1}\right)_{N\in\mathbb{N}}$, which is $\frac{\arctan\left(\sqrt{x}\right)}{\sqrt{x}}$, by showing that the LHS has the same generating function, but this calculation doesn't seem to lead me anywhere...



Any suggestion ?



Edit: the comment of bof (below this question) actually leads to a very simple proof.




Indeed, from bof's comment we have that the LHS is equal to $$\int_{0}^{1}\left(\sum\limits_{k=0}^{N}(-1)^k{N\choose k}{N+k\choose k}x^k\right)^2dx$$



And we recognize here the shifted Legendre Polynomials $\widetilde{P_N}(x)=\displaystyle\sum\limits_{k=0}^{N}(-1)^k{N\choose k}{N+k\choose k}x^k$.



And we know that the shifted Legendre Polynomials form a family of orthogonal polynomials with respect to the inner product $\langle f|g\rangle=\displaystyle\int_{0}^{1}f(x)g(x)dx$, and that their squared norm with respect to this product is $\langle\widetilde{P_n}|\widetilde{P_n}\rangle=\frac{1}{2n+1}$;



so this basically provides the desired result immediately.
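As a complement to the numerical verifications mentioned above, here is a minimal exact-arithmetic check (assuming SymPy) for small $N$:

```python
from sympy import Rational, binomial

def lhs(N):
    # the double sum on the left-hand side, in exact rationals
    return sum(Rational((-1)**(n + k), n + k + 1)
               * binomial(N, n) * binomial(N, k)
               * binomial(N + n, n) * binomial(N + k, k)
               for n in range(N + 1) for k in range(N + 1))

assert all(lhs(N) == Rational(1, 2 * N + 1) for N in range(1, 6))
print("identity verified for N = 1..5")
```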

real analysis - Partial sums of exponential series



What is known about $f(k)=\sum_{n=0}^{k-1} \frac{k^n}{n!}$ for large $k$?




Obviously it is is a partial sum of the series for $e^k$ -- but this partial sum doesn't reach close to $e^k$ itself because we're cutting off the series right at the largest terms. In the full series, the $(k+i-1)$th term is always at least as large as the $(k-i)$th term for $1\le i\le k$, so $f(k)< e^k/2$. Can we estimate more precisely how much smaller than $e^k$ the function is?



It would look very nice and pleasing if, say, $f(k)\sim e^{k-1}$ for large $k$, but I have no real evidence for that hypothesis.



(Inspired by this question and my answer thereto).


Answer



This appears as problem #96 in Donald J Newman's excellent book: A Problem Seminar.



The problem statement there is:





Show that



$$ 1 + \frac{n}{1!} + \frac{n^2}{2!} + \dots + \frac{n^n}{n!} \sim
\frac{e^n}{2}$$




where $a_n \sim b_n$ means $\lim \frac{a_n}{b_n} = 1$.




Thus we can estimate your sum (I have swapped $n$ and $k$) as



$$ 1 + \frac{n}{1!} + \frac{n^2}{2!} + \dots + \frac{n^{n-1}}{(n-1)!} \sim
\frac{e^n}{2}$$



as by Stirling's formula, $\dfrac{n^n}{n!e^n} \to 0$.



The solution in the book proceeds as follows:



The remainder term for the Taylor series of a function $f$ is




$$ R_n(x) = \int_{0}^{x} \frac{(x-t)^n}{n!} f^{n+1}(t) \ \text{d}t$$



which for our purposes, comes out as



$$\int_{0}^{n} \frac{(n-t)^n}{n!} e^t \ \text{d}t$$



Making the substitution $n-t = x$ gives us the integral



$$ \int_{0}^{n} \frac{x^n}{n!} e^{-x} \ \text{d}x$$




In an earlier problem (#94), he shows that



$$\int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \sim \sqrt{\frac{\pi n}{2}}$$



which using the substitution $n+x = t$ gives



$$ \int_{n}^{\infty} t^n e^{-t} \ \text{d}t \sim \frac{n^n}{e^n} \sqrt{\frac{\pi n}{2}}$$



Using $\int_{0}^{\infty} x^n e^{-x}\ \text{d}x = n!$ and Stirling's formula now gives the result.




To prove that



$$\int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \sim \sqrt{\frac{\pi n}{2}}$$



He first makes the substitution $x = \sqrt{n} t$ to obtain



$$ \int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \ = \sqrt{n} \int_{0}^{\infty} \left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t} \ \text{d}t$$



Now $$\left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t} \le (1+t)e^{-t}$$ and thus by the dominated convergence theorem,




$$ \lim_{n\to \infty} \frac{1}{\sqrt{n}} \int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x $$



$$= \int_{0}^{\infty} \left(\lim_{n \to \infty}\left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t}\right) \ \text{d}t$$



$$ = \int_{0}^{\infty} e^{-t^2/2} \ \text{d}t = \sqrt{\frac{\pi}{2}} $$
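A quick numeric illustration of the result (assuming mpmath), showing the ratio of the partial sum to $e^n$ drifting toward $\tfrac12$:

```python
from mpmath import mp, e, mpf, factorial

mp.dps = 50
for n in (10, 100, 1000):
    s = sum(mpf(n)**k / factorial(k) for k in range(n + 1))
    print(n, s / e**n)  # ratios tend to 1/2
```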


elementary number theory - how can one find the value of the expression, $(1^2+2^2+3^2+\cdots+n^2)$


how can one find the value of the expression, $(1^2+2^2+3^2+\cdots+n^2)$



Let,



$T_{2}(n)=1^2+2^2+3^2+\cdots+n^2$



$T_{2}(n)=(1^2+n^2)+(2^2+(n-1)^2)+\cdots$




$T_{2}(n)=((n+1)^2-2(1)(n))+((n+1)^2-2(2)(n-1))+\cdots$


Answer



Hint: $(n+1)^3-n^3=3n^2+3n+1$; sum this over $n$ and use the telescoping of the left-hand side.
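For concreteness, here is the computation the hint suggests, spelled out (a standard telescoping argument): summing the identity from $n=1$ to $N$, the left-hand side telescopes, so
$$(N+1)^3-1=\sum_{n=1}^{N}\left((n+1)^3-n^3\right)=3\sum_{n=1}^{N}n^2+3\cdot\frac{N(N+1)}{2}+N,$$
and solving for the sum of squares gives
$$\sum_{n=1}^{N}n^2=\frac{2(N+1)^3-2-3N(N+1)-2N}{6}=\frac{N(N+1)(2N+1)}{6}.$$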


limits - Proving that a quasi-polynomial grows faster than any polynomial


I am trying to prove that given a quasi-polynomial $x^{\ln x}$ grows faster (at infinity) than any regular old polynomial of the form $x^a$ where $a \in \mathbb{R}$.


I reduced this problem to showing that $$ \lim_{x \to \infty} \frac{x^a}{x^{\ln x}} = 0. $$


When trying to compute the limit I cannot find a way to continue:


$$ \lim_{x \to \infty} \frac{x^a}{x^{\ln x}} = \lim_{x \to \infty} \Big(\frac{x}{x^{(\ln x) / a}}\Big)^a = \Big[\lim_{x \to \infty} \frac{x}{x^{(\ln x) / a}}\Big]^a. $$


Where can I go from here? I know how to show this intuitively ($(\ln x)/a \to \infty$ as $x \to \infty$) so the degree in the denominator is bigger than the degree in the numerator, but how would I prove it knowing only limit laws?



Answer



Use exponentiation.


$$ \lim_{x \to \infty} \frac{x^a}{x^{\ln x}} = \lim_{x \to \infty} x^{a-\ln x} =\lim_{x \to \infty}e^{(a-\ln x)\ln x} = 0, $$ since $(a-\ln x)\ln x \to -\infty$ as $x \to \infty$.
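A small numeric illustration (arbitrary test values): comparing via logarithms avoids overflow, since $\log\left(x^a/x^{\ln x}\right)=(a-\ln x)\ln x$.

```python
import math

a = 10.0
for x in (1e2, 1e4, 1e8, 1e16):
    exponent = (a - math.log(x)) * math.log(x)   # log of x^a / x^(ln x)
    print(f"x = {x:.0e}, ratio = exp({exponent:.1f})")
```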


trigonometry - Evaluation of $\sum_{k=0}^n \cos k\theta$



I just wanted to evaluate




$$ \sum_{k=0}^n \cos k\theta $$



and I know that it should give



$$ \cos\left(\frac{n\theta}{2}\right)\frac{\sin\left(\frac{(n+1)\theta}{2}\right)}{\sin(\theta / 2)} $$



I tried to start by writing the sum as



$$ 1 + \cos\theta + \cos 2\theta + \cdots + \cos n\theta $$




and expand each cosine by its series representation. But this soon looked not very helpful so I need some clue about how this partial sum is calculated more efficiently ...


Answer



Use:
$$
\sin \frac{\theta}{2} \cdot \cos(k \theta) = \underbrace{\frac{1}{2} \sin\left( \left(k+\frac{1}{2}\right)\theta\right)}_{f_{k+1}} - \underbrace{\frac{1}{2} \sin\left( \left(k-\frac{1}{2}\right)\theta\right)}_{f_{k}}
$$
Thus
$$ \begin{eqnarray}
\sin \frac{\theta}{2} \cdot \sum_{k=1}^n \cos(k \theta) &=& \sum_{k=1}^n \left(f_{k+1} - f_k\right) = f_{n+1}-f_1 \\ &=& \frac{1}{2} \underbrace{\sin\left(\left(n+\frac{1}{2}\right)\theta\right)}_{\sin(\alpha+\beta)}-\frac{1}{2} \underbrace{\sin \frac{\theta}{2}}_{\sin(\alpha-\beta)} = \cos(\alpha) \sin(\beta) \\

&=& \cos\left(\frac{n+1}{2}\theta\right) \sin\left(\frac{n}{2} \theta\right)
\end{eqnarray}
$$
where $\alpha = \frac{n+1}{2} \theta$ and $\beta = \frac{n}{2} \theta$.
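To recover the form quoted in the question, add the $k=0$ term and divide by $\sin\frac{\theta}{2}$; using $\sin\frac{\theta}{2}=\sin(\alpha-\beta)$ once more,
$$
\sum_{k=0}^n \cos(k\theta) = 1+\frac{\cos(\alpha)\sin(\beta)}{\sin\frac{\theta}{2}} = \frac{\sin(\alpha-\beta)+\cos(\alpha)\sin(\beta)}{\sin\frac{\theta}{2}} = \frac{\sin(\alpha)\cos(\beta)}{\sin\frac{\theta}{2}} = \cos\left(\frac{n\theta}{2}\right)\frac{\sin\left(\frac{(n+1)\theta}{2}\right)}{\sin(\theta / 2)}.
$$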


elementary number theory - Solving $ax \equiv c \pmod b$ efficiently when $a,b$ are not coprime




I know how to compute modular multiplicative inverses for co-prime variables $a$ and $b$, but is there an efficient method for computing variable $x$ where $x < b$ and $a$ and $b$ are not co-prime, given variables $a$, $b$ and $c$, as described by the equation below?



$ a x \equiv c \mod b $



For example, given



$ 154x \equiv 14 \mod 182 $, is there an efficient method for computing all the possibilities of $x$, without pure bruteforce?



Please note that I'm not necessarily asking for a direct solution, just a more optimized one.




I do not believe that the Extended Euclidean Algorithm will work here, because $a$ and $b$ are not co-prime.



Edit:
Follow up question, since the first one had a shortcut:



Could this one be computed efficiently as well?



$12260x \equiv 24560 \mod 24755$.



$107$ needs to be one of the computed answers.



Answer



Below we compute $\ x\,\equiv\, \dfrac{24560}{12260}\,\pmod{\!24755}\ $ per your edit, $ $ by the method in my first answer.



${\rm mod}\,\ 24755\!:\,\
\dfrac{0}{24755}\overset{\large\frown}\equiv
\dfrac{24560}{12260}\overset{\large\frown}\equiv
\color{#90f}{\dfrac{390}{235}}\overset{\large\frown}\equiv
\color{#0a0}{\dfrac{4280}{40}}\overset{\large\frown}\equiv
\color{#c00}{\dfrac{-535}{-5}}\overset{\large\frown}\equiv\dfrac{0}0$




$ \begin{array}{rl}
\ \ \ \ {\rm i.e.}\ \ \ \ \bmod 24755\!: \ \ \ \ \ [\![1]\!] &\ 24755\, x\,\equiv\ 0\ \\
[\![2]\!] &\ \color{#c00}{12260\,x\, \equiv\ 24560\equiv -195}\!\!\!\\
[\![1]\!]\:\!-\:\!2\,[\![2]\!] \rightarrow [\![3]\!] &\ \ \ \ \ \color{#90f}{235\,x\, \equiv\ 390}\ \\
[\![2]\!]\!-\!52\,[\![3]\!] \rightarrow [\![4]\!] &\ \ \ \ \ \ \, \color{#0a0}{40\,x\, \equiv\ 4280}\ \\
[\![3]\!]\:\!-\:\!6\,[\![4]\!] \rightarrow [\![5]\!] &\:\! \ \ \ \ \ \color{#c00}{{-}5\,x\, \equiv -535}\ \\
[\![4]\!]\:\!+\:\!8\,[\![5]\!] \rightarrow [\![6]\!] &\:\!\ \ \ \ \ \ \ \ 0\,x\, \equiv\ 0\
\end{array}$



$\begin{align}{\rm Therefore}\ \ \ x\equiv {\color{#c00}{\dfrac{535}5}\!\!\!\pmod{24755}}&\equiv \,107\!\!\pmod{\!4951},\ \ {\rm via\ canceling}\ \ 5\\ &\equiv\, 107+4951k\!\!\pmod{\!24755},\ \ 0\le k\le 4\\[0.5em]

&\equiv \{107,\, 5058,\, 10009,\, 14960,\, 19911\}\!\pmod{\!24755}\end{align} $
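For the general question of efficiency, a compact sketch (Python 3.8+ for the modular inverse via `pow`): cancel $g=\gcd(a,b)$, invert $a/g$ modulo $b/g$, then lift the unique solution mod $b/g$ to the $g$ solutions mod $b$.

```python
from math import gcd

def solve_congruence(a, c, b):
    """All x with a*x = c (mod b), or [] if none exist."""
    g = gcd(a, b)
    if c % g:
        return []                              # solvable iff g divides c
    a2, c2, b2 = a // g, c // g, b // g
    x0 = pow(a2, -1, b2) * c2 % b2             # unique solution mod b/g
    return [x0 + k * b2 for k in range(g)]     # the g solutions mod b

print(solve_congruence(12260, 24560, 24755))
# [107, 5058, 10009, 14960, 19911]
```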


Tuesday, May 28, 2019

real analysis - Show that $\mu\left(\bigcup_N \bigcap_{n=N}^{\infty} A_n \right) \leq \liminf \mu(A_n)$


Let $(X,\mathcal{M},\mu)$ be a measure space. Let $A_1, A_2, \ldots \in \mathcal{M}$.


Then, I want to show that: $$\mu\left(\bigcup_N \bigcap_{n=N}^{\infty} A_n \right) \leq \liminf_{n\to\infty} \mu(A_n)$$


There is a solution in lecture notes:


Let $B_N = \bigcap_{n=N}^{\infty} A_n$. $B_N$ form an increasing sequence of elements, then by continuity from below:


$$\mu\left(\bigcup_N \bigcap_{n=N}^{\infty} A_n \right) = \mu\left(\bigcup_N B_N\right) = \lim_{N \to \infty} \mu(B_N) \leq \lim_{N \to \infty} \inf_{n \geq N} \mu(A_n) = \liminf_{n\to\infty} \mu(A_n)$$



OK. I do not understand how $\mu\left(\bigcup_N B_N\right) = \lim_{N \to \infty} \mu(B_N)$, nor the inequality and equality that follow it. Can somebody give a detailed explanation? Thank you very much.


Answer



As for the equality $$ \mu\left(\bigcup\limits_{N=1}^\infty B_N\right)=\lim\limits_{N\to\infty}\mu(B_N) $$ the proof is the following. Since $\{B_N:N\in\mathbb{N}\}$ is an increasing sequence of sets we have $$ \bigcup\limits_{N=1}^\infty B_N=\coprod\limits_{N=1}^\infty C_N $$ where $C_1=B_1$ and $C_N=B_{N}\setminus B_{N-1}$ for $N>1$. Moreover for $N>1$ $$ \mu(C_N)=\mu(B_{N})-\mu(B_{N-1}) $$ hence from $\sigma$-additivity of measure we have $$ \begin{align} \mu\left(\bigcup\limits_{N=1}^\infty B_N\right) =\mu\left(\coprod\limits_{N=1}^\infty C_N\right) &=\mu(C_1)+\sum\limits_{N=2}^\infty\mu(C_N)\\ &=\mu(B_1)+\lim\limits_{M\to\infty}\sum\limits_{N=2}^M\mu(C_N)\\ &=\mu(B_1)+\lim\limits_{M\to\infty}\sum\limits_{N=2}^M(\mu(B_{N})-\mu(B_{N-1}))\\ &=\mu(B_1)+\lim\limits_{M\to\infty}(\mu(B_{M})-\mu(B_1))\\ &=\lim\limits_{M\to\infty}\mu(B_M) \end{align} $$ As for the inequality $\mu(B_N)\leq\inf\limits_{n\geq N}\mu(A_n)$ you need to note that $$ B_N=\bigcap\limits_{k=N}^\infty A_k\subset A_n $$ for all $n\geq N$, hence for the same $n$ we have $\mu(B_N)\leq \mu(A_n)$. Therefore $$ \mu(B_N)\leq\inf\limits_{n\geq N}\mu(A_n) $$


sequences and series - Intuition behind $\zeta(-1) = \frac{-1}{12}$

When I first watched numberphile's 1+2+3+... = $\frac{-1}{12}$ I thought the sum actually equalled $\frac{-1}{12}$ without really understanding it.



Recently I read some Wolfram Alpha pages and watched some videos, and now I understand (I think) that $\frac{-1}{12}$ is just a value associated to the sum of all natural numbers when you analytically continue the Riemann zeta function. 3Blue1Brown's video really helped. What I don't really understand is why it gives the value $\frac{-1}{12}$ specifically. The value $\frac{-1}{12}$ seems arbitrary to me and I don't see any connection to the sum of all natural numbers. Is there any intuition behind why you get $\frac{-1}{12}$ when analytically continuing the zeta function at $\zeta(-1)$?




EDIT(just to make my question a little clearer):
I'll use an example here. Suppose you somehow didn't know about radians and never associated trig functions like sine to $\pi$ but you knew about maclaurin expansion. By plugging in x=$\pi$ to the series expansion of sine, you would get sine($\pi$) = 0. You might have understood the process in which you get the value 0, the maclaurin expansion, but you wouldn't really know the intuition behind this connection between $\pi$ and trig functions, namely the unit circle, which is essential in almost every branch of number theory.



Back to this question, I understand the analytic continuation of the zeta function and its continued form for $s < 0$ $$\zeta(s)=2^s\pi^{s-1}\sin\frac{\pi s}2\Gamma(1-s)\zeta(1-s)$$ and how when you plug in s = -1, things simplify down to $\frac{-1}{12}$ but I don't see any connection between the fraction and the infinite sum. I'm sure there is a beautiful connection between them, like the one between trig functions and $\pi$, but couldn't find any useful resources on the internet. Hope this clarified things.

linear algebra - connection between determinants




Suppose A is an nxn matrix with real entries and R is its row reduced echelon form. Using information on elementary matrices, explain the connection between det(A) and det(R). Note: you may use the fact that if M,N are two square matrices of the same size then det(MN)= det(M)det(N).



The only thing that comes to my mind is that $A\cdot R=A^{-1}$, but that doesn't have anything to do with the determinant. Or that the sum of the diagonal within the row reduced form is the determinant of $A$, and if any elementary operation happens within $A$ it is also done in $R$, which would change the sum of the diagonal. Can someone point me in the right direction?


Answer



Just use the fact that when you row reduce a matrix $A$ you can write $R = E_k E_{k-1} \cdots E_{2}E_{1}A$ where the $E_i$ are the elementary matrices. Then you have $$Det(R) = Det(E_k) \cdots Det(E_1) \cdot Det(A)$$ Recall the determinants of the elementary matrices: a row swap has determinant $-1$, scaling a row by $c\neq 0$ has determinant $c$, and adding a multiple of one row to another has determinant $1$. In particular every $Det(E_i)\neq 0$, so $Det(A)$ and $Det(R)$ differ by a non-zero factor.


Monday, May 27, 2019

Functional equation : $f(xf(y))=yf(x)$

Find all function $f:\mathbb{R}^+\rightarrow\mathbb{R}^+$ satisfying


(i) $f(xf(y))=yf(x)$ for all positive real numbers, $x, y$


(ii) $ f(x) $ is bounded function for domain interval $(1,\infty)$


Please help me check my work below. Thank you.


Let $P(x,y)$ denote $f(xf(y))=yf(x),\;\;\forall x, y \in \mathbb{R^+}$.


Let $y_1, y_2 \in \mathbb{R}^+$ such that $f(y_1)=f(y_2)$.



Consider $P(x,y_1)$ and $P(x,y_2)$ : we have


$f(xf(y_1))=f(xf(y_2))= y_1f(x) = y_2f(x)$ then $y_1=y_2$ so $f$ is injective.


Consider $P(1,1)$ : $f(f(1))=f(1) \rightarrow f(1)=1$


substitute $x=1$ in (i), we have $f(f(y))=y$.


Consider $P(x, x)$ : $f(xf(x))=xf(x)$


Let $z=xf(x)$; then $f(z)=z$, i.e. some $z \in \mathbb{R}^+$ is a fixed point of $f$.


If $z>1$ then $P(x,f(y)) : f(x)f(y)=f(xy)$ so $f(z)f\left(\frac{1}{z}\right)=f(1)=1$ so $f\left(\frac{1}{z}\right) = \frac{1}{z}$


then $f(z^2)=f(z)f(z)=z^2$, $f(z^4)=f(z^2)f(z^2)=z^4$, so $f(z^{2^n})= z^{2^n}, \;\forall n \in \mathbb{N}$, contradicting (ii).


If $z<1$ then $\frac{1}{z}>1$. Since $f\left(\frac{1}{z}\right) = \frac{1}{z}$ so $f\left(\frac{1}{z^2}\right) = f\left(\frac{1}{z}\right)f\left(\frac{1}{z}\right)=\frac{1}{z^2}$


then $f\left(\frac{1}{z^{2^n}}\right)=\frac{1}{z^{2^n}}$, again contradicting (ii).



Thus the only fixed point is $z=xf(x)=1$, i.e. $f(x)=\frac{1}{x},\;\;\forall x \in \mathbb{R^+}$.


Answer check :


(i) $f(xf(y))=\frac{y}{x}=yf(x) $


(ii) $f(x)=\frac{1}{x} < f(1)=1 ,\;\;\forall x \in (1,\infty) \;\;\blacksquare$

Sunday, May 26, 2019

improper integrals - Evaluate $\int\limits_0^{\infty}\frac{\sin x}{x}\, dx$

How can I evaluate the following improper integral:



$$ \int\limits_0^{\infty}\frac{\sin x}{x}\, dx$$



I have tried to evaluate this by integration by parts but failed. Please help

Saturday, May 25, 2019

calculus - Proving $n^\epsilon > \log(n)$ for sufficiently large $n$



I'd like to show that for all $\epsilon > 0$, there exists an $N$ such that for all $n \geq N$ the following holds: $n^\epsilon > \log(n)$. I'm having trouble justifying this.



My intuition says this should hold. Writing $n = 2^k$ gives an inequality of the form $(2^\epsilon)^k > k$. This should hold for sufficiently large $k$ (and therefore $n$) because $2^\epsilon > 1$, so the function $(2^\epsilon)^k$ grows much faster than $k$. (This is also the approach described in this question.)



However, I was wondering if there is a more "rigorous" way to justify this inequality.



Answer



Provided you have proved L'Hôpital's rule (as you should have in a real analysis course), this is pretty straightforward.



Fix $\epsilon>0$, we have
$$
\lim_{n\rightarrow \infty}\frac{\log n}{n^\epsilon}\stackrel{\text{L'Hôpital's rule}}{=} \lim_{n\rightarrow \infty}\frac{\frac{1}{n}}{\epsilon n^{\epsilon-1}}=\lim_{n\rightarrow \infty}\frac{1}{\epsilon n^{\epsilon}}=0<1
$$
proving that for sufficiently large $n$ we have $\frac{\log n}{n^\epsilon}<1$, i.e. $\log n < n^\epsilon$.

To test the convergence of series 1



To test the convergence of the following series:





  1. $\displaystyle \frac{2}{3\cdot4}+\frac{2\cdot4}{3\cdot5\cdot6}+\frac{2\cdot4\cdot6}{3\cdot5\cdot7\cdot8}+...\infty $


  2. $\displaystyle 1+ \frac{1^2\cdot2^2}{1\cdot3\cdot5}+\frac{1^2\cdot2^2\cdot3^2}{1\cdot3\cdot5\cdot7\cdot9}+ ...\infty $


  3. $\displaystyle \frac{4}{18}+\frac{4\cdot12}{18\cdot27}+\frac{4\cdot12\cdot20}{18\cdot27\cdot36} ...\infty $




I cannot figure out the general term $u_n$ for these series (before I do any comparison/ratio test).



Any hints for these?


Answer





I cannot figure out the general term $u_n$ for these series (before I do any comparison/ratio test).




For the first series, one can start from the fact that, for every $n\geqslant1$, $$u_n=\frac{2\cdot4\cdots (2n)}{3\cdot5\cdots(2n+1)}\cdot\frac1{2n+2}=\frac{(2\cdot4\cdots (2n))^2}{2\cdot3\cdot4\cdot5\cdots(2n)\cdot(2n+1)}\cdot\frac1{2n+2},$$ that is, $$u_n=\frac{(2^n\,n!)^2}{(2n+1)!}\cdot\frac1{2n+2}=\frac{4^n\,(n!)^2}{(2n+2)!}.$$ Similar approaches yield the two other cases.
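A quick check of this closed form against the first three terms as written (a sketch assuming SymPy):

```python
from sympy import Rational, factorial

u = lambda n: Rational(4**n * factorial(n)**2, factorial(2*n + 2))
# the first three terms of the series, written out directly
terms = [Rational(2, 3*4), Rational(2*4, 3*5*6), Rational(2*4*6, 3*5*7*8)]
print([u(n) for n in (1, 2, 3)] == terms)  # True
```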


Friday, May 24, 2019

integration - Integrating a cost function over a normal distribution




Let's say you have a cost function $C(x)$ and you want to understand the expected cost if the input follows the normal distribution



$$X \sim \mathcal{N}(\mu,\sigma ^2)\\ $$



If I want to find my expected cost, I would need to integrate over the normal distribution



$$ E(C(x)) = \frac{1}{\sigma \sqrt{2 \pi }}\int_{-\infty}^\infty C(x)e^{-\frac{(x-\mu)^2}{2\sigma^2}}dx $$



I have found ways to approximate the integral of the normal distribution (e.g. taylor series), but nothing on how to integrate another function over it.




I can numerically estimate the integral given an arbitrary cost function, but I was wondering if there is any way to simplify it analytically and see how the expected cost changes as a function of $\mu$ and $\sigma$ and my cost function?



If an arbitrary cost function is too hard, would it be possible to solve a simpler case such as $C(x) = x^2$? Even in this case I'm struggling to simplify it, although it just might not be possible.


Answer



HINT:



If $C(x)=x^2$, then we can find a closed form for its expected value. Recall that



$$I(a)=\int_{-\infty}^{\infty}e^{-ax^2}dx=\sqrt{\frac{\pi}{a}}$$




and



$$-I'(a)=\int_{-\infty}^{\infty}x^2e^{-ax^2}dx=\frac{\pi^{1/2}}{2a^{3/2}}$$



Then, change variables from $x$ to $u=(x-\mu)/(\sqrt{2}\,\sigma)$ and expand the quadratic inside the integral.



Can you finish from here?
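Following the hint through for $C(x)=x^2$ gives $E[C(X)]=\mu^2+\sigma^2$; a numeric sanity check (assuming SciPy, with arbitrary $\mu,\sigma$):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 1.5, 0.7
pdf = lambda t: np.exp(-(t - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
val, _ = quad(lambda t: t**2 * pdf(t), -np.inf, np.inf)
print(val, mu**2 + sigma**2)  # both 2.74
```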


elementary number theory - Finding integer multipliers for linear combination's value $= 0$, using Extended Euclidean Algorithm


I have till now considered EEA as a way to find the linear combination's integer multipliers value to find $\gcd$, as follows:


Given two integers, not both zero, first find the $\gcd$, and then take the remainder at each step, and express the others in terms of that (by substituting), starting from the penultimate step in reverse. The last step has $r_{n+1}=0$.


$$ \begin{align} a = bq_1 + r_1 \iff & \ r_1 = a - bq_1 \\ b= r_1q_2 + r_2 \iff & \ r_2 = b - r_1q_2 \\ r_1 = r_2q_3 + r_3 \iff & \ r_3 = r_1-r_2q_3 \\ &... \\ r_{n-2} = r_{n-1}q_{n} + r_{n} \iff & \ r_n = r_{n-2}-r_{n-1}q_{n} \\ r_{n-1} = r_{n}q_{n+1} + r_{n+1} & \ \\ \end{align} $$


It is easy to calculate, using the EEA, the integer multipliers of the dividend ($a$) and divisor ($b$) for the value $\gcd = r_n$. But I cannot see how to get integer multipliers when the value of the linear combination is $0$, as in Problem $2$ shown here.


Also, I want to know the logic behind finding integer multipliers for a specific value of the linear combination when that value is a multiple of the $\gcd$, including $0$, since any integer divides $0$.


Answer




The way you pose the question is slightly unclear to me, but if I am reading it correctly, I think you want to continue the Euclidean Algorithm one step further (instead of stopping at the $\gcd$, stop at $0$) in order to arrive at a last line that looks like $$r_{n-1} = r_{n}q_{n+1} + 0 \iff 0 = r_{n-1}-r_{n}q_{n+1} $$
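In code form, a minimal sketch (standard library only): beyond the Bezout coefficients for the gcd, the pair $(b/g,\,-a/g)$ always gives a nontrivial combination equal to $0$.

```python
from math import gcd

def zero_combination(a, b):
    g = gcd(a, b)
    return b // g, -(a // g)   # (x, y) with a*x + b*y == 0

a, b = 252, 198
x, y = zero_combination(a, b)
print(a * x + b * y)  # 0
```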


real analysis - Show that $\sqrt{p}$ is irrational if $p$ is prime




Show that $ \sqrt{p}$ is irrational if $p$ is prime.




I did the proof before that $ \sqrt{2}$ is irrational by contradiction:




Let's assume that $\sqrt{2}$ is rational, therefore given $m$ and $n$ sharing no common factor:
$$\sqrt{2} = \frac{m}{n}$$
$$2=\frac{m^2}{n^2}$$
$$2n^2=m^2$$



As $2 \mid 2n^2$ , it follows that $2 \mid m^2$, and that $2\mid m$.
Therefore $\exists k \in Z$ s.t. $m=2k$



Using back
$2n^2=m^2$ and substituting $m=2k$ in it, we have:

$$2n^2=(2k)^2$$
$$2n^2=4k^2$$
$$n^2=2k^2$$



Similarly, as $2 \mid 2k^2$, it follows that $2 \mid n^2$, and that $2 \mid n$.
Therefore, $\exists j \in Z$ s.t. $n=2j$



Finally, if $m=2k$ and $n=2j$ $n,j \in Z$, $m$ and $n$ share a common factor $2$. We have a contradiction with the assumption.



It follows that $\sqrt{2}$ cannot be rational but irrational.




This is where I am so far with my understanding. What would the best approach be with the original question?



What would the approach be here? I was thinking of stating that if $p$ were composite then there would exist $x,y,m,n \in \mathbb Z$ such that $p=xy$, and then
$$\sqrt{xy}= \frac{m}{n},$$
but I don't see how I could move forward in this direction.



Much appreciated


Answer



Why not just try the same thing? Assume $\sqrt{p} = \frac{m}{n}$ for coprime $m,n$. Then $p = m^2/n^2 \implies m^2 = pn^2$. Hence $p \mid m^2 \implies p \mid m$ by Euclid's Lemma. Writing $m=pk$ gives $p^2k^2=pn^2$, i.e. $n^2=pk^2$, so $p \mid n^2 \implies p \mid n$. This contradicts $\gcd(m,n)=1$.



elementary number theory - Modular Inverses



I'm doing a question that states to find the inverse of $19 \pmod {141}$.



So far this is what I have:



Since $\gcd(19,141) = 1$, an inverse exists to we can use the Euclidean algorithm to solve for it.



$$\begin{align*}
141 &= 19\cdot 7 + 8\\
19 &= 8\cdot 2 + 3\\
8 &= 3\cdot 2 + 2\\
3 &= 2\cdot 1 + 1\\
2 &= 1\cdot 2 + 0
\end{align*}$$
The textbook says that the answer is 52 but I have no idea how they got the answer and am not sure if I'm on the right track. An explanation would be appreciated! Thanks!


Answer



You're on the right track; as mixedmath says, you now have to "backtrack" your steps like this:
$$\begin{eqnarray}1\;\;\;\;&&=3-2\cdot 1\\
1=&3-(8-3\cdot 2)\cdot 1&=3\cdot 3-8\\
1=&(19-8\cdot 2)\cdot 3-8&=19\cdot 3-8\cdot 7\\

1=&19\cdot 3-(141-19\cdot 7)\cdot 7&=19\cdot52-141\cdot 7\end{eqnarray}$$
This last equation, when taken modulo 141, gives $1\equiv 19\cdot 52\bmod 141$, demonstrating that 52 is the inverse of 19 modulo 141.
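For completeness, a minimal iterative version of the same back-substitution (a sketch; in Python 3.8+ the built-in `pow(19, -1, 141)` gives the answer directly):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = ext_gcd(19, 141)
print(g, x % 141)  # 1 52
```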


Thursday, May 23, 2019

elementary number theory - Do we need to apply the Euclidean Algorithm before applying the Extended Euclidean Algorithm?



As the title says, do we need to apply the Euclidean Algorithm before applying the Extended Euclidean Algorithm?



For example, we have $\gcd(24,17)$, so we can find $x,y$ such that $24x+17y=1$. Applying the Euclidean algorithm:



$$\gcd(24,17)=\gcd(7,17)=\gcd(7,3)=\gcd(1,3)=1$$



Applying the Extended Euclidean algorithm:




\begin{align*} 1 &= 7-2\cdot3 \\ &= 7-2\cdot(17-2\cdot7) \\ &= 5\cdot7-2\cdot17 \\ &=5\cdot(24-17)-2\cdot17 \\ &=5\cdot24-7\cdot17 \end{align*}



Is there a way to do this without first applying the Euclidean algorithm?


Answer



As the name suggests, the extended Euclidean algorithm is an extension of the classical algorithm, which means the normal Euclidean algorithm runs as part of the extended one.
Normal: provides the gcd.
Extended: provides the gcd plus two numbers $x, y$ such that $\gcd = nx + my$,
where $n, m$ are the input numbers.



So to answer your original question: yes, you don't have to compute the classic Euclidean algorithm separately, since the extended version produces the gcd as it goes.




Look at this simple example from Wikipedia.



I hope this is useful for you.


real analysis - A continuous function $f: [0,1] \rightarrow \mathbb{R}$ with $f(0) = f(1)$

I came across this question in Stephen Abbott's "Understanding Analysis" 2nd edition.




Let $f: [0,1] \rightarrow \mathbb{R}$ be continuous with $f(0) = f(1)$.



a) Show that there must exist $x,y \in [0,1]$ satisfying $|x-y| = 1/2$ and $f(x)=f(y)$.



b) Show that for each $n \in \mathbb{N}$ there exist $x_n , y_n \in [0,1]$ with $|x_n-y_n|=1/n$ and $f(x_n)=f(y_n)$.



For part a), I thought that I would start by splitting the interval $[0,1]$ into two halves and asserting that if $f(1/2)=c\neq 0$ then there must be some $x \in [0, 1/2]$ and $y \in [1/2,1]$ such that $f(x)=f(y)$ (by the Intermediate Value Theorem) but that did not lead me anywhere.

real analysis - Show that $\{x_n\}$ is convergent and monotone



Question: For $c>0$, consider the quadratic equation
$$
x^2-x-c=0,\qquad x>0.

$$
Define the sequence $\{x_n\}$ recursively by fixing $x_1>0$ and then, if $n$ is an index for which $x_n$ has been defined, defining
$$
x_{n+1}=\sqrt{c+x_n}.
$$
Prove that the sequence $\{x_n\}$ converges monotonically to the solution of the above equation.



My incomplete solution: Generally speaking, the sequence $\{x_{n+1}\}$ is a subsequence of $\{x_n\}$. Hence, if $\lim_{n \to \infty} x_n = x_s$, then $\lim_{n \to \infty} x_{n+1} = x_s$ as well. So, from the sum and product properties of convergent sequences,
(finally) it follows that $x_s=\sqrt{c+x_s}$, which is equivalent to the mentioned quadratic equation for $x>0$. To show that $\{x_n\}$ is monotone, it is enough to show that it is bounded, since $\{x_n\}$ is convergent. But I don't know how to show that $\{x_n\}$ is bounded.




Thank you in advance for a clear guidance/solution.



EDIT: (by considering the first two comments to this question) The question is: show that $\{x_n\}$ is (1) convergent and (2) monotone.


Answer



Let $f(x)=\sqrt{c+x}$, $x>0$. There is a unique fixed point $x^*$ such that $f(x^*)=x^*$. $x^*$ is in fact the positive solution of $x^2-x-c=0$. The sequence $x_n$ is defined by $x_{n+1}=f(x_n)$.



If $0<x<x^*$, then $x<f(x)<x^*$. From this it is easy to show that if $x_1<x^*$ then $x_n$ is increasing and bounded above by $x^*$. This implies that $x_n$ converges, and the limit is $x^*$.

If $x^*<x$, then $x^*<f(x)<x$. From this it is easy to show that if $x_1>x^*$ then $x_n$ is decreasing and bounded below by $x^*$. This implies that $x_n$ converges and the limit is $x^*$.




If $x_1=x^*$, then $x_n=x^*$ for all $n$.
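A quick numeric illustration of the two monotone regimes (standard library only):

```python
import math

c = 2.0
root = (1 + math.sqrt(1 + 4 * c)) / 2   # positive solution x* (= 2 for c = 2)
for x in (0.1, 10.0):                   # start below and above x*
    for _ in range(30):
        x = math.sqrt(c + x)            # increasing in one case, decreasing in the other
    print(x, root)
```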


sequences and series - Why $\sum_{k=0}^{\infty} q^k = \frac{1}{1-q}$ when $|q| < 1$


Why is the infinite sum $ \sum_{k=0}^{\infty} q^k = \frac{1}{1-q}$ when $|q| < 1$?


I don't understand how the $\frac{1}{1-q}$ got calculated. I am not a math expert so I am looking for an easy to understand explanation.


Answer




By definition you have $$ \sum_{k=0}^{+\infty}q^k=\lim_{n\to+\infty}\underbrace{\sum_{k=0}^{n}q^k}_{=:S_n} $$ Notice now that $(1-q)S_n=(1-q)(1+q+q^2+\dots+q^n)=1-q^{n+1}$; so dividing both sides by $1-q$ (in order to do this, you must be careful only to have $1-q\neq0$, i.e. $q\neq1$) we immediately get $$ S_n=\frac{1-q^{n+1}}{1-q}. $$ If you now pass to the limit in the above expression, when $|q|<1$, it's clear that $$ S_n\stackrel{n\to+\infty}{\longrightarrow}\frac1{1-q}\;\;, $$ as requested. To get this last result, you should be confident with limits, and know that $\lim_{n\to+\infty}q^n=0$ when $|q|<1$.
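A two-line numeric check of the partial-sum formula (arbitrary $q$ and $n$):

```python
q, n = 0.9, 200
partial = sum(q**k for k in range(n + 1))
print(partial, (1 - q**(n + 1)) / (1 - q), 1 / (1 - q))  # all ~ 10.0
```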


combinatorics - Proving an identity involving binomial coefficients and fractions

I've been trying to prove the following formula (for $n > 1$ natural, $a, b$ non-zero reals), but I don't know where to start.




$$\sum_{j=1}^n \binom{n-1}{j-1} \left( \frac{a-j+1}{b-n+1} \right) \left( \frac{a}{b} \right)^{j-1} \left( \frac{b-a}{b} \right)^{n-j} = \frac{a}{b}$$



Wolfram says it's right, but so far I've been unable to give a proof. I wonder if there's a combinatorial proof to it. Any help, hint or reference will be much appreciated.
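In lieu of a proof, here is an exact-arithmetic spot check (a sketch assuming SymPy; the values of $n, a, b$ are arbitrary):

```python
from sympy import Rational, binomial

def lhs(n, a, b):
    return sum(binomial(n - 1, j - 1) * Rational(a - j + 1, b - n + 1)
               * Rational(a, b)**(j - 1) * Rational(b - a, b)**(n - j)
               for j in range(1, n + 1))

print(lhs(4, 7, 11), Rational(7, 11))  # equal
```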

Wednesday, May 22, 2019

number theory - How to prove that the numerator of $1+\frac{1}{2}+\ldots+\frac{1}{100}$ is divisible by $101$




How to prove that: if $1+\frac{1}{2}+\ldots+\frac{1}{100} = \frac{p}{q}$ with
$\gcd(p,q) = 1$, then $101 \mid p$.




I tried to manipulate the numerator, and I tried to compute the sum by hand, but neither led to success. I will be happy with any help.


Answer



For $1\leq n\leq 100$ let $1\leq r(n)\leq 100$ where $\frac {100!}{n}\equiv r(n)\pmod {101}.$




We have that $1\leq n<n'\leq 100$ implies $r(n)\neq r(n')$, so $n\mapsto r(n)$ is a bijection of $\{1,\dots,100\}$. Hence $$\sum_{n=1}^{100}\frac{100!}{n}\equiv\sum_{n=1}^{100}r(n)=\frac{100\cdot 101}{2}\equiv 0\pmod{101},$$ and since $101\nmid 100!$, the numerator $p$ of the reduced fraction must be divisible by $101$.

Another way is to consider this in the field $F=\Bbb Z_{101}.$ Let $S=\sum_{n=1}^{100}1/n.$ Now $F$ does not have characteristic $2$, so in $F$ we have $S=\sum_{x\in F\backslash \{0\}}(x^{-1})=\sum_{y\in F\backslash \{0\}}(y)=0.$ The implication is that in $\Bbb Z$ we have $S=A/100!$ for some $A\in \Bbb Z,$ and if $101$ does not divide $A$ then in $\Bbb Z_{101}$ we would have $S\ne 0.$
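Both arguments can be confirmed by a direct exact computation (standard library only):

```python
from fractions import Fraction

H = sum(Fraction(1, n) for n in range(1, 101))
print(H.numerator % 101)  # 0
```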


combinatorics - How to simplify this triple summation containing binomial coefficients?


$$
\large\sum_{i=0}^{n} \sum_{j=i}^{n} \sum_{k=j}^{n}
\binom{i+m-1}{m-1}\binom{j+m-1}{m-1}\binom{k+m-1}{m-1}
$$




How can this be simplified or evaluated when it involves more than a thousand terms?
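While waiting for a closed form, a brute-force evaluator (a sketch assuming SymPy) is useful for testing candidate simplifications on small $n, m$:

```python
from sympy import binomial

def triple_sum(n, m):
    return sum(binomial(i + m - 1, m - 1) * binomial(j + m - 1, m - 1)
               * binomial(k + m - 1, m - 1)
               for i in range(n + 1)
               for j in range(i, n + 1)
               for k in range(j, n + 1))

print(triple_sum(5, 2))
```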

sequences and series - Complex Analysis Solution to the Basel Problem ($\sum_{k=1}^\infty \frac{1}{k^2}$)




Most of us are aware of the famous "Basel Problem":




$$\sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}$$



I remember reading an elegant proof for this using complex numbers to help find the value of the sum. I tried finding it again to no avail. Does anyone know a complex number proof for the solution of the Basel Problem?


Answer



The most straightforward way I know is to consider the contour integral
$$
\frac{1}{2\pi i}\oint\pi\cot(\pi z)\frac{1}{z^2}\mathrm{d}z\tag{1}
$$
around circles whose radii are $\frac12$ off an integer.




The function $\pi\cot(\pi z)$ has residue $1$ at every integer. Thus the integral in $(1)$ equals the residue of $\pi\cot(\pi z)\dfrac{1}{z^2}$ at $z=0$ plus twice the sum in question (one for the positive integers and one for the negative integers).



The integral in $(1)$ tends to $\color{blue}{0}$ as the radius goes to $\infty$.



The Laurent expansion of $\pi\cot(\pi z)\dfrac{1}{z^2}$ at $z=0$ is
$$
\frac{1}{z^3}-\frac{\pi^2}{3z}-\frac{\pi^4z}{45}-\frac{2\pi^6z^3}{945}-\dots\tag{2}
$$
The only term that contributes to the residue at $z=0$ is the $\dfrac1z$ term. That is, the residue at $z=0$ of $(2)$ is $\color{green}{-\frac{\pi^2}{3}}$. Thus, the sum in question must be $\color{red}{\frac{\pi^2}{6}}$ (so that $\color{green}{-\frac{\pi^2}{3}}+2\cdot\color{red}{\frac{\pi^2}{6}}=\color{blue}{0}$).
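A one-line numeric confirmation (assuming mpmath):

```python
from mpmath import mp, nsum, pi, inf

mp.dps = 25
print(nsum(lambda k: 1 / k**2, [1, inf]), pi**2 / 6)  # agree to 25 digits
```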


linear algebra - Determinant of complex matrix with almost constant lines



Let $0\neq c\in\mathbb{C}$. Take the matrix
$$A_C=\begin{pmatrix}
n & c & c & \dots & c \\
c & n & c & \dots & c \\
c & c & n & \dots & c \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c & c & c & \dots & n
\end{pmatrix}\in \mathbb{C}^{n \times n}.$$



I want to show that $A_C$ is invertible. It seems like one can compute the determinant explicitly, but I made a mess of it when I tried.



How can I prove $\det A_C\neq 0$?



EDIT: We assume $c\neq n$.



Answer



This is a circulant matrix and as such has normalized eigenvectors
$$v_j=\frac{1}{\sqrt{n}}(1,\omega_j,\omega_j^2,\ldots,\omega_j^{n-1})$$
where
$$\omega_j=\exp\left(\frac{2\pi i j}{n}\right),\qquad j=0,1,\ldots,n-1.$$
The eigenvalues are
$$\lambda_j=n+c\omega_j+c\omega_j^2+\cdots+c\omega_j^{n-1}$$
Taking the product of the eigenvalues gives the determinant. For $j\neq 0$ the $\omega_j$ are non-trivial roots of unity, so
$$\sum_{k=1}^{n-1}\omega_j^k=-1,$$
which gives $\lambda_j=n-c$; for $j=0$ we have $\omega_0=1$ and $\lambda_0=n+(n-1)c$. Hence $\det A_C=(n+(n-1)c)(n-c)^{n-1}$. Since we have assumed that $n\neq c$, the factors $n-c$ are non-zero; provided also $c\neq -\frac{n}{n-1}$, no eigenvalue is equal to zero and the determinant is non-zero.
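A numeric check of the eigenvalue computation (assuming NumPy/SciPy; note how the determinant also carries the factor $\lambda_0=n+(n-1)c$ from the eigenvector $(1,\dots,1)$):

```python
import numpy as np
from scipy.linalg import circulant

n, c = 5, 2.0 + 1.0j
A = circulant([n] + [c] * (n - 1))   # n on the diagonal, c elsewhere
print(np.linalg.det(A), (n + (n - 1) * c) * (n - c)**(n - 1))  # agree
```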


Tuesday, May 21, 2019

real analysis - Variant of Cauchy Functional Equation



Consider the equation $$f(kx-f(x))=x=kf(x)-f(f(x))$$ for monotonic $f$.


What can we say about the solutions to this equation? Comparing with the Cauchy equation $f(x+y)=f(x)+f(y)$, I think the solution must be somewhat close to being linear. Any hints? Thanks in advance.


Answer



$f(kx-f(x))=x \tag{1} $
$kf(x)-f(f(x)) =x \tag{2}$



From $(1) \implies $ $f(x) $ is surjective.




$(2) \implies $ Let $f(a)=f(b)\implies a=kf(a)-f(f(a))=kf(b)-f(f(b)) = b \implies $



$f(x)$ is injective.




This means $f(x)$ is bijective. So $f^{-1}(x)$ exists.



$(1)\implies f(kf^{-1}(x)-f(f^{-1}(x)))=f^{-1}(x) \implies f^{-1}(f(kf^{-1}(x)-x))=f^{-1}(f^{-1}(x)) \implies x =kf^{-1}(x) -f^{-1}(f^{-1}(x)) $
$(2) \implies kx-f^{-1}(x) = f(x) \implies f^{-1}( kx-f^{-1}(x)) = x $ $\implies$



If $f(x)$ is a solution to $(1)$ and $(2)$ then so is $f^{-1}(x)=kx-f(x)$



Let : $ g(x)=-f(-x) $.
$(1)\implies -f(-kx-f(-x))=x \implies g(kx+f(-x))=x \implies g(kx-g(x))=x $
$(2)\implies -kf(-x)+f(f(-x)) =x \implies kg(x)+f(-g(x))=x\implies kg(x)-g(g(x))=x $



$\implies $ If $f(x)$ is a solution to $(1)$ and $(2)$ then so is $ g(x):=-f(-x)$.





Fixed points :
From $ (2) $ we see that if there exists an $ a $ for which $f(a)=a $ then: $ kf(a)-f(f(a)) =a \implies ka =2a \implies a=0 \lor k=2 $
$f(ka-f(a))=a \implies ka-f(a) =a \implies ka =2a \implies a=0 \lor k=2 $

Also we see that if there exists an $ a $ for which $f(a)=-a $ then: $ kf(a)-f(f(a)) =a \implies -ka =2a \implies a=0 \lor k=-2 $
$f(ka-f(a))=a \implies -ka-a =a \implies -ka =2a \implies a=0 \lor k=-2 $



From the above we see that when $k=2$ , then $f(x)=x$ is a solution. And when $k=-2$ , then $f(x)=-x$ is a solution.



General solution :


(Note: I screwed up a couple of times before here. Apologies to everyone who read it, if anyone did..I do think this must be the correct argument.)

We transform $f(x)$ into a new function $g(x)$ like this : $g(x)+\frac{kx}{2}=f(x)$
Which gives :
$ g(\frac{kx}{2}-g(x))-\frac{kg(x)}{2} =(1-\frac{k^2}{4})x \tag{1a}$
$-g(\frac{kx}{2}+g(x))+\frac{kg(x)}{2} =(1-\frac{k^2}{4})x \tag{2a}$
There are two cases to consider :


Case 1 when $k^2 \neq 4 $ :

Let $g(a)=0 \implies (1a) \implies g(\frac{ka}{2}) =(1-\frac{k^2}{4})a $
$(2a) \implies -g(\frac{ka}{2} ) =(1-\frac{k^2}{4})a $



$ \implies a=0 \implies g(0)=f(0)=0 \enspace ( k^2 \neq 4 ) $




From bijectivity and monotonicity I think we can conclude that $f(x)$ and $g(x)$ must be continuous, (and maybe even differentiable).
Here I'll assume $g(x)$ can be written as a Taylor series around $x=0$ :
We calculate the first and second derivative :


$ g'(\frac{kx}{2}-g(x))(\frac{k}{2}-g'(x))-\frac{kg'(x)}{2} =(1-\frac{k^2}{4}) $
$-g'(\frac{kx}{2}+g(x))(\frac{k}{2}+g'(x))+\frac{kg'(x)}{2} =(1-\frac{k^2}{4}) $


$ g'(0)(\frac{k}{2}-g'(0))-\frac{kg'(0)}{2} =(1-\frac{k^2}{4}) $
$-g'(0)(\frac{k}{2}+g'(0))+\frac{kg'(0)}{2} =(1-\frac{k^2}{4}) $



$ g'(0) = \pm \sqrt{\frac{k^2}{4}-1} $




$ g''(\frac{kx}{2}-g(x))(\frac{k}{2}-g'(x))^2+g'(\frac{kx}{2}-g(x))(-g''(x))-\frac{kg''(x)}{2} =0 $
$-g''(\frac{kx}{2}+g(x))(\frac{k}{2}+g'(x))^2 -g'(\frac{kx}{2}+g(x))(g''(x))+\frac{kg''(x)}{2} =0 $

$ g''(0)(\frac{k}{2}-g'(0))^2+g'(0)(-g''(0))-\frac{kg''(0)}{2} =0 $
$-g''(0)(\frac{k}{2}+g'(0))^2 -g'(0)(g''(0))+\frac{kg''(0)}{2} =0 $

$ g'(0)g''(0)( k + 1) =0 \implies g''(0)=0 \enspace \lor \enspace k=-1 $
$-g''(0)(\frac{k^2}{4}+(k+1)g'(0)+g'(0)^2 -\frac{k}{2}) =0 \implies \text{(with $k=-1$ and $g'(0)^2=\frac{k^2}{4}-1$ )} \implies -g''(0)(\frac{1}{4} +\frac{1}{4}-1 -\frac{1}{2}) =0 \implies g''(0)=0 $



$\implies g''(0) =0 $ . Also the higher derivatives in $x=0$ are $0$ (I think).






So for $k^2 \neq 4 $ we have : $g(x)=\pm \sqrt{\frac{k^2}{4}-1} \cdot x$. Or : $f(x)=( \frac{k}{2} \pm \sqrt{\frac{k^2}{4}-1} ) \cdot x $



Fill this in in the original equations to check :


$g(x)= \sqrt{\frac{k^2}{4}-1} \cdot x$
$ \sqrt{\frac{k^2}{4}-1} (\frac{k}{2}-\sqrt{\frac{k^2}{4}-1} )x-\frac{k}{2}\sqrt{\frac{k^2}{4}-1} x =(1-\frac{k^2}{4})x $
$-\sqrt{\frac{k^2}{4}-1} (\frac{k}{2}+\sqrt{\frac{k^2}{4}-1} )x+\frac{k}{2}\sqrt{\frac{k^2}{4}-1} x =(1-\frac{k^2}{4})x $


$g(x)= -\sqrt{\frac{k^2}{4}-1} \cdot x$
$ -\sqrt{\frac{k^2}{4}-1}(\frac{k}{2}+\sqrt{\frac{k^2}{4}-1} )x+\frac{k}{2}\sqrt{\frac{k^2}{4}-1} x =(1-\frac{k^2}{4})x $
$+\sqrt{\frac{k^2}{4}-1} (\frac{k}{2}-\sqrt{\frac{k^2}{4}-1} )x-\frac{k}{2}\sqrt{\frac{k^2}{4}-1} x =(1-\frac{k^2}{4})x $

$\square$
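A symbolic spot check of this Case 1 family (a sketch assuming SymPy; the minus-sign root works the same way):

```python
from sympy import symbols, sqrt, simplify

x, k = symbols('x k', positive=True)
r = k/2 + sqrt(k**2/4 - 1)              # slope of the claimed solution
f = lambda t: r * t
print(simplify(f(k*x - f(x)) - x))      # 0
print(simplify(k*f(x) - f(f(x)) - x))   # 0
```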


Case 2 when $k^2 = 4 $ :


$ g(\frac{kx}{2}-g(x))-\frac{kg(x)}{2} =0 \tag{1b}$
$-g(\frac{kx}{2}+g(x))+\frac{kg(x)}{2} =0 \tag{2b}$


We see that $g(x)$ must be constant (see here for example : Functions $f$ satisfying $ f\circ f(x)=2f(x)-x,\forall x\in\mathbb{R}$.) and fill in as a general solution :




For $k = 2 $ we have : $g(x)= c $. Or : $f(x)= c + x $.





For $k = -2 $ we have : $g(x)= 0 $. Or : $f(x)= - x $.



$\square$


Below some solutions to modified versions :

Below some things that can be proved about the more general version of $(1)$ and $(2)$ : equation $(3)$ :

More general case without '$=x=$' in between :

$$ f(kx-f(x))=kf(x)-f(f(x)) \tag{3} $$




$f(x)=0$ and $f(x)=kx$ and $f(x)=\frac{kx}{2}$ are all solutions to $(3)$ .



$f(x)=0$ is trivial.
Let: $f(x)=kx \implies \\ f(kx-f(x))=kf(x)-f(f(x)) \implies f(kx-kx)=k^2x-f(kx) \implies 0=0 $
Let: $f(x)=\frac{kx}{2} \implies f(kx-\frac{kx}{2})=k\frac{kx}{2}-f(\frac{kx}{2}) \implies \frac{k^2x}{2}= \frac{k^2x}{2} \enspace \square $



There are many more solutions, including the ones above for $(1)$ and $(2)$.


Below a solution to a modified version of $(1)$ and $(2)$ : equation $(4)$ and $(5)$:

Modified case with '$=\frac{k^2x}{4}=$' in between :


$f(kx-f(x))=\frac{k^2x}{4} \tag{4} $
$kf(x)-f(f(x)) =\frac{k^2x}{4} \tag{5}$



$f(x)=\frac{kx}{2}$ is a solution to $(4)$ and $(5)$.




Let: $f(x)=\frac{kx}{2} \implies f(kx-\frac{kx}{2})=\frac{k^2x}{4} \implies \frac{k^2x}{4}=\frac{k^2x}{4} $ and :
$ k\frac{kx}{2}-f(\frac{kx}{2}) =\frac{k^2x}{4} \implies \frac{k^2x}{2}- \frac{k^2x}{4} =\frac{k^2x}{4} $ $ \enspace \square $



calculus - Finding $\lim_{t\to 0}\frac{|t-2|}{t}$ and $\lim_{t\to \infty}\frac{|t-2|}{t}$


Find $$\lim_{t\to 0}\frac{|t-2|}{t}$$ and $$\lim_{t\to\infty}\frac{|t-2|}{t}$$


Usually I would simply the top and bottom but I'm not sure what to do for absolute values.


Any help would be appreciated. Thanks!


Answer



HINT : Note that $$|t-2|=\begin{cases}t-2&\text{if $t-2\ge 0$}\\-(t-2)&\text{if $t-2\lt 0$}\end{cases}$$ Hence, we have $$\lim_{t\to 0}\frac{|t-2|}{t}=\lim_{t\to 0}\frac{-(t-2)}{t}$$ and


$$\lim_{t\to \infty}\frac{|t-2|}{t}=\lim_{t\to\infty}\frac{t-2}{t}.$$


Monday, May 20, 2019

calculus - A limit problem $\lim\limits_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$



This is a problem from "A Course of Pure Mathematics" by G H Hardy. Find the limit $$\lim_{x \to 0}\frac{x\sin(\sin x) - \sin^{2}x}{x^{6}}$$ I had solved it long back (solution presented in my blog here) but I had to use the L'Hospital's Rule (another alternative is Taylor's series). This problem is given in an introductory chapter on limits and the concept of Taylor series or L'Hospital's rule is provided in a later chapter in the same book. So I am damn sure that there is a mechanism to evaluate this limit by simpler methods involving basic algebraic and trigonometric manipulations and use of limit $$\lim_{x \to 0}\frac{\sin x}{x} = 1$$ but I have not been able to find such a solution till now. If someone has any ideas in this direction please help me out.




PS: The answer is $1/18$ and can be easily verified by a calculator by putting $x = 0.01$
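(For the record, a higher-precision check of the claimed value — a sketch assuming mpmath; plain double precision suffers catastrophic cancellation near $x=0$:)

```python
from mpmath import mp, sin, mpf

mp.dps = 40
for x in (mpf('0.1'), mpf('0.01'), mpf('0.001')):
    print(x, (x * sin(sin(x)) - sin(x)**2) / x**6)  # -> 1/18 = 0.0555...
```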


Answer



Preliminary Results:



We will use
$$
\begin{align}
\frac{\color{#C00000}{\sin(2x)-2\sin(x)}}{\color{#00A000}{\tan(2x)-2\tan(x)}}
&=\underbrace{\color{#C00000}{2\sin(x)(\cos(x)-1)}\vphantom{\frac{\tan^2(x)}{\tan^2(x)}}}\underbrace{\frac{\color{#00A000}{1-\tan^2(x)}}{\color{#00A000}{2\tan^3(x)}}}\\
&=\hphantom{\sin}\frac{-2\sin^3(x)}{\cos(x)+1}\hphantom{\sin}\frac{\cos(x)\cos(2x)}{2\sin^3(x)}\\

&=-\frac{\cos(x)\cos(2x)}{\cos(x)+1}\tag{1}
\end{align}
$$
Therefore,
$$
\lim_{x\to0}\frac{\sin(x)-2\sin(x/2)}{\tan(x)-2\tan(x/2)}=-\frac12\tag{2}
$$
Thus, given an $\epsilon\gt0$, we can find a $\delta\gt0$ so that if $|x|\le\delta$
$$
\left|\,\frac{\sin(x)-2\sin(x/2)}{\tan(x)-2\tan(x/2)}+\frac12\,\right|\le\epsilon\tag{3}

$$
Because $\,\displaystyle\lim_{x\to0}\frac{\sin(x)}{x}=\lim_{x\to0}\frac{\tan(x)}{x}=1$, we have
$$
\sin(x)-x=\sum_{k=0}^\infty2^k\sin(x/2^k)-2^{k+1}\sin(x/2^{k+1})\tag{4}
$$
and
$$
\tan(x)-x=\sum_{k=0}^\infty2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1})\tag{5}
$$
By $(3)$ each term of $(4)$ is between $-\frac12-\epsilon$ and $-\frac12+\epsilon$ of the corresponding term of $(5)$. Therefore,

$$
\left|\,\frac{\sin(x)-x}{\tan(x)-x}+\frac12\,\right|\le\epsilon\tag{6}
$$
Thus,
$$
\lim_{x\to0}\,\frac{\sin(x)-x}{\tan(x)-x}=-\frac12\tag{7}
$$
Furthermore,
$$
\begin{align}

\frac{\tan(x)-\sin(x)}{x^3}
&=\tan(x)(1-\cos(x))\frac1{x^3}\\
&=\frac{\sin(x)}{\cos(x)}\frac{\sin^2(x)}{1+\cos(x)}\frac1{x^3}\\
&=\frac1{\cos(x)(1+\cos(x))}\left(\frac{\sin(x)}{x}\right)^3\tag{8}
\end{align}
$$
Therefore,
$$
\lim_{x\to0}\frac{\tan(x)-\sin(x)}{x^3}=\frac12\tag{9}
$$

Combining $(7)$ and $(9)$ yield
$$
\lim_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16\tag{10}
$$
Additionally,
$$
\frac{\sin(A)-\sin(B)}{\sin(A-B)}
=\frac{\cos\left(\frac{A+B}{2}\right)}{\cos\left(\frac{A-B}{2}\right)}
=1-\frac{2\sin\left(\frac{A}{2}\right)\sin\left(\frac{B}{2}\right)}{\cos\left(\frac{A-B}{2}\right)}\tag{11}
$$







Finishing Up:
$$
\begin{align}
&x\sin(\sin(x))-\sin^2(x)\\
&=[\color{#C00000}{(x-\sin(x))+\sin(x)}][\color{#00A000}{(\sin(\sin(x))-\sin(x))+\sin(x)}]-\sin^2(x)\\
&=\color{#C00000}{(x-\sin(x))}\color{#00A000}{(\sin(\sin(x))-\sin(x))}\\
&+\color{#C00000}{(x-\sin(x))}\color{#00A000}{\sin(x)}\\

&+\color{#C00000}{\sin(x)}\color{#00A000}{(\sin(\sin(x))-\sin(x))}\\
&=(x-\sin(x))(\sin(\sin(x))-\sin(x))+\sin(x)(x-2\sin(x)+\sin(\sin(x)))\tag{12}
\end{align}
$$
Using $(10)$, we get that
$$
\begin{align}
&\lim_{x\to0}\frac{(x-\sin(x))(\sin(\sin(x))-\sin(x))}{x^6}\\
&=\lim_{x\to0}\frac{x-\sin(x)}{x^3}\lim_{x\to0}\frac{\sin(\sin(x))-\sin(x)}{\sin^3(x)}\lim_{x\to0}\left(\frac{\sin(x)}{x}\right)^3\\
&=\frac16\cdot\frac{-1}6\cdot1\\

&=-\frac1{36}\tag{13}
\end{align}
$$
and with $(10)$ and $(11)$, we have
$$
\begin{align}
&\lim_{x\to0}\frac{\sin(x)(x-2\sin(x)+\sin(\sin(x)))}{x^6}\\
&=\lim_{x\to0}\frac{\sin(x)}{x}\lim_{x\to0}\frac{x-2\sin(x)+\sin(\sin(x))}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-(\sin(x)-\sin(\sin(x))}{x^5}\\
&=\lim_{x\to0}\frac{(x-\sin(x))-\sin(x-\sin(x))\left(1-\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{\cos\left(\frac{x-\sin(x)}{2}\right)}\right)}{x^5}\\

&=\lim_{x\to0}\frac{(x-\sin(x))-\sin(x-\sin(x))+\sin(x-\sin(x))\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{\cos\left(\frac{x-\sin(x)}{2}\right)}}{x^5}\\
&=\lim_{x\to0}\frac{\sin(x-\sin(x))}{x^3}\frac{2\sin\left(\frac{x}{2}\right)\sin\left(\frac{\sin(x)}{2}\right)}{x^2}\\[6pt]
&=\frac16\cdot\frac12\\[6pt]
&=\frac1{12}\tag{14}
\end{align}
$$
Adding $(13)$ and $(14)$ gives
$$
\color{#C00000}{\lim_{x\to0}\frac{x\sin(\sin(x))-\sin^2(x)}{x^6}=\frac1{18}}\tag{15}
$$







Added Explanation for the Derivation of $(6)$



The explanation below works for $x\gt0$ and $x\lt0$. Just reverse the red inequalities.



Assume that $x\color{#C00000}{\gt}0$ and $|x|\lt\pi/2$. Then $\tan(x)-2\tan(x/2)\color{#C00000}{\gt}0$.

$(3)$ is equivalent to
$$

\begin{align}
&(-1/2-\epsilon)(\tan(x)-2\tan(x/2))\\[4pt]
\color{#C00000}{\le}&\sin(x)-2\sin(x/2)\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(\tan(x)-2\tan(x/2))\tag{16}
\end{align}
$$
for all $|x|\lt\delta$. Thus, for $k\ge0$,
$$
\begin{align}
&(-1/2-\epsilon)(2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1}))\\[4pt]

\color{#C00000}{\le}&2^k\sin(x/2^k)-2^{k+1}\sin(x/2^{k+1})\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(2^k\tan(x/2^k)-2^{k+1}\tan(x/2^{k+1}))\tag{17}
\end{align}
$$
Summing $(17)$ from $k=0$ to $\infty$ yields
$$
\begin{align}
&(-1/2-\epsilon)\left(\tan(x)-\lim_{k\to\infty}2^k\tan(x/2^k)\right)\\[4pt]
\color{#C00000}{\le}&\sin(x)-\lim_{k\to\infty}2^k\sin(x/2^k)\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)\left(\tan(x)-\lim_{k\to\infty}2^k\tan(x/2^k)\right)\tag{18}

\end{align}
$$
Since $\lim\limits_{k\to\infty}2^k\tan(x/2^k)=\lim\limits_{k\to\infty}2^k\sin(x/2^k)=x$, $(18)$ says
$$
\begin{align}
&(-1/2-\epsilon)(\tan(x)-x)\\[4pt]
\color{#C00000}{\le}&\sin(x)-x\\[4pt]
\color{#C00000}{\le}&(-1/2+\epsilon)(\tan(x)-x))\tag{19}
\end{align}
$$

which, since $\epsilon$ is arbitrary is equivalent to $(6)$.


calculus - Evaluating $\int_0^\infty dn \, \frac{x^n}{(3n+1)(3n+2)}$


I'm trying to prove a particular series is convergent, and I would like to use the Cauchy integral test for fun, even though it's not the most convenient. I need to evaluate,


$$\int_0^\infty dn \, \frac{x^n}{(3n+1)(3n+2)}$$


for $x\in\mathbb{R}$. Using Mathematica I have found the integral to be,


$$\frac{1}{3x^{2/3}} \left[ x^{1/3}\Gamma(0,-\log(x)/3)-\Gamma(0,-2\log(x)/3)\right]$$


providing $x<1$. So I just need to express the integral in terms of incomplete gamma functions, but I haven't found a substitution. Can someone offer a hint (and not a complete solution)?


Answer



Hint. Recall that the incomplete gamma function may be defined as $$ \Gamma(s,a)=\int_a^{\infty} t^{s-1}e^{-t}dt,\quad a>0, s \in \mathbb{R}. \tag1 $$ Observe that, for $0<x<1$, partial fractions give $$\frac{1}{(3n+1)(3n+2)}=\frac{1}{3n+1}-\frac{1}{3n+2},$$ and, writing $x^n=e^{n\log x}$ with $\log x<0$, a linear substitution of the form $t=-\frac{(3n+1)\log x}{3}$ (and its analogue with $3n+2$) turns each piece into an integral of the form $(1)$.
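A numeric cross-check of the closed form against direct quadrature (a sketch assuming mpmath; $x=1/2$ is an arbitrary test value):

```python
from mpmath import mp, mpf, quad, gammainc, log, inf

mp.dps = 20
x = mpf('0.5')
direct = quad(lambda n: x**n / ((3*n + 1) * (3*n + 2)), [0, inf])
closed = (x**(mpf(1)/3) * gammainc(0, -log(x)/3)
          - gammainc(0, -2*log(x)/3)) / (3 * x**(mpf(2)/3))
print(direct, closed)  # agree
```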

elementary number theory - Using Extended Euclidean Algorithm for $85$ and $45$



Apply the Extended Euclidean Algorithm of back-substitution to find the value of $\gcd(85, 45)$ and to express $\gcd(85, 45)$ in the form $85x + 45y$ for a pair of integers $x$ and $y$.



I have no idea what the difference is between the regular EEA and the back-substitution EEA. The only thing that I have been taught is constructing the EEA table for some values $a, b$. Can anyone help me explain this? Thanks a ton!


Answer




You’re probably intended to do the substitutions explicitly. You have


$$\begin{align*} 85&=1\cdot45+40\\ 45&=1\cdot40+5\\ 40&=8\cdot5\;. \end{align*}$$


Now work backwards:


$$\begin{align*} 5&=1\cdot45-1\cdot40\\ &=1\cdot45-1\cdot(1\cdot85-1\cdot45)\\ &=(-1)\cdot85+2\cdot45\;. \end{align*}$$


The tabular method is just a shortcut for this explicit back-substitution.


combinatorics - Combinatorial identity's algebraic proof without induction.


How would you prove this combinatorial idenetity algebraically without induction?


$$\sum_{k=0}^n { x+k \choose k} = { x+n+1\choose n }$$


Thanks.


Answer




Here is an algebraic approach. In order to do so it's convenient to use the coefficient of operator $[z^k]$ to denote the coefficient of $z^k$ in a series. This way we can write e.g. \begin{align*} \binom{n}{k}=[z^k](1+z)^n \end{align*}



We obtain


\begin{align*} \sum_{k=0}^{n}\binom{x+k}{k}&=\sum_{k=0}^{n}\binom{-x-1}{k}(-1)^k\tag{1}\\ &=\sum_{k=0}^n[z^k](1+z)^{-x-1}(-1)^k \tag{2}\\ &=[z^0]\frac{1}{(1+z)^{x+1}}\sum_{k=0}^n\left(-\frac{1}{ z }\right)^k\tag{3}\\ &=[z^0]\frac{1}{(1+z)^{x+1}}\cdot \frac{1-\left(-\frac{1}{z}\right)^{n+1}}{1-\left(-\frac{1}{z}\right)}\tag{4}\\ &=[z^n]\frac{z^{n+1}+(-1)^n}{(1+z)^{x+2}}\tag{5}\\ &=(-1)^n[z^n]\sum_{k=0}^\infty\binom{-x-2}{k}z^k\tag{6}\\ &=(-1)^n\binom{-x-2}{n}\tag{7}\\ &=\binom{x+n+1}{n}\tag{8} \end{align*} and the claim follows.



Comment:



  • In (1) we use the binomial identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$.





  • In (2) we apply the coefficient of operator.




  • In (3) we do some rearrangements by using the linearity of the coefficient of operator and we also use the rule \begin{align*} [z^{p-q}]A(z)=[z^p]z^{q}A(z) \end{align*}




  • In (4) we apply the formula for the finite geometric series.




  • In (5) we do some simplifications and use again the rule stated in comment (3).





  • In (6) we use the geometric series expansion of $\frac{1}{(1+z)^{x+2}}$. Note that we can ignore the summand $z^{n+1}$ in the numerator since it has no contribution to the coefficient of $z^n$.




  • In (7) we select the coefficient of $z^n$.




  • In (8) we use the rule stated in comment (1) again.
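A quick symbolic verification for small $n$ (a sketch assuming SymPy; `expand_func` turns the binomials into polynomials in $x$):

```python
from sympy import symbols, binomial, expand, expand_func

x = symbols('x')
for n in range(6):
    lhs = sum(expand_func(binomial(x + k, k)) for k in range(n + 1))
    rhs = expand_func(binomial(x + n + 1, n))
    assert expand(lhs - rhs) == 0
print("identity holds for n = 0..5")
```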




Sunday, May 19, 2019

calculus - Identification of a curious function

During computation of some Shapley values (details below), I encountered the following function: $$ f\left(\sum_{k \geq 0} 2^{-p_k}\right) = \sum_{k \geq 0} \frac{1}{(p_k+1)\binom{p_k}{k}}, $$ where $p_0 > 0$ and $p_{k+1} > p_k$ for all $k$. In other words, the input to $f$ is the binary expansion of a real number in the range $[0,1]$, and the $p_k$ correspond to the positions of $1$s in the binary expansion.


For example, $f(2^{-t}) = 1/(t+1)$, so $f(1/2) = 1/2$, $f(1/4) = 1/3$ and so on. More complicated examples are $f(5/8) = f(2^{-1} + 2^{-3}) = 1/2 + 1/(4\cdot 3) = 7/12$ and $$ f(2/3) = f\left(\sum_{k \geq 0}2^{-(2k+1)}\right) = \sum_{k \geq 0} \frac{1}{(2k+2)\binom{2k+1}{k}} = \frac{\pi}{\sqrt{27}}.$$


The function $f$ is a continuous increasing function satisfying $f(0) = 0$, $f(1) = 1$, and $f(1-t) = 1-f(t)$ for $t \in [0,1]$. It has vertical asymptotes at dyadic points.


Here is a plot of $f$ (plot image omitted).



Is the function $f$ known?




Here is where $f$ came from. Let $n \geq 1$ be an integer and let $t \in [0,1]$. For a permutation $\pi$ of the numbers $\{ 2^{-m} : 0 \leq m \leq n-1 \}$ satisfying $\pi^{-1}(1) = i$, we say that $\pi$ is pivotal if $\sum_{j<i}\pi(j) < t$. Let $f_n(t)$ be the probability that a uniformly random permutation is pivotal.

For example, take $n = 4$. The permutation $1/8,1/2,1,1/4$ is pivotal for $t \in (5/8,1]$. For all $n \geq 2$ we have $f_n(1/2) = 1/2$, since $\pi$ is pivotal iff $1$ appears before $1/2$ in $\pi$. The general formula for $f$ is derived in a similar way.


We leave it to the reader to figure out how $f_n$ measures some Shapley value. The functions $f_n$ are step functions with steps of length $1/2^{n-1}$. They are left-continuous, and are equal to $f$ at the breakpoints.
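A small evaluator for $f$ from its defining series (standard library only; the input is given by the positions $p_0<p_1<\cdots$ of the $1$s in the binary expansion):

```python
from fractions import Fraction
from math import comb

def f(positions):
    # f(sum 2^{-p_k}) = sum 1 / ((p_k + 1) * C(p_k, k))
    return sum(Fraction(1, (p + 1) * comb(p, k))
               for k, p in enumerate(positions))

print(f([1]))     # f(1/2) = 1/2
print(f([2]))     # f(1/4) = 1/3
print(f([1, 3]))  # f(5/8) = 7/12
```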

Saturday, May 18, 2019

sequences and series - arithmetic progression, division and remainder. finding pattern length


When all elements in an arithmetic progression are divided by a constant number k and the remainders are written down, you'll quickly notice that a repeating sequence of numbers appears. A simple example:


7   42  77  112 147 182 217 252 287 322 357 392 (progression, 35 added each time)

7 42 17 52 27 2 37 12 47 22 57 32 (remainder after division by 60)

When 35 is added again, that gives 392+35=427. The remainder after division by 60 is again 7. The pattern


7   42  17  52  27  2   37  12  47  22  57  32


repeats itself. In this example, this pattern consists of 12 numbers.


Can you calculate the length of that pattern with a formula when you know the common difference of the progression and the divisor k? I tried to figure this out, but failed.


Answer



The period in the example is the smallest positive integer $m$ with the property $$35m\equiv 0 \mod 60$$


We get $$m=\frac{60}{\gcd(35,60)}=\frac{60}{5}=12$$


This way you can find the period in general.
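In code the general formula is one line; a small Python sketch (the function name is ours, with d the common difference and k the divisor):

from math import gcd

def pattern_length(d, k):
    # smallest m > 0 with d*m ≡ 0 (mod k)
    return k // gcd(d, k)

print(pattern_length(35, 60))  # 12, matching the example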


Friday, May 17, 2019

calculus - Are some indefinite integrals impossible to compute or just don't exist?




I've just started working with integrals relatively recently and I am so surprised how much harder they are to compute than derivatives. For example, something as seemingly simple as $\int e^{ \cos x} dx $ seems impossible, right? I can't use $u$-substitution since there is no $-\sin(x)$ multiplying the function, and integration by parts doesn't seem to work either. So does this mean this integral is impossible to compute?


Answer



The indefinite integral of a continuous function always exists. It might not exist in "closed form", i.e. it might not be possible to write it as a finite expression using "well-known" functions. The concept of "closed form" is
somewhat vague, since there's no definite list of which functions are "well-known". A more precise statement is that there are elementary functions whose indefinite integrals are not elementary. For example, the indefinite integral $\int e^{x^2}\; dx$ is not an elementary function, although it can be expressed in terms of a non-elementary special function as $\frac{\sqrt{\pi}}{2} \text{erfi}(x)$.




Your example $\int e^{\cos(x)}\; dx$ is also non-elementary; this can be proven using the Risch algorithm. Unlike $\int e^{x^2}\;dx$, it does not seem to have a closed form even in terms of well-known non-elementary special functions.
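A computer algebra system makes the first example concrete; a minimal sketch, assuming Python with sympy is available, that returns the erfi form and confirms it by differentiation:

import sympy as sp

x = sp.symbols('x')
F = sp.integrate(sp.exp(x**2), x)
print(F)                                           # sqrt(pi)*erfi(x)/2
print(sp.simplify(sp.diff(F, x) - sp.exp(x**2)))   # 0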


Thursday, May 16, 2019

elementary number theory - Prove that if $n \geq 2$, then $\sqrt[n]{n}$ is irrational. Hint, show that if $n \geq 2$, then $2^{n} > n$.



Prove that if $n \geq 2$, then $\sqrt[n]{n}$ is irrational. Hint, show that if $n \geq 2$, then $2^{n} > n$.



So, my thought process was that I could show that $2^{n} > n$ using induction, but I'm not sure how that helps to solve the original problem. Showing $2^{n} > n$ means I could take the $n^{th}$ root of each side, giving me $2 > \sqrt[n]{n}$, but that's not quite showing that it's irrational.



My other thought was proof by contradiction, I could start assuming that it's rational -- ie that there exists some integers $r$ and $s$ such that $\sqrt[n]{n} = r/s$. Then I could say that $n = r^{n} / s^{n}$. But I have no way of showing that this is impossible for any $n \geq 2$.




Any hints as to what a good direction to start in would be appreciated.


Answer



Suppose there is a rational number $a/b$ such that $\gcd(a,b)=1$ and $a^n/b^n=n.$ Then $a^n=b^nn.$ So $a^n$ divides $b^nn$. Since $\gcd(a,b)=1 \implies \gcd(a^n,b^n)=1$, we know $a^n$ must divide $n$. Thus, $n \geq a^n$. But note that the integer $a$ must be at least $2$, since $(1/b)^n$ clearly cannot be $n$. Combining with the hint, $n \geq a^n \geq 2^n > n$, a contradiction.


real analysis - Relation between roots of function and roots of its derivative



I'm reading a section my Calculus book that is about the relation between the roots of a polynomial function and the roots of its derivative. So:





Notice that if $x_1$ and $x_2$ are roots of $f$, so that $f(x_1) = f(x_2) = 0$, then by Rolle's Theorem there is a number $x$ between $x_1$ and $x_2$ such that $f'(x) = 0$.




Ok that makes sense. Then:




This means that if $f$ has $k$ different roots $x_1 < x_2 < ... < x_k$, then $f'$ has at least $k-1$ roots: one between $x_1$ and $x_2$, one between $x_2$ and $x_3$, etc.




That also makes sense, but what confuses me is "at least $k-1$ roots". Why "at least"? Didn't we just show that there are exactly $k-1$ roots for the derivative, or so to say, if we have a polynomial of degree $n$, then its derivative has $n-1$ roots?



Answer



If $p(x)=x^2+1$, then $p(x)$ has zero roots. However, $p'(x)=2x$, so $p'(x)$ has one root $x=0$.



That argument from your Calculus textbook proves that between any two roots of the original polynomial $p(x)$ there is at least one root of $p'(x)$ between them, but there may be other roots.


calculus - How do I solve this limit? $lim_{x to frac{pi}{4}}frac{sin x-cos x}{ln(tan x)}$



I can't really see the right way to solve this limit. My attempt is:
$$\lim_{x \to \frac{\pi}{4}}\frac{\sin x-\cos x}{\ln(\tan x)} = \lim_{x \to \frac{\pi}{4}}\frac{\tan x-1}{\frac{\ln(\tan x)}{\cos x}},$$ obtained by dividing the numerator and denominator by $\cos x$, but this still gives the indeterminate form $\frac{0}{0}$. So this attempt fails.

But I would like to understand how to solve this problem without using derivatives or L'Hôpital's rule.


Answer



Hint: factoring $\cos x$ out of the numerator, your expression can be written as $\cos x\cdot\dfrac{\tan x-1}{\log (1+(\tan x -1))}$



Now take $y=\tan x -1$ so $y\to 0$ as $x\to{\pi\over 4}$



so now in the $y$ variable $\lim_{y\to 0}{y\over \log(1+y)}=\lim_{y\to 0}{1\over{\log(1+y)\over y}}=1$, and the original limit is therefore $\lim_{x\to{\pi\over 4}}\cos x\cdot 1=\frac{\sqrt{2}}{2}$.
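A quick numerical look at the ratio (a minimal Python sketch) is consistent with the value $\frac{\sqrt{2}}{2} \approx 0.7071$:

import math

for eps in (1e-2, 1e-4, 1e-6):
    x = math.pi / 4 + eps
    print((math.sin(x) - math.cos(x)) / math.log(math.tan(x)))
# the printed values approach sqrt(2)/2 = 0.70710678...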


Wednesday, May 15, 2019

integration - Where am I going Wrong in this Polar Coordinate Conversion?



Solve the following double integral by converting to polar coordinates first:



$\int_{0}^{2}\int_{0}^{\sqrt{4-x^2}}(x^2+y^2)^{3/2}dydx$



My attempt at a solution:




$\int\int_{R}dydx$(Cartesian) = $\int\int_{R}rdrd\theta$(Polar)



$x=r\cos\theta, y=r\sin\theta$



$y=\sqrt{4-x^2} \implies x^2 + y^2 = 4 $ ∴ $\int_{r=0}^{2}$



Because we can only use the top half of the circle, $\int_{\theta=0}^{\pi}$



Therefore, the overall integral I arrive at is:




$\int_{0}^{\pi}\int_{0}^{2}(r^2)^{3/2}rdrd\theta$,



Which simplifies down to:



$\int_{0}^{\pi}\int_{0}^{2}r^4drd\theta$



Solving this, I get an answer of $\frac{32\pi}{5}$. The answer in the book, however, is $\frac{16\pi}{5}$. What am I doing wrong? Where am I ending up with an answer twice as big as it should be?


Answer



You should only be using the upper right quadrant (the first quadrant) of the circle, so $\theta$ should range from $0$ to $\pi/2$. $x$ ranges from $0$ to $2$; to get the full top half, it would need to range from $-2$ to $2$. I find it usually helps to draw the full region before starting any coordinate transforms.
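A numerical check, a sketch assuming scipy is available, confirms both the book's value over the quarter disk and the doubling over the half disk:

import numpy as np
from scipy import integrate

# integrand in polar form: (r^2)^(3/2) * r = r^4
quarter, _ = integrate.dblquad(lambda r, t: r**4, 0, np.pi / 2, 0, 2)
half, _ = integrate.dblquad(lambda r, t: r**4, 0, np.pi, 0, 2)
print(quarter, 16 * np.pi / 5)  # both ~ 10.053
print(half, 32 * np.pi / 5)     # both ~ 20.106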


elementary set theory - Infinite decimals to prove that intervals are equipotent

Use infinite decimals to prove that the real interval $(0,10)$ is equipotent to its subset $(0,1)$.


I've deduced using infinite decimals that the interval $(0,1)$ is uncountable, but I'm unsure how to prove that it is equipotent to the interval $(0,10)$. Can I just say that $(0,1) \subset (0,10)$, so $(0,10)$ is also uncountable? But that does not prove they're equipotent.

Tuesday, May 14, 2019

How to find the sum of the first 21 terms of an Arithmetic Progression (A.P.)?


Question. If the sum of the first $12$ terms of an A.P. is equal to the sum of the first $18$ terms of the same A.P., find the sum of the first $21$ terms of the same A.P.




$a=$ first term



$d=$ difference




I know now, $2(2a + 11d) = 3(2a + 17d)$



hence, $2a + 29d = 0$



How do I proceed from here?

calculus - Finding $int_{-infty}^infty frac{x^2}{x^4+1};dx$

$$\int_{-\infty}^\infty \frac{x^2}{x^4+1}\;dx$$



I'm trying to understand trigonometric substitution better, because I never could get a good handle on it. All I know is that this integral is supposed to reduce to the integral of some power of cosine. I tried $x^2=\tan\theta$, but I ended up with $\sin\theta\cos^3\theta$ as my integrand. Can someone explain how to compute this?

summation - Proof by induction: showing that two sums are equal





Usually the tasks look like



$$\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}$$




or



$$\sum_{i=0}^n i^2 = 0^2 + 1^2 + 2^2+\cdots+n^2$$



But for the following task I have this form:



$$\left(\sum_{k=1}^n k\right)^2 = \sum_{k=1}^nk^3 $$



First, I am a little confused about how to approach this. Do I transform the sums into closed-form terms like in the first example? Or can I do it by just using the sums themselves? And how should I best treat the square of the sum?




The first step is pretty straightforward: establishing the base case. But as soon as I get into the induction step, I get stuck and am not sure how to proceed.



I am looking for the best practice in this case.



Edit:
This question is a little different, since it is expected to prove this only by complete induction using the sum notation.


Answer



Assume that $\displaystyle\left(\sum_{k=1}^n k\right)^2 = \sum_{k=1}^nk^3$ holds for $n.$ We want to show that $\displaystyle\left(\sum_{k=1}^{n+1} k\right)^2 = \sum_{k=1}^{n+1}k^3.$ How to do it? Note that



$$\begin{align}\left(\sum_{k=1}^{n+1} k\right)^2&=\left(\sum_{k=1}^{n} k+n+1\right)^2\\&= \color{blue}{\left(\sum_{k=1}^{n} k\right)^2}+2(n+1)\sum_{k=1}^nk+(n+1)^2\\&\underbrace{=}_{\rm{induction}\:\rm{hypothesis}}\color{blue}{\sum_{k=1}^nk^3}+\color{red}{2(n+1)\sum_{k=1}^nk+(n+1)^2}\\&=\sum_{k=1}^{n+1}k^3\end{align}$$ if and only if $\displaystyle(n+1)^3=2(n+1)\sum_{k=1}^nk+(n+1)^2.$ Show this equality and you are done.
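The identity is also easy to confirm numerically before attempting the proof; a quick Python sketch (the loop bound is arbitrary):

for n in range(1, 50):
    # (1 + 2 + ... + n)^2 should equal 1^3 + 2^3 + ... + n^3
    assert sum(range(1, n + 1)) ** 2 == sum(k**3 for k in range(1, n + 1))
print("checked n = 1..49")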



algebra precalculus - What is the value of the expression: $(1+frac 12)(1+frac 13)(1+frac 14)...(1+frac {1}{2004})(1+frac {1}{2005})$?

What is the value of the expression $(1+\frac 12)(1+\frac 13)(1+\frac 14)\cdots(1+\frac {1}{2004})(1+\frac {1}{2005})$? This question appeared on the 2005 UKMT Senior Maths Challenge, and I can't find an adequate method for a solution given its context, i.e. without a calculator and in only 3 to 4 minutes. I'm sure I'm missing something obvious.
Thanks!

probability - Expected number of rolls to get all 6 numbers of a die




I've been thinking about this for a while but don't know how to go about obtaining the answer. Find the expected number of rolls needed to see all 6 numbers when rolling a 6-sided die.




Thanks


Answer



Intuitively



You first roll the die and get some value. Now, the probability of getting a different value on the next roll is $\frac{5}{6}$, hence the expected number of rolls to see a second distinct value is $\frac{6}{5}$. You continue this and get



$\frac{6}{6} + \frac{6}{5} + \frac{6}{4} + \frac{6}{3} + \frac{6}{2} + \frac{6}{1} = \color{blue}{14.7}$ rolls
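A short Monte Carlo simulation agrees with $14.7$; a minimal Python sketch (the function name is ours):

import random

def rolls_until_all_faces(sides=6):
    # roll until every face has been seen at least once
    seen = set()
    count = 0
    while len(seen) < sides:
        seen.add(random.randrange(sides))
        count += 1
    return count

trials = 100_000
print(sum(rolls_until_all_faces() for _ in range(trials)) / trials)  # ~ 14.7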


Monday, May 13, 2019

number theory - Can $sqrt{n} + sqrt{m}$ be rational if neither $n,m$ are perfect squares?



Can the expression $\sqrt{n} + \sqrt{m}$ be rational if neither $n,m \in \mathbb{N}$ are perfect squares? It doesn't seem likely, the only way that could happen is if for example $\sqrt{m} = a-\sqrt{n}, \ \ a \in \mathbb{Q}$, which I don't think is possible, but how to show it?


Answer



Squaring we get $m=a^2+n-2a\sqrt n\implies \sqrt n=\frac{a^2+n-m}{2a}$, which is rational. But a rational $\sqrt n$ with $n\in\mathbb{N}$ forces $n$ to be a perfect square, contradicting the assumption.


Trying to correctly write the proof using *strong* induction of the sum of the nth positive integer

I'm learning about proofs using induction and our professor wants us to always use strong induction when giving proofs. In my understanding, strong induction means the hypothesis covers the whole range of values already shown to be true. I want to be able to write a complete and clear proof, but I'm not really understanding how to correctly write the inductive hypothesis for this problem: for all natural numbers $n$, prove $1+2+3+\cdots+n = n(n+1)/2$.



I already showed the base case is true when n = 1. But for the inductive step I want to know if I did it right or not. Even if its correct, is there anything I can add or remove to make it clearer? This is what I have so far:



Inductive hypothesis:
Assume 1+2+3+...+n = n(n+1)/2 is true for all n with 1 <= n <= k.
Prove 1+2+3+...+k+(k+1) = (k+1)(k+2)/2.




I know how to show the proof after this, I just want to understand how to write the inductive hypothesis. Thanks for any help.

Sunday, May 12, 2019

arithmetic - Are the basic properties of algebra just axioms?

I was reading online and found resources that claim that the following properties are just axioms:


  • Commutative Property of Addition and Multiplication

  • Associative Property of Addition and Multiplication


I was wondering if it is true that these properties are just axioms (i.e. they do not have proofs). Specifically, if they are axioms, I am curious as to how they were "discovered." For instance, $a * (b * c) = (a * b) * c$ has been taught to me from a very young age, so it is almost habitual. How were mathematicians of the past able to reason that this property (and the other properties) is true with complete certainty? Thank you!


Please note that I am not looking for examples of these properties being true. I understand that 5 * (3 * 4) = (5 * 3) * 4. I am wondering how the logic behind the generalization of these properties came about.

paradoxes - What exactly is the paradox in Zeno's paradox?

I have known about Zeno's paradox for some time now, but I have never really understood what exactly the paradox is. People always seem to have different explanations.



From wikipedia:




In the paradox of Achilles and the Tortoise, Achilles is in a footrace with the tortoise. Achilles allows the tortoise a head start of 100 metres, for example. If we suppose that each racer starts running at some constant speed (one very fast and one very slow), then after some finite time, Achilles will have run 100 metres, bringing him to the tortoise's starting point. During this time, the tortoise has run a much shorter distance, say, 10 metres. It will then take Achilles some further time to run that distance, by which time the tortoise will have advanced farther; and then more time still to reach this third point, while the tortoise moves ahead. Thus, whenever Achilles reaches somewhere the tortoise has been, he still has farther to go. Therefore, because there are an infinite number of points Achilles must reach where the tortoise has already been, he can never overtake the tortoise.



And we then say that this is a paradox, since he should be able to overtake the tortoise in finite time? To me it seems that in the paradox we are slowing down time proportionally. Aren't we then already using the fact that the sum of those "time sequences" makes up a finite time? I feel like there is some kind of circular logic involved here.



What exactly is the paradox?

calculus - Evaluate $int_{0}^{+infty }{left( frac{x}{{{text{e}}^{x}}-{{text{e}}^{-x}}}-frac{1}{2} right)frac{1}{{{x}^{2}}}text{d}x}$



Evaluate :
$$\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}$$


Answer




Related technique. Here is a closed form solution of the integral



$$\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x} = -\frac{\ln(2)}{2}. $$



Here is the technique, consider the integral



$$ F(s) = \int_{0}^{+\infty }{e^{-sx}\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}, $$



which implies




$$ F''(s) = \int_{0}^{+\infty }{e^{-sx}\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\text{d}x}. $$



The last integral is the Laplace transform of the function



$$ \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} $$



and equals



$$ F''(s) = \frac{1}{4}\,\psi' \left( \frac{1}{2}+\frac{1}{2}\,s \right) -\frac{1}{2s}. $$




Now, you need to integrate the last equation twice and determine the two constants of integration, then take the limit as $s\to 0$ to get the result.
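A direct numerical evaluation agrees with $-\frac{\ln 2}{2} \approx -0.34657$; a sketch assuming scipy is available (the series guard near $0$ and the large-$x$ shortcut are ours, added to avoid floating-point cancellation and overflow):

from math import sinh, log
from scipy import integrate

def integrand(x):
    if x < 1e-3:
        # Taylor expansion near 0: the integrand tends to -1/12
        return -1.0 / 12 + 7.0 * x * x / 720.0
    if x > 50.0:
        # x/(2 sinh x) is negligible here, only the -1/2 term survives
        return -0.5 / (x * x)
    return (x / (2.0 * sinh(x)) - 0.5) / (x * x)  # e^x - e^{-x} = 2 sinh x

val, err = integrate.quad(integrand, 0.0, float('inf'))
print(val, -log(2) / 2)  # both ~ -0.3465736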


Saturday, May 11, 2019

calculus - What is $int_0^1frac{x^7-1}{log(x)}mathrm dx$?



A problem from the 2012 MIT Integration Bee is
$$
\int_0^1\frac{x^7-1}{\log(x)}\mathrm dx
$$
The answer is $\log(8)$. Wolfram Alpha gives an indefinite form in terms of the logarithmic integral function, but times out doing the computation. Is there a way to do it by hand?


Answer



Write
$$
P(\mu) \equiv \int_{0}^{1}{x^{\mu} - 1 \over \ln(x)}\,\mathrm{d}x.
$$


Differentiating under the integral sign gives
$$
P'(\mu) = \int_{0}^{1}{x^{\mu}\ln(x) \over \ln(x)}\,\mathrm{d}x
=
\int_{0}^{1}x^{\mu}\,\mathrm{d}x = {1 \over \mu + 1}
\quad\Longrightarrow\quad
P(\mu) - \overbrace{P(0)}^{=\ 0} = \ln(\mu + 1).
$$


Hence
$$
P(7) = \color{#0000ff}{\large\int_{0}^{1}{x^{7} - 1 \over \ln(x)}\,\mathrm{d}x}
=
\ln(7 + 1) = \ln(8) = \color{#0000ff}{\large 3\ln(2)}
$$
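A numerical check agrees with $\ln 8 \approx 2.0794$; a sketch assuming scipy is available (the guard at $x=1$, which handles the removable singularity, is ours):

from math import log
from scipy import integrate

def integrand(x):
    if abs(x - 1.0) < 1e-12:
        return 7.0  # limiting value of (x^7 - 1)/log(x) at x = 1
    return (x**7 - 1.0) / log(x)

val, _ = integrate.quad(integrand, 0.0, 1.0)
print(val, log(8))  # both ~ 2.0794415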


calculus - How can I calculate the following integral (partial fractions)?



$ \int_{\frac{3}{2}}^{2} \sqrt\frac{x-1}{3-x}\,dx$



I tried at first the substitution $u= \sqrt \frac{x-1}{3-x}$, and after it I got a very complicated integration by partial fractions. Can I solve it in another way? I think there has to be a much easier method; I appreciate any help.


Answer




Your idea was very good.



Considering $$I=\int \sqrt\frac{x-1}{3-x}\,dx$$ $$u= \sqrt\frac{x-1}{3-x}\implies x=\frac{3 u^2+1}{u^2+1}\implies dx=\frac{4 u}{\left(u^2+1\right)^2}du$$ So $$I=\int\frac{4 u^2}{\left(u^2+1\right)^2}du=4\int\frac{du}{u^2+1}-4\int\frac{du}{\left(u^2+1\right)^2}$$ The first one is simple; for the second one, use integration by parts.



I suspect that you did not push the simplification of $dx$ far enough.
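The algebra of the substitution can be double-checked with sympy; a minimal sketch (the positivity assumption on $u$ matches the interval of integration):

import sympy as sp

u = sp.symbols('u', positive=True)
xs = (3*u**2 + 1) / (u**2 + 1)

# the substitution really inverts u = sqrt((x-1)/(3-x))
print(sp.simplify(sp.sqrt((xs - 1) / (3 - xs)) - u))      # 0
# dx/du matches the claimed 4u/(u^2+1)^2
print(sp.simplify(sp.diff(xs, u) - 4*u / (u**2 + 1)**2))  # 0
# the partial-fraction split used at the end
print(sp.simplify(4*u**2/(u**2+1)**2 - 4/(u**2+1) + 4/(u**2+1)**2))  # 0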


calculus - Why $lim_{xto0}frac{int_0^xfrac{t^2}{t^4+1}dt}{x^3}=lim_{xto0}frac{frac{x^2}{x^4+1}}{3x^2}$?



There is this limit question that I think we need to use the Fundamental Theorem of Calculus, but there is one little doubt during the process.



$$\lim_{x\to0}\frac{1}{x^3}\int_0^x\frac{t^2}{t^4+1}dt$$



My doubt is are we allowed to differentiate on both numerator and denominator so that



$$\lim_{x\to0}\frac{\int_0^x\frac{t^2}{t^4+1}dt}{x^3}=\lim_{x\to0}\frac{\frac{x^2}{x^4+1}}{3x^2}$$




If we are allowed to do that, is it because




  1. $\frac{\int_0^x\frac{t^2}{t^4+1}dt}{x^3}=\frac{\frac{x^2}{x^4+1}}{3x^2}$ (I mean we can differentiate both the numerator and denominator for all fractions in general) or


  2. we can only do that when it involves limit as $x\to0$


  3. In short, is there any theorem that says so (in general or just for some specific cases)?




Many thanks for the help!



Answer



$\lim_{x\to0}\dfrac{\int_0^x\frac{t^2}{t^4+1}dt}{x^3}$



$ = \lim_{x\to0}\dfrac{\frac{d}{dx}\int_0^x\frac{t^2}{t^4+1}dt}{\frac{d}{dx}x^3}$ by L'Hopital because $\int_0^x\frac{t^2}{t^4+1}dt \to 0$ as $x \to 0$



$ = \lim_{x\to0}\dfrac{\left(\frac{x^2}{x^4+1}\right)}{\frac{d}{dx}x^3}$ by the fundamental theorem of calculus



$ = \lim_{x\to0}\dfrac{x^2}{3x^2\left(x^4+1\right)} = \lim_{x\to0}\dfrac{1}{3(x^4+1)} = \dfrac{1}{3}$

To be precise, L'Hôpital's rule can only be used when the resulting limit exists; it does here, as the last step shows, but be sure to check in cases where it doesn't!
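A quick numerical look at the original ratio (a Python sketch, assuming scipy is available) shows it approaching $\frac13$:

from scipy import integrate

for x in (1e-1, 1e-2, 1e-3):
    val, _ = integrate.quad(lambda t: t**2 / (t**4 + 1), 0, x)
    print(val / x**3)  # -> 0.3333...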


calculus - A doubt in finding the number of real roots of a given polynomial using derivative




I have learnt so far that a polynomial of degree $n$ has $n$ roots. To find the number of real roots, we can take its derivative, set it equal to $0$, and find its roots; those are the extrema of the graph of the original polynomial. Using those extrema, we can find out how many times the graph of the original polynomial intersects the $x$-axis, and hence the number of real roots. Now, consider a polynomial of degree $3$, and suppose its derivative has no real roots. Does this necessarily mean that the original polynomial has only one real root? Moreover, consider a polynomial of degree $4$. If its second derivative has no real roots, then (taking into account that the first derivative has one real root) will the original polynomial have two real roots? And does this continue for higher degrees?


Answer



Yes, a polynomial of degree$~3$ (or of any odd degree) whose derivative has no real roots defines a strictly monotonic function $\Bbb R\to\Bbb R$, which being bijective has exactly one zero. But no, for a polynomial of degree$~4$ (or of any even degree) all the information you can give about its derivatives will not allow telling whether it has any roots: the answer to that question can be altered by adding a constant to the polynomial, which does not change any of its derivatives.


algebra precalculus - Have I just proven $0=1$?


A long time ago I noticed that $2+2 = 2 \times 2 = 2^2$, which is pretty cool because it’s the 3 basic arithmetical operations. It then recently occurred to me to try to prove that $2$ is the only real number for which this is true. Here is what I came up with:


I start with this double equality:


$r+r = r \times r = r^r$ or rewritten as $r+r = r^2 = r^r$.


Just examining the first part of the double equality:


$$r+r=r^2,$$ I divide by $r$ and get:


$$1+1 = r \implies r=2.$$


Looking at the second part of the double equality:


$$r^2=r^r$$



I divide by $r^2$ and get:


$$1=r^{r-2}$$


Next, I take the logarithm of both sides:


$$\ln(1) = \ln\left(r^{r-2}\right) \implies 0 = (r-2)\ln(r).$$


The only numbers that make this true are $r=2$ and $r=1$, since substituting in any other real number would mean that two non-zero numbers multiplied together would make $0$, which is clearly false. Furthermore $r=1$ does not satisfy the first part of the double equality so it has to be $2$. QED.


I was pretty proud of myself for solving this (and yes I'm sure to most of you this is no big deal but I'm not a math person). However a few hours later a serious problem occurred to me. Going back to this step:


$$0=(r-2)\ln(r).$$


What if I divide both sides by $(r-2)\ln(r)$, then I get:


$$\dfrac{0}{(r-2)\ln(r)} = \dfrac{(r-2)\ln(r)}{(r-2)\ln(r)} \implies 0 = 1.$$


I can't explain this away as division by zero since it's in the numerator.
Can someone tell me what I'm doing wrong?



Thank you


Answer



If $0=(r-2)\ln(r)$, then you can't divide by $(r-2)\ln(r)$, since it is equal to $0$ so you would be dividing by $0$.


More generally, any time you divide by an expression, that step is only valid under the assumption that the expression is not equal to $0$. If the expression involves a variable, it may be equal to $0$ for some values of the variable, and the division is invalid precisely for those values.


calculus - Prove that $limlimits_{nrightarrowinfty} left(1+frac{1}{a_{n}} right)^{a_{n}}=e$ if $limlimits_{nrightarrowinfty} a_{n}=infty$


What would be the nicest proof of the following theorem:



If $\lim\limits_{n \rightarrow \infty} a_{n} = \infty$, then $\lim\limits_{n \rightarrow \infty} \left(1 + \frac{1}{a_{n}} \right) ^ {a_{n} } = e$.


If $\lim\limits_{n \rightarrow \infty}b_{n} = 0$, then $\lim\limits_{n \rightarrow \infty} \left(1 + b_{n} \right) ^ {\frac {1} {b_{n}} } = e$.



I somehow failed to find a proof here on the website and in the literature.


Answer




Hint:


$$\left(1 + \frac{1}{\lfloor a_n \rfloor+1} \right)^{\lfloor a_n \rfloor} \leqslant \left(1 + \frac{1}{a_n} \right)^{a_n} \leqslant \left(1 + \frac{1}{\lfloor a_n \rfloor} \right)^{\lfloor a_n \rfloor+1}, $$


and


$$\left(1 + \frac{1}{n+1} \right)^n, \left( 1 + \frac{1}{n} \right)^{n+1} \to e$$


Friday, May 10, 2019

calculus - Proving that an additive function $f$ is continuous if it is continuous at a single point



Suppose that $f$ is continuous at $x_0$ and $f$ satisfies $f(x)+f(y)=f(x+y)$. How can we then prove that $f$ is continuous at $x$ for all $x$? I can't seem to make any progress with it. Thanks in advance.


Answer



Fix $a\in \mathbb{R}.$


Then


$\begin{align*}\displaystyle\lim_{x \rightarrow a} f(x) &= \displaystyle\lim_{x \rightarrow x_0} f(x - x_0 + a)\\ &= \displaystyle\lim_{x \rightarrow x_0} [f(x) - f(x_0) + f(a)]\\& = (\displaystyle\lim_{x \rightarrow x_0} f(x)) - f(x_0) + f(a)\\ & = f(x_0) -f(x_0) + f(a)\\ & = f(a). \end{align*}$


It follows $f$ is continuous at $a.$


linear algebra - Evaluate determinant of an $n times n$-Matrix



I have the following task:
Let $K$ be a field, $n \in \mathbb{N}$ and $a,b \in K^n$.
Evaluate the determinant of the following matrix:




$$\begin{pmatrix}
a_1+b_1 & b_2 & b_3 & \dots & b_n& \\
b_1 & a_2 + b_2 & b_3 & \dots & b_n \\
b_1 & b_2 & a_3 + b_3 & \dots & b_n \\
\vdots & \vdots & \vdots& & \vdots \\
b_1 & b_2 & b_3 &\dots & a_n + b_n
\end{pmatrix}$$



What I did was expanding it as follows using the Laplace expansion:




$$\det A =(a_1 + b_1) \det\begin{pmatrix} a_2 + b_2 &\dots& b_n \\ b_2 &\dots& b_n \\ \vdots & & \vdots \\ b_2 & \dots & a_n + b_n \end{pmatrix} - b_2 \det\begin{pmatrix} b_1 & b_3 &\dots& b_n \\ b_1 & a_3 + b_3 &\dots& b_n \\ \vdots & \vdots & & \vdots \\ b_1 & b_3 & \dots & a_n + b_n \end{pmatrix} + \ b_3 \det\begin{pmatrix} \dots \end{pmatrix} - \dots (-1)^{n+1}b_n \det\begin{pmatrix} b_1 &\dots& b_{n-1} \\ b_1 &\dots& b_{n-1} \\ \vdots & & \vdots \\ b_1 & \dots & b_{n-1} \end{pmatrix}$$



And before I expand the rest of those determinants and fill 20 pages with a's and b's, I'd like to ask for advice. Is this the right way? When I think about it, I don't really see any simplification that is possible once I have finally expanded everything to the point where I could use Cramer's rule.
It just came to my mind that I could also expand with the Laplace rule by iterating through the rows instead of the columns. By doing that I'd be able to factor out all of those $b_1$...



NOTE: I am not allowed to use the Sylvester determinant theorem.



Thank you very much for your help.




FunkyPeanut


Answer



$$\begin{array}{ll}
D_n&=\begin{vmatrix}
a_n+b_n & b_{n-1} & b_{n-2} & \dots & b_1& \\
b_n & a_{n-1} + b_{n-1} & b_{n-2} & \dots & b_1 \\
b_n & b_{n-1} & a_{n-2} + b_{n-2} & \dots & b_1 \\
\vdots & \vdots & \vdots& & \vdots \\
b_n & b_{n-1} & b_{n-2} &\dots & a_1 + b_1
\end{vmatrix}\\

&=\begin{vmatrix}
a_n & b_{n-1} & b_{n-2} & \dots & b_1& \\
0 & a_{n-1} + b_{n-1} & b_{n-2} & \dots & b_1 \\
0 & b_{n-1} & a_{n-2} + b_{n-2} & \dots & b_1 \\
\vdots & \vdots & \vdots& & \vdots \\
0 & b_{n-1} & b_{n-2} &\dots & a_1 + b_1
\end{vmatrix}\\
&+\begin{vmatrix}
b_n & b_{n-1} & b_{n-2} & \dots & b_1& \\
b_n & a_{n-1} + b_{n-1} & b_{n-2} & \dots & b_1 \\

b_n & b_{n-1} & a_{n-2} + b_{n-2} & \dots & b_1 \\
\vdots & \vdots & \vdots& & \vdots \\
b_n & b_{n-1} & b_{n-2} &\dots & a_1 + b_1
\end{vmatrix}\\
&=a_nD_{n-1}+b_n\begin{vmatrix}
1 & b_{n-1} & b_{n-2} & \dots & b_1& \\
1 & a_{n-1} + b_{n-1} & b_{n-2} & \dots & b_1 \\
1 & b_{n-1} & a_{n-2} + b_{n-2} & \dots & b_1 \\
\vdots & \vdots & \vdots& & \vdots \\
1 & b_{n-1} & b_{n-2} &\dots & a_1 + b_1

\end{vmatrix}\\
&=a_nD_{n-1}+b_n\begin{vmatrix}
1 & 0 & 0 & \dots & 0 & \\
1 & a_{n-1} & 0 & \dots & 0 \\
1 & 0 & a_{n-2} & \dots & 0 \\
\vdots & \vdots & \vdots& & \vdots \\
1 & 0 & 0 &\dots & a_1
\end{vmatrix}\\
&= a_n D_{n-1}+b_n\prod_{k=1}^{n-1}a_k
\end{array}$$

Unwinding this recursion (with $D_1 = a_1 + b_1$) gives the closed form $$D_n=\prod_{k=1}^{n}a_k+\sum_{k=1}^{n}b_k\prod_{j\neq k}a_j.$$
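The closed form can be spot-checked numerically; a minimal sketch assuming numpy is available (the names and the random test data are ours):

import numpy as np

rng = np.random.default_rng(0)
n = 6
a = rng.standard_normal(n)
b = rng.standard_normal(n)

# row i of the matrix is (b_1, ..., b_n) with a_i added on the diagonal
M = np.tile(b, (n, 1)) + np.diag(a)
closed = np.prod(a) + sum(b[k] * np.prod(np.delete(a, k)) for k in range(n))
print(np.linalg.det(M), closed)  # the two values agree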



analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...