Thursday, October 31, 2019

derivatives - Inverse function of $y=2x+sin x$


I was doing a long exercise when I came to this point: calculate the inverse function of $y=2x+\sin x$ ($x \in\mathbb R$) and its derivative. I know that the derivative of the inverse function is $1/f'(x)$, but that is not enough here, since $x=f^{-1}(y)$. So I tried to find the inverse function itself, but I'm completely stuck at this point. Can somebody help me please? Thanks in advance for your help!


Answer



Hint: write $x=2y+\sin(y)$, then $1=2y'+\cos(y)\cdot y'$, hence $y'=1/(2+\cos(y))$, that will be the derivative of the inverse. If you can solve that differential equation you'll have the inverse ...
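For a sanity check one can invert $f(x)=2x+\sin x$ numerically (a sketch with names I chose; since $f'(x)=2+\cos x\ge 1$, $f$ is strictly increasing, so bisection applies) and compare a finite-difference derivative of the inverse against $1/(2+\cos(f^{-1}(y)))$:

```python
import math

def f(x):
    return 2 * x + math.sin(x)

def f_inverse(y, tol=1e-12):
    # f'(x) = 2 + cos(x) >= 1, so f is strictly increasing and bisection works
    lo, hi = -abs(y) - 1, abs(y) + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def inverse_derivative(y):
    # (f^{-1})'(y) = 1 / f'(f^{-1}(y)) = 1 / (2 + cos(f^{-1}(y)))
    return 1 / (2 + math.cos(f_inverse(y)))
```

A central difference of `f_inverse` agrees with `inverse_derivative` to many digits, even though no closed form for $f^{-1}$ is available.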


sequences and series - How do I evaluate this: $\sum_{k=1}^{+\infty}\frac{(-1)^k}{\sqrt{k+1}+\sqrt{k}}$?

I have tried to evaluate the sum below using some standard alternating-series techniques, but I did not succeed, so my question is:






Question:
How do i evaluate this sum





$\sum_{k=1}^{+\infty}\frac{{(-1)^k}}{\sqrt{k+1}+\sqrt{k}}$ ?



Note: Wolfram Alpha shows its value here
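Numerically (a rough sketch, not a closed form): rationalizing gives $\frac{1}{\sqrt{k+1}+\sqrt k}=\sqrt{k+1}-\sqrt k$, and averaging consecutive partial sums tames the alternation; `accelerated_sum` is a name I chose:

```python
import math

def term(k):
    # note 1/(sqrt(k+1)+sqrt(k)) == sqrt(k+1)-sqrt(k) after rationalizing
    return (-1) ** k / (math.sqrt(k + 1) + math.sqrt(k))

def accelerated_sum(n):
    # average of the partial sums s_n and s_{n+1}: for an alternating series
    # with decreasing terms this cancels most of the oscillation
    s = sum(term(k) for k in range(1, n + 1))
    return s + term(n + 1) / 2
```

The accelerated estimates settle near $-0.2398$, consistent with the value Wolfram Alpha reports.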

calculus - Computing $\lim_{x \to \infty} x \biggl[ \frac{1}{e} - \left( \frac{x}{x+1} \right)^x \biggr]$

I'm having some trouble finding the limit of:



$$\lim_{x \to \infty} x \left[ \frac{1}{e} - \left( \frac{x}{x+1} \right)^x \right]$$



I can see that $\left( \frac{x}{x+1} \right)^x = \left( 1 + \frac{-1}{x+1} \right)^x \rightarrow \frac{1}{e}$. That's why I thought the limit is 0. However, according to WolframAlpha it's $-\frac{1}{2e}$.


I tried to write $\frac{1}{e} - \left( \frac{x}{x+1} \right)^x$ as a series but I didn't find a way to...


What's the right approach to find this limit?
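A quick numerical check of WolframAlpha's value $-\frac{1}{2e}$ (a sketch; `log1p` is used to avoid cancellation when computing $\bigl(\frac{x}{x+1}\bigr)^x = e^{-x\ln(1+1/x)}$ for large $x$):

```python
import math

def g(x):
    # x * (1/e - (x/(x+1))^x), with (x/(x+1))^x = exp(-x*log1p(1/x))
    return x * (1 / math.e - math.exp(-x * math.log1p(1 / x)))
```

Evaluating `g` at increasingly large arguments approaches $-1/(2e) \approx -0.18394$, matching the claimed limit.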

real analysis - How to define a bijection between $(0,1)$ and $(0,1]$?




How to define a bijection between $(0,1)$ and $(0,1]$? Or any other open and closed intervals?



If the intervals are both open like $(-1,2)\text{ and }(-5,4)$ I do a cheap trick (don't know if that's how you're supposed to do it): I make a function $f : (-1, 2)\rightarrow (-5, 4)$ of the form $f(x)=mx+b$ by \begin{align*} -5 = f(-1) &= m(-1)+b \\ 4 = f(2) &= m(2) + b \end{align*} Solving for $m$ and $b$ I find $m=3\text{ and }b=-2$ so then $f(x)=3x-2.$


Then I show that $f$ is a bijection by showing that it is injective and surjective.


Answer



Choose an infinite sequence $(x_n)_{n\geqslant1}$ of distinct elements of $(0,1)$. Let $X=\{x_n\mid n\geqslant1\}$, hence $X\subset(0,1)$. Let $x_0=1$. Define $f(x_n)=x_{n+1}$ for every $n\geqslant0$ and $f(x)=x$ for every $x$ in $(0,1)\setminus X$. Then $f$ is defined on $(0,1]$ and the map $f:(0,1]\to(0,1)$ is bijective.


To sum up, one extracts a copy of $\mathbb N$ from $(0,1)$ and one uses the fact that the map $n\mapsto n+1$ is a bijection between $\mathbb N\cup\{0\}$ and $\mathbb N$.
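A concrete version of this construction (a sketch, taking $x_n = \frac{1}{n+1}$ for $n\geqslant 1$ as the extracted copy of $\mathbb N$, with $x_0=1$; exact `Fraction`s sidestep floating-point equality tests):

```python
from fractions import Fraction

def f(x):
    # Bijection (0,1] -> (0,1): shift along the sequence x_n = 1/(n+1)
    # (with x_0 = 1) and fix every point outside that sequence.
    x = Fraction(x)
    assert 0 < x <= 1
    if x == 1:
        return Fraction(1, 2)             # x_0 = 1  ->  x_1 = 1/2
    if x.numerator == 1:                  # x = 1/(n+1) for some n >= 1
        return Fraction(1, x.denominator + 1)
    return x                              # points outside X are fixed
```

Every output lies strictly inside $(0,1)$, and distinct inputs give distinct outputs, exactly as in the argument above.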


elementary number theory - How to solve this congruence $17x \equiv 1 \pmod{23}$?




Given $17x \equiv 1 \pmod{23}$



How to solve this linear congruence?
All hints are welcome.



edit:
I know the Euclidean Algorithm and how to solve the equation $17m+23n=1$,
but I don't know how to compute $x$ using $m$ or $n$.


Answer



To do modular division I do this:




an - bm = c, where c is the dividend, b is the modulus, a is the divisor, and n is the quotient



17n - 23m = 1



Then, using the Euclidean algorithm, reduce to gcd(a, b) and record each calculation



As described by http://mathworld.wolfram.com/DiophantineEquation.html



17 23 | 14 19

17 6 | 14 5

11 6 | 9 5

5 6 | 4 5

5 1 | 4 1

1 1 | 0 1




Left column is euclidean algorithm, Right column is reverse procedure



Therefore $ 17*19 - 23*14 = 1$, i.e. n=19 and m=14.



The result is that $17^{-1} \equiv 19 \pmod{23}$



This method might not be as quick as those in the other posts, but it is what I have implemented in code. The others could be implemented as well, but I thought I would share my method.
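To connect this with the asker's edit: reducing $17m+23n=1$ modulo $23$ gives $17m \equiv 1 \pmod{23}$, so $x \equiv m \pmod{23}$. The whole procedure above is the extended Euclidean algorithm; a sketch in code (function names are my own):

```python
def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    # solve a*x ≡ 1 (mod m) via the Bezout coefficients
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return x % m
```

Here `mod_inverse(17, 23)` returns $19$, agreeing with the back-substitution table above.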


Wednesday, October 30, 2019

intuition - Dominoes and induction, or how does induction work?




I've never really understood why math induction is supposed to work.



You have these 3 steps:




  1. Prove true for base case (n=0 or 1 or whatever)


  2. Assume true for n=k. Call this the induction hypothesis.


  3. Prove true for n=k+1, somewhere using the induction hypothesis in your proof.





In my experience the proof is usually algebraic, and you just manipulate the problem until you get the induction hypothesis to appear. If you can do that and it works out, then you say the proof holds.



Here's one I just worked out,



Show $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^n}{x} = 0$



So you go:




  1. Base case ($n=1$): use L'Hospital's rule.
    $\displaystyle\lim_{x\to\infty} \frac{\ln x}{x} = \lim_{x\to\infty} \frac{1/x}{1} = 0$.


  2. Assume true for $n=k$.
    $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^k}{x} = 0$.


  3. Prove true for $n=k+1$; that is, show
    $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^{k+1}}{x} = 0.$




Use L'Hospital again:

$\displaystyle\lim_{x\to\infty} \frac{(k+1)(\ln x)^{k}}{x} = 0$.



Then you see the induction hypothesis appear, and you can say this is equal to $0$.



What I'm not comfortable with is this idea that you can just assume something to be true ($n=k$), then based on that assumption, form a proof for $n=k+1$ case.



I don't see how you can use something you've assumed to be true to prove something else to be true.


Answer



The inductive step is a proof of an implication: you are proving that if the property you want holds for $k$, then it holds for $k+1$.




It is a result of formal logic that if you can prove $P\rightarrow Q$ (that $P$ implies $Q$), then from $P$ you can prove $Q$; and conversely, that if from assuming that $P$ is true you can prove $Q$, then you can in fact prove $P\rightarrow Q$.



We do this pretty much every time we prove something. For example, suppose you want to prove that if $n$ is a natural number, then $n^2$ is a natural number. How do we start? "Let $n$ be a natural number." Wait! Why are you allowed to just assume that you already have a natural number? Shouldn't you have to start by proving it's a natural number? The answer is no, we don't have to, because we are not trying to prove an absolute, we are trying to prove a conditional statement: that if $n$ is a natural number, then something happens. So we may begin by assuming we are already in the case where the antecedent is true. (Intuitively, this is because if the antecedent is false, then the implication is necessarily true and there is nothing to be done; formally, it is because the Deduction Theorem, which is what I described above, tells you that if you manage to find a formal proof that ends with "$n^2$ is a natural number" by assuming that "$n$ is a natural number" is true, then you can use that proof to produce a formal proof that establishes the implication "if $n$ is a natural number then $n^2$ is a natural number"; we don't have to go through the exercise of actually producing the latter proof, we know it's "out there").



We do that in Calculus: "if $\lim\limits_{x\to x_0}f(x) = a$ and $\lim\limits_{x\to x_0}g(x) = b$, then $\lim\limits_{x\to x_0}(f(x)+g(x)) = a+b$." How do we prove this? We begin by assuming that the limit of $f(x)$ as $x\to x_0$ is $a$, and that the limit of $g(x)$ as $x\to x_0$ is $b$. We assume the premise/antecedent, and proceed to try to prove the consequent.



What this means in the case of induction is that, since the "Inductive Step" is actually a statement that says that an implication holds:
$$\mbox{"It" holds for $k$}\rightarrow \mbox{"it" holds for $k+1$},$$
then in order to prove this implication we can begin by assuming that the antecedent is already true, and then proceed to prove the consequent. Assuming that the antecedent is true is precisely the "Induction Hypothesis".




When you are done with the inductive step, you have in fact not proven that it holds for any particular number, you have only shown that if it holds for a particular number $k$, then it must hold for the next number $k+1$. It is a conditional statement, not an absolute one.



It is only when you combine that conditional statement with the base, which is an absolute statement that says "it" holds for a specific number, that you can conclude that the original statement holds for all natural numbers (greater than or equal to the base).



Since you mention dominoes in your title, I assume you are familiar with the standard metaphor of induction as dominoes standing in a row and falling. The inductive step is like arguing that all the dominoes will fall if you topple the first one (without actually toppling it): first, you argue that each domino is sufficiently close to the next domino so that if one falls, then the next one falls. You are not tumbling every domino. And when you argue this, you argue along the lines of "suppose this one falls; since its length is ...", that is, you assume it falls in order to argue that the next one will then fall. It is the same with the inductive step.



In a sense you are right that it feels like "cheating" to assume what you want; but the point is that you aren't really assuming what you want. Again, the inductive step does not in fact establish that the result holds for any number, it only establishes a conditional statement. If the result happens to hold for some $k$, then it would necessarily have to also hold for $k+1$. But we are completely silent on whether it actually holds for $k$ or not. We are not saying anything about that at the inductive-step stage.





Added: Here's an example to emphasize that the "inductive step" does not make any absolute statement, but only a conditional statement: Suppose you want to prove that for all natural numbers $n$, $n+1 = n$.


Inductive step. Induction Hypothesis: The statement holds for $k$; that is, I'm assuming that $k+1 = k$.



To be proven: The statement holds for $k+1$. Indeed: notice that since $k+1= k$, then adding one to both sides of the equation we have $(k+1)+1 = k+1$; this proves the statement holds for $k+1$. QED



This is a perfectly valid proof! It says that if $k+1=k$, then $(k+1)+1=k+1$. This is true! Of course, the antecedent is never true, but the implication is. The reason this is not a full proof by induction of a false statement is that there is no "base"; the inductive step only proves the conditional, nothing more.






By the way: Yes, most proofs by induction that one encounters early on involve algebraic manipulations, but not all proofs by induction are of that kind. Consider the following simplified game of Nim: there are a certain number of matchsticks, and players alternate taking $1$, $2$, or $3$ matchsticks every turn. The person who takes the last matchstick wins.




Proposition. In the simplified game above, the first player has a winning strategy if the number of matchsticks is not divisible by $4$, and the second player has a winning strategy if the number of matchsticks is divisible by 4.



The proof is by (strong) induction, and it involves no algebraic manipulations whatsoever.
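The strong-induction argument can be mirrored by a short dynamic program (a sketch; `first_player_wins` is a name I made up). Position $m$ is winning exactly when some legal move leads to a losing position for the opponent:

```python
def first_player_wins(n):
    # win[m] is True when the player to move with m matchsticks left has a
    # winning strategy (take 1, 2, or 3 sticks; taking the last stick wins,
    # so win[0] is False: the player to move has already lost).
    win = [False] * (n + 1)
    for m in range(1, n + 1):
        win[m] = any(not win[m - t] for t in (1, 2, 3) if t <= m)
    return win[n]
```

Running this for the first couple of hundred values reproduces the proposition: the first player wins exactly when $n$ is not divisible by $4$.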


statistics - Probability that P people will have N distinct birthdays



This question is rather difficult to describe clearly, so I will begin with an example. Suppose I have 365 people in a room. The odds are very low that all these people have different birthdays. In fact, ignoring leap years and assuming that every birthday is equally likely, I estimated (using a program) that the most likely scenario is 231 distinct birthdays (with a probability of ~6.69%).




More abstractly, out of P people, what are the odds that there are exactly N distinct birthdays amongst them (ignoring leap years and assuming that every birthday is equally likely)? In my above example I used P=365 and I estimated the answer for N=231. I am curious if there is a general solution to the problem which could give an exact answer for all N and P.



For those who are interested, the program I created tries random combinations of birthdays and counts the number of unused birthdays. With it, I created a table of estimated probabilities for all N for P=365. Here is the most interesting part of the table:



242 - 1.1837375%
241 - 1.5966975%
240 - 2.0933325%
239 - 2.6695062%
238 - 3.3059700%
237 - 3.9824950%
236 - 4.6610425%
235 - 5.2969675%
234 - 5.8606362%
233 - 6.2995763%
232 - 6.5841737%
231 - 6.6932175%
230 - 6.6143263%
229 - 6.3563137%
228 - 5.9308837%
227 - 5.3896263%
226 - 4.7588988%
225 - 4.0879863%
224 - 3.4110800%
223 - 2.7697687%
222 - 2.1889950%
221 - 1.6793675%
220 - 1.2546263%

Answer




There are $\binom{365}{N}$ ways to select the $N$ birthdays. Then there are ${P \brace N}$ ways to partition the $P$ people into $N$ nonempty groups, and $N!$ ways to assign the selected birthdays to those groups, so that every selected birthday is indeed the birthday of at least one person.



All in all there are $\binom{365}{N}{P \brace N}N!$ outcomes that give exactly $N$ distinct birthdays.



Hence the probability is:



$$\dfrac{\binom{365}{N}{P \brace N}N!}{365^P}$$







Here $\binom{n}{k}$ is the binomial coefficient and $n\brace k$ is the Stirling number of the second kind.
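The formula can be evaluated exactly with a short script (a sketch; the function names are my own, and exact rational arithmetic avoids any floating-point issues with the enormous numerator and denominator):

```python
from fractions import Fraction
from math import comb, factorial

def stirling2(p, n):
    # Stirling number of the second kind via S(p, n) = n*S(p-1, n) + S(p-1, n-1):
    # the number of ways to partition p people into n nonempty groups
    table = [[0] * (n + 1) for _ in range(p + 1)]
    table[0][0] = 1
    for i in range(1, p + 1):
        for j in range(1, min(i, n) + 1):
            table[i][j] = j * table[i - 1][j] + table[i - 1][j - 1]
    return table[p][n]

def prob_distinct_birthdays(p, n, days=365):
    # exact probability that p people have exactly n distinct birthdays:
    # C(days, n) * S(p, n) * n! favourable outcomes out of days**p
    return Fraction(comb(days, n) * stirling2(p, n) * factorial(n), days ** p)
```

Evaluating `float(prob_distinct_birthdays(365, 231))` comes out near $0.0669$, consistent with the questioner's simulation for $P=365$, $N=231$.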


exponentiation - What are exponents? Idea behind exponents(complex or real)?

I recently came across an article which said that $e^{\iota x}$ means that $e$ gradually grows at every moment by a factor of $\iota x$ perpendicular to the real part. I took this as a force of $\iota x$ applied perpendicular to a string of length $e$, the same as in circular motion; but the expression $e^{\iota x}$ has radius $1$ instead of $e$ in the complex plane.
I'm very confused by the idea behind exponents.

integration - Integral $\int_0^\frac{\pi}{2} x^2\sqrt{\tan x}\,\mathrm dx$



Last year I wondered about this integral:$$\int_0^\frac{\pi}{2} x^2\sqrt{\tan x}\,\mathrm dx$$
That is because it looks very similar to this integral
and this one. Surprisingly the result is quite nice and an approach can be found here.
$$\boxed{\int_0^\frac{\pi}{2} x^2\sqrt{\tan x}\,\mathrm dx=\frac{\sqrt{2}\pi(5\pi^2+12\pi\ln 2 - 12\ln^22)}{96}}$$




Although the approach there is quite skillful, I believed that an elementary approach can be found for this integral.



Here is my idea. First we will consider the following two integrals: $$I=\int_0^\frac{\pi}{2} x^2\sqrt{\tan x}\,\mathrm dx \,;\quad J=\int_0^\frac{\pi}{2} x^2\sqrt{\cot x}\,\mathrm dx$$
$$\Rightarrow I=\frac12 \left((I-J)+(I+J)\right)$$
Thus we need to evaluate the sum and the difference of those two integrals.



I also saw from here that the "sister" integral differs only by a minus sign: $$\boxed{\int_0^\frac{\pi}{2} x^2\sqrt{\cot x}\,\mathrm dx=\frac{\sqrt{2}\pi(5\pi^2-12\pi\ln 2 - 12\ln^22)}{96}}$$
Thus using those two boxed answer we expect to find: $$I-J=\frac{\pi^2 \ln 2}{2\sqrt 2};\quad I+J=\frac{5\pi^3}{24\sqrt 2}-\frac{\pi \ln^2 2}{2\sqrt 2}\tag1$$







$$I-J=\int_0^\frac{\pi}{2} x^2\left(\sqrt{\tan x}-\sqrt{\cot x}\right)\,\mathrm dx=\sqrt 2\int_0^\frac{\pi}{2} x^2 \cdot \frac{\sin x-\cos x}{\sqrt{\sin (2x)}}dx$$
$$=-\sqrt 2\int_0^\frac{\pi}{2} x^2 \left(\operatorname{arccosh}(\sin x+\cos x) \right)'dx=2\sqrt 2 \int_0^\frac{\pi}{2} x\operatorname{arccosh} (\sin x+\cos x)dx$$
Let us also denote the last integral by $I_1$ and apply the substitution $x\mapsto\frac{\pi}{2}-x$:
$$I_1=\int_0^\frac{\pi}{2} x\operatorname{arccosh} (\sin x+\cos x)dx=\int_0^\frac{\pi}{2} \left(\frac{\pi}{2}-x\right)\operatorname{arccosh} (\sin x+\cos x)dx$$
$$2I_1=\frac{\pi}{2} \int_0^\frac{\pi}{2} \operatorname{arccosh} (\sin x+\cos x)dx\Rightarrow I-J=\frac{\pi}{\sqrt 2}\int_0^\frac{\pi}{2} \operatorname{arccosh} (\sin x+\cos x)dx$$



By using $(1)$ we can easily deduce that: $$\bbox[10pt,#000, border:2px solid green ]{\color{orange}{\int_0^\frac{\pi}{2} \operatorname{arccosh} (\sin x+\cos x)dx=\frac{\pi}{2}\ln 2}}$$







Doing something similar for $I+J$ we get:
$$I+J=\int_0^\frac{\pi}{2} x^2\left(\sqrt{\tan x}+\sqrt{\cot x}\right)\,\mathrm dx=\sqrt 2\int_0^\frac{\pi}{2} x^2 \cdot \frac{\sin x+\cos x}{\sqrt{\sin (2x)}}dx$$
$$=\sqrt 2 \int_0^\frac{\pi}{2} x^2 \left( \arcsin \left(\sin x-\cos x\right)\right)'dx=\frac{\pi^3 \sqrt 2}{8}-2\sqrt 2 \int_0^\frac{\pi}{2} x \arcsin \left(\sin x-\cos x\right)dx$$



Unfortunately, we're not lucky this time and the substitution used for $I-J$ doesn't help in this case.
Of course using $(1)$ we can again deduce that:
$$\bbox[10pt,#000, border:2px solid green ]{\color{red}{\int_0^\frac{\pi}{2} x \arcsin \left(\sin x-\cos x\right)dx=\frac{\pi^3}{96}+\frac{\pi}{8}\ln^2 2}}$$
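As a numerical sanity check on the red boxed identity (a sketch, not part of the sought elementary proof; the integrand is continuous on $[0,\pi/2]$, so composite Simpson suffices):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule on [a, b] with n subintervals (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def integrand(x):
    # sin x - cos x stays in [-1, 1] on [0, pi/2], so asin is defined
    return x * math.asin(math.sin(x) - math.cos(x))

closed_form = math.pi ** 3 / 96 + (math.pi / 8) * math.log(2) ** 2
```

With a fine enough grid the quadrature agrees with $\frac{\pi^3}{96}+\frac{\pi}{8}\ln^2 2 \approx 0.5117$ to several digits.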







In the meantime I found a way for the first one, mainly using: $$\frac{\arctan x}{x}=\int_0^1 \frac{dy}{1+x^2y^2}$$ Let us denote: $$I_1=\int_0^\frac{\pi}{2} \operatorname{arccosh} (\sin x+\cos x)dx\overset{IBP}= \int_0^\frac{\pi}{2} x \cdot \frac{\sin x-\cos x}{\sqrt{\sin(2x)}}dx$$
$$\overset{\tan x\rightarrow x}=\frac{1}{\sqrt 2}\int_0^\infty \frac{\arctan x}{1+x^2}\frac{x-1}{\sqrt x}dx=\frac1{\sqrt 2}\int_0^\infty \int_0^1 \frac{dy}{1+x^2y^2} \frac{\sqrt x(x-1)}{1+x^2}dx$$
$$=\frac1{\sqrt 2}\int_0^1 \int_0^\infty \frac{1}{1+y^2x^2} \frac{\sqrt x(x-1)}{1+x^2} dxdy$$
$$=\frac{1}{\sqrt 2}\int_0^1 \frac{{\pi}}{\sqrt 2}\left(\frac{2}{y^2-1}-\frac{1}{\sqrt y (y^2-1)}-\frac{\sqrt y}{y^2-1}\right)dy=\frac{\pi}{2}\ln 2$$



Although the integral in the third row looks quite unpleasant, it can be evaluated by quite elementary means.







Sadly a similar approach for the second one is madness, because we would have:
$$I_2=\int_0^1 \int_0^1 \int_0^\infty \frac{\sqrt x (x+1)}{1+x^2}\frac{1}{1+y^2x^2}\frac{1}{1+z^2x^2} dxdydz$$



But at least it gives hope that an elementary approach exists.




For this question I would like to see an elementary approach (without relying on special functions) for the second integral (the red one).





If possible, please avoid contour integration, although that might count as
elementary.


Answer



Following Zacky's path, here is the missing part...



Let,



\begin{align}I&=\int_0^{\frac{\pi}{2}}x^2\sqrt{\tan x}\,dx\\
J&=\int_0^{\frac{\pi}{2}}\frac{x^2}{\sqrt{\tan x}}\,dx\\
\end{align}




Perform the change of variable $y=\sqrt{\tan x}$,



\begin{align}I&=\int_0^{\infty}\frac{2x^2\arctan^2\left(x^2\right)}{1+x^4}\,dx\\\\
J&=\int_0^{\infty}\frac{2x^2\arctan^2\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
\end{align}



\begin{align}
\text{I+J}&=\int_0^{\infty}\frac{2x^2\left(\arctan\left(x^2\right)+\arctan\left(\frac{1}{x^2}\right)\right)^2}{1+x^4}\,dx-4\int_0^{\infty}\frac{x^2\arctan\left(x^2\right)\arctan\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
&=\frac{\pi^2}{4}\int_0^{\infty}\frac{2x^2}{1+x^4}\,dx-4\int_0^{\infty}\frac{x^2\arctan\left(x^2\right)\arctan\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\

\end{align}



Perform the change of variable $y=\dfrac{1}{x}$,



\begin{align}
\text{K}&=\int_0^{\infty}\frac{2x^2}{1+x^4}\,dx\\
&=\int_0^{\infty}\frac{2}{1+x^4}\,dx\\
\end{align}



Therefore,




\begin{align}
\text{2K}=\int_0^{\infty}\frac{2\left(1+\frac{1}{x^2}\right)}{\left(x-\frac{1}{x}\right)^2+2}\,dx
\end{align}



Perform the change of variable $y=x-\dfrac{1}{x}$,



\begin{align}\text{2K}&=2\int_{-\infty}^{+\infty}\frac{1}{2+x^2}\,dx\\
&=2\left[\frac{1}{\sqrt{2}}\arctan\left(\frac{x}{\sqrt{2}}\right)\right]_{-\infty}^{+\infty}\\
&=2\times \frac{\pi}{\sqrt{2}}

\end{align}



therefore,



\begin{align}
\text{I+J}&=\frac{\pi^3}{4\sqrt{2}}-4\int_0^{\infty}\frac{x^2\arctan\left(x^2\right)\arctan\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
\end{align}



Let $a>0$,




\begin{align}
\text{K}_1(a)&=\int_0^{\infty}\frac{x^2}{a+x^4}\,dx\\
&=\frac{1}{a}\int_0^{\infty}\frac{x^2}{1+\left(a^{-\frac{1}{4}}x\right)^4}\,dx\\
\end{align}



Perform the change of variable $y=a^{-\frac{1}{4}}x$,



\begin{align}
\text{K}_1(a)&=a^{-\frac{1}{4}}\int_0^{\infty}\frac{x^2}{1+x^4}\,dx\\
&=\frac{a^{-\frac{1}{4}}\pi}{2\sqrt{2}}

\end{align}



In the same manner,



\begin{align}
\text{K}_2(a)&=\int_0^{\infty}\frac{x^2}{1+ax^4}\,dx\\
&=\frac{a^{-\frac{3}{4}}\pi}{2\sqrt{2}}
\end{align}



Since, for $a$ real,




\begin{align}\arctan a=\int_0^1 \frac{a}{1+a^2t^2}\,dt\end{align}



then,



\begin{align}\text{L}&=\int_0^{\infty}\frac{x^2\arctan\left(x^2\right)\arctan\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
&=\int_0^{\infty}\left(\int_0^1 \int_0^1 \frac{x^2}{(1+u^2x^4)\left(1+\frac{v^2}{x^4}\right)(1+x^4)}\,du\,dv\right)\,dx\\
&=\\
&\int_0^{\infty}\left(\int_0^1\int_0^1 \left(\frac{x^2}{(1-u^2)(1-v^2)(1+x^4)}-\frac{x^2}{1-u^2v^2}\left(\frac{u^2}{(1-u^2)(1+u^2x^4)}+\frac{v^2}{(1-v^2)(v^2+x^4)}\right)
\right)dudv\right)dx\\

&=\int_0^1\int_0^1 \left(\frac{\pi}{2\sqrt{2}(1-u^2)(1-v^2)}-\frac{1}{1-u^2v^2}\left(\frac{u^2\text{K}_2(u^2)}{1-u^2}+\frac{v^2\text{K}_1(v^2)}{1-v^2}\right)\right)dudv\\
&=\frac{\pi}{2\sqrt{2}}\int_0^1\int_0^1 \left(\frac{1}{(1-u^2)(1-v^2)}-\frac{1}{(1-u^2v^2)}\left(\frac{u^{\frac{1}{2}}}{1-u^2}+\frac{v^{\frac{3}{2}}}{1-v^2}\right)\right)dudv\\
&=\pi\int_0^1\left[\frac{\sqrt{v}\left(\text{ arctanh}\left(\sqrt{uv}\right)-\text{ arctan}\left(\sqrt{uv}\right)-\text{ arctanh}\left(uv\right)\right)+\arctan\left(\sqrt{u}\right)+\ln\left(\frac{\sqrt{1+u}}{1+\sqrt{u}}\right)}{2\sqrt{2}(1-v^2)}\right]_{u=0}^{u=1}\,dv\\
&=\frac{\pi}{2\sqrt{2}}\int_0^1\frac{\sqrt{v}\big(\text{ arctanh}\left(\sqrt{v}\right)-\text{ arctan}\left(\sqrt{v}\right)-\text{ arctanh}\left(v\right)\big)+\frac{\pi}{4}-\frac{1}{2}\ln 2}{1-v^2}\,dv\\
&=\frac{\pi}{2\sqrt{2}}\int_0^1\frac{\sqrt{v}\arctan\left(\frac{1-\sqrt{v}}{1+\sqrt{v}}\right)}{1-v^2}\,dv+\frac{\pi}{2\sqrt{2}}\left(\frac{\pi}{4}-\frac{1}{2}\ln 2\right)\int_0^1 \frac{1-\sqrt{v}}{1-v^2}\,dv+\\
&\frac{\pi}{2\sqrt{2}}\int_0^1\frac{\sqrt{v}\ln\left(\frac{1+\sqrt{v}}{2}\right)}{1-v^2}\,dv-\frac{\pi}{4\sqrt{2}}\int_0^1\frac{\sqrt{v}\ln\left(\frac{1+v}{2}\right)}{1-v^2}\,dv
\end{align}



Perform the change of variable $y=\dfrac{1-\sqrt{v}}{1+\sqrt{v}}$,




\begin{align}\text{R}_1&=\int_0^1\frac{\sqrt{v}\arctan\left(\frac{1-\sqrt{v}}{1+\sqrt{v}}\right)}{1-v^2}\,dv\\
&=\frac{1}{2}\int_0^1 \frac{(1-v)^2\arctan v}{v(1+v^2)}\,dv\\
&=\frac{1}{2}\int_0^1 \frac{\arctan v}{v}\,dv-\int_0^1 \frac{\arctan v}{1+v^2}\,dv\\
&=\frac{1}{2}\text{G}-\frac{1}{2}\Big[\arctan^2 v\Big]_0^1\\
&=\frac{1}{2}\text{G}-\frac{\pi^2}{32}\\
\text{R}_2&=\int_0^1 \frac{1-\sqrt{v}}{1-v^2}\,dv\\
&=\left[\ln\left(\frac{\sqrt{1+v}}{1+\sqrt{v}}\right)+\arctan\left(\sqrt{v}\right)\right]_0^1\\
&=\frac{\pi}{4}-\frac{1}{2}\ln 2\\
\end{align}




Perform the change of variable $y=\dfrac{1-\sqrt{v}}{1+\sqrt{v}}$,



\begin{align}\text{R}_3&=\int_0^1\frac{\sqrt{v}\ln\left(\frac{1+\sqrt{v}}{2}\right)}{1-v^2}\,dv\\
&=-\frac{1}{2}\int_0^1\frac{(1-v)^2\ln(1+v)}{v(1+v^2)}\,dv\\
&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv-\frac{1}{2}\int_0^1 \frac{\ln(1+v)}{v}\,dv\\
&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv-\frac{1}{4}\int_0^1 \frac{2v\ln(1-v^2)}{v^2}\,dv+\frac{1}{2}\int_0^1 \frac{\ln(1-v)}{v}\,dv\\
\end{align}



In the second integral perform the change of variable $y=v^2$,




\begin{align}\text{R}_3&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv+\frac{1}{4}\int_0^1 \frac{\ln(1-v)}{v}\,dv\\
\end{align}



In the second integral perform the change of variable $y=1-v$,



\begin{align}\text{R}_3&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv+\frac{1}{4}\int_0^1 \frac{\ln v}{1-v}\,dv\\
&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv-\frac{1}{4}\zeta(2)\\
&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv-\frac{\pi^2}{24}\\
\end{align}




Perform the change of variable $y=\dfrac{1-v}{1+v}$,



\begin{align}
\text{S}_1&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv\\
&=\int_0^1\frac{\ln(\frac{2}{1+v})}{1+v^2}\,dv\\
&=\ln 2\int_0^1 \frac{1}{1+v^2}\,dv-\text{S}_1\\
&=\frac{\pi}{4}\ln 2-\text{S}_1
\end{align}




Therefore,



\begin{align}
\text{S}_1&=\frac{\pi}{8}\ln 2\\
\text{R}_3&=\frac{\pi}{8}\ln 2-\frac{\pi^2}{24}\\
\end{align}



Perform the change of variable $y=\dfrac{1-\sqrt{v}}{1+\sqrt{v}}$,



\begin{align}

\text{R}_4&=\int_0^1\frac{\sqrt{v}\ln\left(\frac{1+v}{2}\right)}{1-v^2}\,dv\\
&=\frac{1}{2}\int_0^1 \frac{(1-v)^2\ln\left(\frac{1+v^2}{(1+v)^2}\right)}{v(1+v^2)}\,dv\\
&=\frac{1}{2}\int_0^1 \frac{(1-v)^2\ln\left(1+v^2\right)}{v(1+v^2)}\,dv+2\text{R}_3\\
&=\frac{1}{2}\int_0^1\frac{\ln(1+v^2)}{v}\,dv-\int_0^1\frac{\ln(1+v^2)}{1+v^2}\,dv+\frac{\pi}{4}\ln 2-\frac{\pi^2}{12}\\
&=\frac{1}{2}\times \frac{1}{4}\zeta(2)-\int_0^1\frac{\ln(1+v^2)}{1+v^2}\,dv+\frac{\pi}{4}\ln 2-\frac{\pi^2}{12}\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}-\int_0^1\frac{\ln(1+v^2)}{1+v^2}\,dv\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}-\int_0^1\int_0^1\frac{v^2}{(1+v^2)(1+v^2t)}\,dt\,dv\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}-\int_0^1 \left[\frac{\arctan\left(v\right)\sqrt{t}-\arctan\left(v\sqrt{t}\right)}{(t-1)\sqrt{t}}\right]_{v=0}^{v=1}\,dt\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}-\int_0^1 \frac{\frac{\pi\sqrt{t}}{4}-\arctan\left(\sqrt{t}\right)}{(t-1)\sqrt{t}}\,dt\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}+\int_0^1 \frac{\arctan\left(\frac{1-\sqrt{t}}{1+\sqrt{t}}\right)}{(1-t)\sqrt{t}}\,dt-\frac{\pi}{4}\int_0^1 \frac{\sqrt{t}-1}{(t-1)\sqrt{t}}\,dt\\

&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}+\int_0^1 \frac{\arctan\left(\frac{1-\sqrt{t}}{1+\sqrt{t}}\right)}{(1-t)\sqrt{t}}\,dt-\frac{\pi}{4}\Big[2\ln\left(1+\sqrt{t}\right)\Big]_0^1\\
&=\int_0^1 \frac{\arctan\left(\frac{1-\sqrt{t}}{1+\sqrt{t}}\right)}{(1-t)\sqrt{t}}\,dt-\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}\\
\end{align}



Perform the change of variable $y=\dfrac{1-\sqrt{t}}{1+\sqrt{t}}$,



\begin{align}
\text{R}_4&=\int_0^1 \frac{\arctan t}{t}\,dt-\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}\\
&=\text{G}-\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}\\
\end{align}




Therefore,



\begin{align}L&=\frac{\pi}{2\sqrt{2}}\text{R}_1+\frac{\pi}{2\sqrt{2}}\left(\frac{\pi}{4}-\frac{1}{2}\ln 2\right) \text{R}_2+\frac{\pi}{2\sqrt{2}}\text{R}_3-\frac{\pi}{4\sqrt{2}}\text{R}_4\\
&=\frac{\pi}{2\sqrt{2}}\left(\frac{\text{G}}{2}-\frac{\pi^2}{32}\right)+\frac{\pi}{2\sqrt{2}}\left(\frac{\pi}{4}-\frac{1}{2}\ln 2\right)^2+\frac{\pi}{2\sqrt{2}}\left(\frac{\pi}{8}\ln 2-\frac{\pi^2}{24}\right)-\\
&\frac{\pi}{4\sqrt{2}}\left(\text{G}-\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}\right)\\
&=\frac{\pi^3}{96\sqrt{2}}+\frac{\pi\ln^2 2}{8\sqrt{2}}
\end{align}



Thus,

\begin{align}\text{I+J}&=\frac{\pi^3}{4\sqrt{2}}-4\text{L}\\
&=\frac{\pi^3}{4\sqrt{2}}-4\left(\frac{\pi^3}{96\sqrt{2}}+\frac{\pi\ln^2 2}{8\sqrt{2}}\right)\\
&=\boxed{\frac{5\pi^3}{24\sqrt{2}}-\frac{\pi\ln^2 2}{2\sqrt{2}}}
\end{align}


Tuesday, October 29, 2019

elementary number theory - prove that $2^{35}-1$ is divisible by 31 and 127

Can you give me a hint on how to approach the problem? How can one show that $2^{35}-1$ is a multiple of $31$ and of $127$?
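A hint in executable form: $2^5 \equiv 1 \pmod{31}$ and $2^7 \equiv 1 \pmod{127}$, and both $5$ and $7$ divide $35$. A quick check (a sketch, using Python's built-in modular `pow`):

```python
# 2^5 = 32 ≡ 1 (mod 31) and 2^7 = 128 ≡ 1 (mod 127); since 5 | 35 and
# 7 | 35, 2^35 ≡ 1 modulo both 31 = 2^5 - 1 and 127 = 2^7 - 1.
assert pow(2, 5, 31) == 1 and pow(2, 7, 127) == 1

n = 2 ** 35 - 1
assert n % 31 == 0 and n % 127 == 0
```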

integration - Gaussian Integral

Consider the following Gaussian Integral $$I = \int_{-\infty}^{\infty} e^{-x^2} \ dx$$



The usual trick to calculate this is to consider $$I^2 = \left(\int_{-\infty}^{\infty} e^{-x^2} \ dx \right) \left(\int_{-\infty}^{\infty} e^{-y^{2}} \ dy \right)$$



and convert to polar coordinates. We get $\sqrt{\pi}$ as the answer.



Is it possible to get the same answer by considering $I^{3}, I^{4}, \dots, I^{n}$?

real analysis - How discontinuous can a derivative be?



There is a well-known result in elementary analysis due to Darboux which says if $f$ is a differentiable function then $f'$ satisfies the intermediate value property. To my knowledge, not many "highly" discontinuous Darboux functions are known--the only one I am aware of being the Conway base 13 function--and few (none?) of these are derivatives of differentiable functions. In fact they generally cannot be, since an application of Baire's theorem gives that the set of continuity points of the derivative is a dense $G_\delta$.



Is it known how sharp that last result is? Are there known Darboux functions which are derivatives and are discontinuous on "large" sets in some appropriate sense?



Answer



What follows is taken (mostly) from more extensive discussions in the following sci.math posts:



http://groups.google.com/group/sci.math/msg/814be41b1ea8c024 [23 January 2000]



http://groups.google.com/group/sci.math/msg/3ea26975d010711f [6 November 2006]



http://groups.google.com/group/sci.math/msg/05dbc0ee4c69898e [20 December 2006]



Note: The term interval is restricted to nondegenerate intervals (i.e. intervals containing more than one point).




The continuity set of a derivative on an open interval $J$ is dense in $J.$ In fact, the continuity set has cardinality $c$ in every subinterval of $J.$ On the other hand, the discontinuity set $D$ of a derivative can have the following properties:




  1. $D$ can be dense in $\mathbb R$.


  2. $D$ can have cardinality $c$ in every interval.


  3. $D$ can have positive measure. (Hence, the function can fail to be Riemann integrable.)


  4. $D$ can have positive measure in every interval.


  5. $D$ can have full measure in every interval (i.e. measure zero complement).


  6. $D$ can have a Hausdorff dimension zero complement.



  7. $D$ can have an $h$-Hausdorff measure zero complement for any specified Hausdorff measure function $h.$




More precisely, a subset $D$ of $\mathbb R$ can be the discontinuity set for some derivative if and only if $D$ is an $F_{\sigma}$ first category (i.e. an $F_{\sigma}$ meager) subset of $\mathbb R.$



This characterization of the discontinuity set of a derivative can be found in the following references: Benedetto [1] (Chapter 1.3.2, Proposition, 1.10, p. 30); Bruckner [2] (Chapter 3, Section 2, Theorem 2.1, p. 34); Bruckner/Leonard [3] (Theorem at bottom of p. 27); Goffman [5] (Chapter 9, Exercise 2.3, p. 120 states the result); Klippert/Williams [7].



Regarding this characterization of the discontinuity set of a derivative, Bruckner and Leonard [3] (bottom of p. 27) wrote the following in 1966: Although we imagine that this theorem is known, we have been unable to find a reference. I have found the result stated in Goffman's 1953 text [5], but nowhere else prior to 1966 (including Goffman's Ph.D. Dissertation).



Interestingly, in a certain sense most derivatives have the property that $D$ is large in all of the ways listed above (#1 through #7).




In 1977 Cliff Weil [8] published a proof that, in the space of derivatives with the sup norm, all but a first category set of such functions are discontinuous almost everywhere (in the sense of Lebesgue measure). When Weil's result is paired with the fact that derivatives (being Baire $1$ functions) are continuous almost everywhere in the sense of Baire category, we get the following:



(A) Every derivative is continuous at the Baire-typical point.



(B) The Baire-typical derivative is not continuous at the Lebesgue-typical point.



Note that Weil's result is stronger than simply saying that the Baire-typical derivative fails to be Riemann integrable (i.e. $D$ has positive Lebesgue measure), or even stronger than saying that the Baire-typical derivative fails to be Riemann integrable on every interval. Note also that, for each of these Baire-typical derivatives, $\{D, \; {\mathbb R} - D\}$ gives a partition of $\mathbb R$ into a first category set and a Lebesgue measure zero set.



In 1984 Bruckner/Petruska [4] (Theorem 2.4) strengthened Weil's result by proving the following: Given any finite Borel measure $\mu,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has $\mu$-measure zero.




In 1993 Kirchheim [6] strengthened Weil's result by proving the following: Given any Hausdorff measure function $h,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has Hausdorff $h$-measure zero.



[1] John J. Benedetto, Real Variable and Integration With Historical Notes, Mathematische Leitfäden. Stuttgart: B. G. Teubner, 1976, 278 pages. [MR 58 #28328; Zbl 336.26001]



[2] Andrew M. Bruckner, Differentiation of Real Functions, 2nd edition, CRM Monograph Series #5, American Mathematical Society, 1994, xii + 195 pages. [The first edition was published in 1978 as Springer-Verlag's Lecture Notes in Mathematics #659. The second edition is essentially unchanged from the first edition with the exception of a new chapter on recent developments (23 pages) and 94 additional bibliographic items.] [MR 94m:26001; Zbl 796.26001]



[3] Andrew M. Bruckner and John L. Leonard, Derivatives, American Mathematical Monthly 73 #4 (April 1966) [Part II: Papers in Analysis, Herbert Ellsworth Slaught Memorial Papers #11], 24-56. [MR 33 #5797; Zbl 138.27805]



[4] Andrew M. Bruckner and György Petruska, Some typical results on bounded Baire $1$ functions, Acta Mathematica Hungarica 43 (1984), 325-333. [MR 85h:26004; Zbl 542.26004]




[5] Casper Goffman, Real Functions, Prindle, Weber & Schmidt, 1953/1967, x + 261 pages. [MR 14,855e; Zbl 53.22502]



[6] Bernd Kirchheim, Some further typical results on bounded Baire one functions, Acta Mathematica Hungarica 62 (1993), 119-129. [MR 94k:26008; Zbl 786.26002]



[7] John Clayton Klippert and Geoffrey Williams, On the existence of a derivative continuous on a $G_{\delta}$, International Journal of Mathematical Education in Science and Technology 35 (2004), 91-99.



[8] Clifford Weil, The space of bounded derivatives, Real Analysis Exchange 3 (1977-78), 38-41. [Zbl 377.26005]


modular arithmetic - What is $26^{32} \bmod 12$?


What is the correct answer to this expression:


$26^{32} \pmod {12}$


When I tried in Wolfram Alpha the answer is $4$, this is also my answer using Fermat's little theorem, but in a calculator the answer is different, $0.$


Answer



First, note that $26 \equiv 2 \pmod {12}$, so $26^{32} \equiv 2^{32} \pmod {12}$.


Next, note that $2^4 \equiv 16 \equiv 4 \pmod {12}$, so $2^{32} \equiv \left(2^4\right)^8 \equiv 4 ^8 \pmod {12}$, and $4^2 \equiv 4 \pmod {12}$.


Finally, $4^8 \equiv \left(4^2\right)^4 \equiv 4^4 \equiv \left(4^2\right)^2 \equiv 4^2 \equiv 4 \pmod {12}$.


Then we get the result.


There are slicker solutions with just a few results from elementary number theory, but this is a very basic method which should be easy enough to follow.
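As an aside on the calculator discrepancy mentioned in the question: $26^{32}$ is a 46-digit number, so a pocket calculator evaluates it in floating point and has long since lost the low-order digits; any remainder computed from that rounded value is meaningless. Exact integer arithmetic agrees with the hand computation. A purely illustrative check in Python:

```python
# Modular exponentiation never forms the huge intermediate value,
# so the result is exact:
print(pow(26, 32, 12))   # 4

# Python integers are arbitrary precision, so the direct route agrees:
print((26 ** 32) % 12)   # 4
```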



Monday, October 28, 2019

real analysis - How do I evaluate this limit :$displaystyle lim_{xto infty} (1+cos x)^frac{1}{cos x}$?



I would like to know if this :$$ \lim_{x\to \infty} (1+\cos x)^\frac{1}{\cos x}$$ does exist and how do i evaluate it ?.




Note : I have tried to use the standard limit : $$ \lim_{z\to \infty} \left(1+\frac{1}{z}\right)^z=e$$ using $\cos x=1/z $ but i can't
succeed



Thank you for any help .


Answer



For any natural number $n$, at $x=2n\pi$ the value of the function is $(1+1)^{1/1}=2$, while as $x\to(2n+\tfrac12)\pi$ we have $\cos x\to 0$, so the expression tends to $e$. Different subsequences give different limits, so there is no convergence.
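A quick numerical look at two subsequences makes the non-convergence concrete (a throwaway sketch; the sample points are arbitrary):

```python
import math

g = lambda x: (1 + math.cos(x)) ** (1 / math.cos(x))

# Along x = 2*n*pi, cos(x) = 1 and the value stays at 2:
print([round(g(2 * n * math.pi), 6) for n in (1, 10, 100)])

# Just before x = (2n + 1/2)*pi, cos(x) is tiny and the value is close to e:
print([round(g((2 * n + 0.499) * math.pi), 3) for n in (1, 10, 100)])
```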


complex numbers - $1/i=i$. I must be wrong but why?




$$\frac{1}{i} = \frac{1}{\sqrt{-1}} = \frac{\sqrt{1}}{\sqrt{-1}} = \sqrt{\frac{1}{-1}} = \sqrt{-1} = i$$




I know this is wrong, but why? I often see people making simplifications such as $\frac{\sqrt{2}}{2} = \frac{1}{\sqrt{2}}$, and I would calculate such a simplification in the manner shown above, namely



$$\frac{\sqrt{2}}{2} = \frac{\sqrt{2}}{\sqrt{4}} = \sqrt{\frac{2}{4}} = \frac{1}{\sqrt{2}}$$


Answer



What you are doing is a version of
$$
-1=i^2=\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)}=\sqrt1=1.
$$
It simply shows that for non-positive numbers, it is not always true that $\sqrt{ab}=\sqrt{a}\sqrt{b}$.


sequences and series - Why does $limlimits_{ntoinfty} e^{-n}sum_{i=1}^{n}frac{n^i}{i!} = frac{1}{2}$ and not 1?





The limit
$$\lim_{n\to\infty} e^{-n}\sum_{i=1}^{n}\frac{n^i}{i!}$$
can be seen to be $\frac{1}{2}$, yet isn't the sum in this expression just going to be $\lim\limits_{n\to\infty}e^{n}$, making the limit 1?



I'm having trouble wrapping my head around why this isn't the case. Any help would be much appreciated.



Answer




  1. The problem with your reasoning is that the two terms, $e^{-n}$ and $\sum_{i=1}^n \frac{n^i}{i!}$, can't be analyzed separately. Notice that $e^{-n}$ approaches $0$, and the second term approaches $\infty$, so the limit of the product would be $\boldsymbol{0 \cdot \infty}$, an indeterminate form. A limit of the form $0 \cdot \infty$ might equal any real number, or might even equal $\infty$.


  2. It may be instructive to consider a different expression where some $n$s are replaced by $m$. The following limit can be evaluated as you say (I also made the sum start from $i = 0$ for simplicity):
    $$
    \lim_{n \to \infty} e^{-\color{red}{m}} \sum_{i=0}^{\color{blue}{n}} \frac{{\color{red}{m}}^i}{i!} = 1,
    $$

    because it is the product of limits, $e^{-m} \cdot e^m = 1$. And if we instead take the limit as $m \to \infty$, then we get
    $$
    \lim_{m \to \infty} e^{-m} \sum_{i=0}^n \frac{m^i}{i!} = 0,

    $$

    because the exponential beats the polynomial, and goes to $0$. In your problem, essentially, $m$ and $n$ are both going to $\infty$ at the same time, so we might imagine that the two possible results ($0$ and $1$) are "competing"; we don't know which one will win (and it turns out that the result is $\frac12$, somewhere in the middle).


  3. How can we show that your limit is $\frac12$? This is a difficult result; please take a look at this question for several proofs (thanks to TheSilverDoe for posting).



    In that question, the summation starts from $i=0$ instead of $i=1$. However, note that we can add $e^{-n}$ to your limit and it will not change (since $\lim_{n \to \infty} e^{-n} = 0$). So this gives
    $$
    \lim_{n \to \infty} \left(\left( e^{-n} \sum_{i=1}^n \frac{n^i}{i!} \right) + e^{-n} \right)
    = \lim_{n \to \infty} e^{-n} \sum_{i=\color{red}{0}}^n \frac{n^i}{i!}.
    $$
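To see the "competition" numerically, one can evaluate $e^{-n}\sum_{i=0}^{n} n^i/i!$ for growing $n$. A small sketch (summing in log space via `math.lgamma` to avoid overflow; the particular values of $n$ are arbitrary):

```python
import math

def f(n):
    # e^{-n} * sum_{i=0}^{n} n^i / i!, each term computed via logarithms
    return sum(math.exp(-n + i * math.log(n) - math.lgamma(i + 1))
               for i in range(n + 1))

# The values decrease toward 1/2, roughly like 1/sqrt(n):
print(f(20), f(200), f(2000))
```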




Sunday, October 27, 2019

integration - How to evaluate the following integral involving a gaussian?



I want to evaluate the following integral:



$$\int\limits_0 ^\infty {x \sin{px} \exp{(-a^2x^2})} dx$$



Now I am unsure how to proceed. I know that this is an even function so I can extend the limit terms to $-\infty, \infty $ and then divide by 2. I have tried to evaluate this on Wolfram Alpha, but it only shows the answer while I am interested in the procedure.


Answer




Start with:
$$I\left( p \right)=\int_{0}^{\infty }{\cos \left( px \right)\exp (-{{a}^{2}}{{x}^{2}})dx}$$
We can use differentiation under the integral sign:



$${I}'\left( p \right)=-\int_{0}^{\infty }{x\sin \left( px \right)\exp (-{{a}^{2}}{{x}^{2}})dx}$$
Integration by parts using $u=\sin \left( px \right)\quad and\quad dv=-x\exp \left( -{{a}^{2}}{{x}^{2}} \right)dx$
$${I}'\left( p \right)=\left. \sin \left( px \right)\frac{\exp \left( -{{a}^{2}}{{x}^{2}} \right)}{2{{a}^{2}}} \right|_{0}^{\infty }-\frac{p}{2{{a}^{2}}}\int_{0}^{\infty }{\cos \left( px \right)\exp \left( -{{a}^{2}}{{x}^{2}} \right)dx}$$
The first term on the right vanishes, and we have the first-order differential equation:
$$\frac{{I}'\left( p \right)}{I\left( p \right)}=-\frac{p}{2{{a}^{2}}}\Rightarrow \ln \left( I\left( p \right) \right)=-\frac{{{p}^{2}}}{4{{a}^{2}}}+C$$
Using the half-line Gaussian integral $$I\left( 0 \right)=\int_{0}^{\infty }{\exp \left( -{{a}^{2}}{{x}^{2}} \right)dx}=\frac{\sqrt{\pi }}{2a}$$

we can find $C=\ln \left( \frac{\sqrt{\pi }}{2a} \right)$,
hence
$$\ln \left( I\left( p \right) \right)=-\frac{{{p}^{2}}}{4{{a}^{2}}}+\ln \left( \frac{\sqrt{\pi }}{2a} \right)$$
So
$$I\left( p \right)=\frac{\sqrt{\pi }}{2a}\exp \left( -\frac{{{p}^{2}}}{4{{a}^{2}}} \right)$$
Finally the integral in question equals
$$-{I}'\left( p \right)=-\frac{d}{dp}\left( \frac{\sqrt{\pi }}{2a}\exp \left( -\frac{{{p}^{2}}}{4{{a}^{2}}} \right) \right)=\frac{\sqrt{\pi }\,p}{4{{a}^{3}}}\exp \left( -\frac{{{p}^{2}}}{4{{a}^{2}}} \right)$$
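As a sanity check, the standard table value $\int_0^\infty x\sin(px)e^{-a^2x^2}\,dx=\frac{\sqrt\pi\,p}{4a^3}e^{-p^2/(4a^2)}$ can be verified by brute-force numerical integration. A rough trapezoid-rule sketch (the grid size and cutoff are ad hoc and assume $a$ is of order 1):

```python
import math

def integrand(x, p, a):
    return x * math.sin(p * x) * math.exp(-(a * x) ** 2)

def numeric(p, a, upper=20.0, n=200_000):
    # composite trapezoid rule; the integrand decays like exp(-a^2 x^2),
    # so a modest cutoff suffices
    h = upper / n
    s = 0.5 * (integrand(0.0, p, a) + integrand(upper, p, a))
    for i in range(1, n):
        s += integrand(i * h, p, a)
    return s * h

def closed_form(p, a):
    return math.sqrt(math.pi) * p / (4 * a ** 3) * math.exp(-p * p / (4 * a * a))

p, a = 1.5, 1.0
print(abs(numeric(p, a) - closed_form(p, a)) < 1e-6)  # True
```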


linear algebra - Can we prove that matrix multiplication by its inverse is commutative?

We know that $AA^{-1} = I$ and $A^{-1}A = I$, but is there a proof for the commutative property here? Or is this just the definition of invertibility?

modular arithmetic - Solving linear equivalence mod $26$

I am looking to solve for $x$ in this modular arithmetic problem. We haven't learned anything remotely as complex as this in my class so not exactly sure where to start.



$$y \equiv 5x + 25 \pmod{26}$$



Here's what I know so far.



The additive inverse of 25 in mod 26 is 1 since: $1 + 25 \equiv 0 \pmod{26}$. So, I added 1 to both sides of the congruence.




The multiplicative inverse of 5 in mod 26 is −5 since: $5(−5) \equiv −25 \equiv 1 \pmod{26}$. So, I multiplied by -5 on both sides of the congruence.



After doing those steps this is what I have come up with



$$-5 - 5y \equiv ( -25x - 130) \pmod{26} \tag 1$$



Then I add the 5 over to the other side of the equation to get



$$-5y \equiv -25x - 125 \pmod{26} \tag 2$$




Then I divided by a factor of 5 to each of the numbers



$$-y \equiv -5x - 25 \pmod{26} \tag 3$$



Now I feel stuck and like I'm just back at the start... hmm is this even correct? Let me know where I went wrong or what I need to continue doing! Thanks in advance
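For what it's worth, since $\gcd(5,26)=1$ the congruence can be solved mechanically: multiply $y-25$ by the inverse of $5$ modulo $26$. A small Python sketch (the sample value of $x$ is arbitrary):

```python
inv5 = pow(5, -1, 26)        # modular inverse (Python 3.8+); 21, i.e. -5 mod 26
print(inv5)                  # 21

# x ≡ 21 * (y - 25) (mod 26); round-trip check with a sample x:
x = 7
y = (5 * x + 25) % 26
print(inv5 * (y - 25) % 26)  # 7, recovering x
```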

Saturday, October 26, 2019

calculus - The following is a Taylor Series evaluated a particular value of x, find the sum of the series.


This is the Taylor Series in question 1 + $\frac{2}{1!}$+$\frac{4}{2!}$+$\frac{8}{3!}$+...+$\frac{2^n}{n!}$+...


I know how to find whether or not the series converges or diverges easily using the ratio test. I find that it converges. However, I don't know exactly how to find what value it converges to because the limit that you find using the ratio test doesn't yield the actual value that you converge to correct?


Answer



That is correct, the ratio test does not give a value for the series. If you want a value for the series, take a look at the power series of $e^x$
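Spelling out the hint: comparing with $e^x=\sum_{n\ge 0}x^n/n!$ at $x=2$ shows the given series sums to $e^2$. A quick numerical confirmation (illustrative only; 30 terms is an arbitrary truncation):

```python
import math

# partial sum of sum_{n>=0} 2^n / n!, which converges very fast
partial = sum(2.0 ** n / math.factorial(n) for n in range(30))
print(abs(partial - math.e ** 2) < 1e-12)  # True
```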


calculus - Any proof that verify why the limit of the difference is the difference of the limits?



I did some research on the internet and in books about why the limit of the difference is the difference of the limits, but I didn't find a proof. I would appreciate it if somebody could help me. Thanks. :)


Answer



Consider two real-valued functions $f(x)$ and $g(x)$ whose limits exist as $x \to a$.



Given: $\lim_{x \to a} f(x)$ = $K$ and $\lim_{x \to a} g(x)$ = $L$



Let $\varepsilon > 0$. Then there exist $\alpha > 0$ and $\beta > 0$ such that



$|f(x)-K| < \varepsilon/2$ whenever $0<|x-a|<\alpha$, and
$|g(x)-L| < \varepsilon/2$ whenever $0<|x-a|<\beta$.



Choose $\gamma = \min\{\alpha,\beta\}$.



Now we need to show that
$|f(x)+g(x)-(K+L)| < \varepsilon$ whenever $0<|x-a|<\gamma$.

Assume that we have $0<|x-a|<\gamma$. Then, by the triangle inequality,



$|f(x)+g(x)-(K+L)| = |(f(x)-K)+(g(x)-L)| \le |f(x)-K| + |g(x)-L| < \varepsilon/2 + \varepsilon/2 = \varepsilon$



In the last step we used the fact that, by our choice of $\gamma$, we also have $0<|x-a|<\alpha$ and $0<|x-a|<\beta$,

so we can use the initial statements in our proof.



Now replace $g(x)$ by $(-1)g(x)$ and you will get your proof.


probability - How do we find this expected value?




I'm just a little confused on this. I'm pretty sure that I need to use indicators for this but I'm not sure how I find the probability. The question goes like this:




A company puts five different types of prizes into their cereal boxes, one in each box and in equal proportions. If a customer decides to collect all five prizes, what is the expected number of boxes of cereals that he or she should buy?




I have seen something like this before and I feel that I'm close, I'm just stuck on the probability. So far I have said that $$X_i=\begin{cases}1 & \text{if the $i^{th}$ box contains a new prize}\\ 0 & \text{if no new prize is obtained} \end{cases}$$ I know that the probability of a new prize after the first box is $\frac45$ (because obviously the person would get a new prize with the first box) and then the probability of a new prize after the second prize is obtained is $\frac35$, and so on and so forth until the fifth prize is obtained. What am I doing wrong?! Or "what am I missing?!" would be the more appropriate question.


Answer



As the expected number of tries to obtain a success of probability $p$ is $\frac{1}{p}$, you get the expected number:

$$1+\frac{5}{4}+\frac{5}{3}+\frac{5}{2}+5=\frac{12+15+20+30+60}{12}=\frac{137}{12}\approx 11.42$$
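A short Monte Carlo simulation agrees with the coupon-collector answer (a sketch; the trial count and seed are arbitrary):

```python
import random

random.seed(0)
trials = 20_000
total = 0
for _ in range(trials):
    seen = set()
    boxes = 0
    while len(seen) < 5:             # keep buying until all 5 prizes are seen
        seen.add(random.randrange(5))
        boxes += 1
    total += boxes
print(total / trials)                # close to 137/12 ≈ 11.42
```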


Friday, October 25, 2019

radicals - How to prove: if $a,b in mathbb N$, then $a^{1/b}$ is an integer or an irrational number?


It is well known that $\sqrt{2}$ is irrational, and by modifying the proof (replacing 'even' with 'divisible by $3$'), one can prove that $\sqrt{3}$ is irrational, as well. On the other hand, clearly $\sqrt{n^2} = n$ for any positive integer $n$. It seems that any positive integer has a square root that is either an integer or irrational number.



  1. How do we prove that if $a \in \mathbb N$, then $\sqrt a$ is an integer or an irrational number?


I also notice that I can modify the proof that $\sqrt{2}$ is irrational to prove that $\sqrt[3]{2}, \sqrt[4]{2}, \cdots$ are all irrational. This suggests we can extend the previous result to other radicals.



  2. Can we extend 1? That is, can we show that for any $a, b \in \mathbb{N}$, $a^{1/b}$ is either an integer or irrational?


Answer




These (standard) results are discussed in detail in


http://math.uga.edu/~pete/4400irrationals.pdf


This is the second handout for a first course in number theory at the advanced undergraduate level. Three different proofs are discussed:


1) A generalization of the proof of irrationality of $\sqrt{2}$, using the decomposition of any positive integer into a perfect $k$th power times a $k$th power-free integer, followed by Euclid's Lemma. (For some reason, I don't give all the details of this proof. Maybe I should...)


2) A proof using the functions $\operatorname{ord}_p$, very much along the lines of the one Carl Mummert mentions in his answer.


3) A proof by establishing that the ring of integers is integrally closed. This is done directly from unique factorization, but afterwards I mention that it is a special case of the Rational Roots Theorem.


Let me also remark that every proof I have ever seen of this fact uses the Fundamental Theorem of Arithmetic (existence and uniqueness of prime factorizations) in some form. [Edit: I have now seen Robin Chapman's answer to the question, so this is no longer quite true.] However, if you want to prove any particular case of the result, you can use a brute force case-by-case analysis that avoids FTA.


To test the convergence of series 1



To test the convergence of the following series:




  1. $\displaystyle \frac{2}{3\cdot4}+\frac{2\cdot4}{3\cdot5\cdot6}+\frac{2\cdot4\cdot6}{3\cdot5\cdot7\cdot8}+...\infty $


  2. $\displaystyle 1+ \frac{1^2\cdot2^2}{1\cdot3\cdot5}+\frac{1^2\cdot2^2\cdot3^2}{1\cdot3\cdot5\cdot7\cdot9}+ ...\infty $



  3. $\displaystyle \frac{4}{18}+\frac{4\cdot12}{18\cdot27}+\frac{4\cdot12\cdot20}{18\cdot27\cdot36} ...\infty $




I cannot figure out the general $u_n$ term for these series(before I do any comparison/ratio test).



Any hints for these?


Answer




I cannot figure out the general $u_n$ term for these series(before I do any comparison/ratio test).





For the first series, one can start from the fact that, for every $n\geqslant1$, $$u_n=\frac{2\cdot4\cdots (2n)}{3\cdot5\cdots(2n+1)}\cdot\frac1{2n+2}=\frac{(2\cdot4\cdots (2n))^2}{2\cdot3\cdot4\cdot5\cdots(2n)\cdot(2n+1)}\cdot\frac1{2n+2},$$ that is, $$u_n=\frac{(2^n\,n!)^2}{(2n+1)!}\cdot\frac1{2n+2}=\frac{4^n\,(n!)^2}{(2n+2)!}.$$ Similar approaches yield the two other cases.
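The closed form $u_n=\frac{4^n\,(n!)^2}{(2n+2)!}$ can be double-checked against terms built directly from the pattern (a quick sketch):

```python
import math

def u_closed(n):
    return 4 ** n * math.factorial(n) ** 2 / math.factorial(2 * n + 2)

def u_direct(n):
    # 2*4*...*(2n) over 3*5*...*(2n+1), with the trailing factor 1/(2n+2)
    num = den = 1.0
    for k in range(1, n + 1):
        num *= 2 * k
        den *= 2 * k + 1
    return num / den / (2 * n + 2)

print(all(abs(u_closed(n) - u_direct(n)) < 1e-12 for n in range(1, 10)))  # True
```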


analysis - Convergence of Sequence with factorial

I want to show that $$ a_n = \frac{3^n}{n!} $$ converges to zero. I tried Stirling's formula; by it the fraction becomes $$ \frac{3^n}{\sqrt{2\pi n} (n^n/e^n)} $$ which equals $$ \frac{1}{\sqrt{2\pi n}} \left( \frac{3e}{n} \right)^n $$ From this, can I conclude that it goes to zero because $\frac{3e}{n}$ and $\frac{1}{\sqrt{2\pi n}}$ both approach zero?

algebra precalculus - 'Plus' Operator analog of the factorial function?







Is there a similar function for the addition operator as there is the factorial function for the multiplication operator?



For factorials it is 5! = 5*4*3*2*1, is there a function that would do 5+4+3+2+1?




Thanks,
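For reference, the sum $1+2+\dots+n$ is the $n$th triangular number, with the closed form $\frac{n(n+1)}{2}$; unlike the factorial it has no universally standard operator notation. A trivial illustration:

```python
def triangular(n):
    # 1 + 2 + ... + n via the closed form n*(n+1)/2
    return n * (n + 1) // 2

print(triangular(5))                    # 15, i.e. 5+4+3+2+1
print(triangular(5) == sum(range(6)))   # True
```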

Thursday, October 24, 2019

polynomials - Question about quartic equation having all 4 real roots


I would appreciate if somebody could help me with the following problem.I am not good at quartic equations,so could not attempt much.


Q:The number of integral values of $p$ for which the equation $x^4+4x^3-8x^2+p=0$ has all 4 real roots.


Let $\alpha,\beta,\gamma,\delta $ are four real roots.
According to Vieta's formula
$\alpha+\beta+\gamma+\delta=-4$
$\alpha\beta+\alpha\gamma+\alpha\delta+\beta\gamma+\beta\delta+\gamma\delta=-8$
$\alpha\beta\gamma+\alpha\beta\delta+\alpha\gamma\delta+\beta\gamma\delta=0$
$\alpha\beta\gamma\delta=p$


then i got stuck..what to do?


Thanks in advance.


Answer



For a simple approach consider the function $y=x^4+4x^3-8x^2=x^2\cdot\left((x+2)^2-12\right)$ - the intersections with the line $y=-p$ will give the roots of the original. Since this is just a horizontal line in the normal $x,y$ plane, a quick sketch will show that the number of real roots is governed by the relationship of $p$ to the local minima/maxima of the quartic.



The form of the quartic makes this easy to sketch - and the double root at $x=0$ means the cubic you get on differentiating has an obvious root, leaving a quadratic to factor.
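Following the sketch above: $y'=4x^3+12x^2-16x=4x(x+4)(x-1)$, so the critical points are $x=-4,0,1$. A throwaway computation of the critical values, which govern how the line $y=-p$ meets the curve:

```python
def y(x):
    return x ** 4 + 4 * x ** 3 - 8 * x ** 2

crit = [-4, 0, 1]
print([y(c) for c in crit])  # [-128, 0, -3]
# Reading off the sketch: the line y = -p meets the curve in 4 real points
# (counting tangencies) exactly when -3 <= -p <= 0, i.e. p in {0, 1, 2, 3};
# for 4 *distinct* real roots one needs -3 < -p < 0, i.e. p in {1, 2}.
```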


elementary number theory - Prove that $sqrt 3$ is irrational

I have to prove that $\sqrt 3$ is irrational.
Let us assume that $\sqrt 3$ is rational. This means that for some integers $p$ and $q$ having no common factor other than 1,



$$\frac{p}{q} = \sqrt3$$




$$\Rightarrow \frac{p^2}{q^2} = 3$$



$$\Rightarrow p^2 = 3 q^2$$



This means that 3 divides $p^2$. This means that 3 divides $p$ (because every factor must appear twice for the square to exist). So we have, $p = 3 r$ for some integer $r$. Extending the argument to $q$, we discover that they have a common factor of 3, which is a contradiction.



Is this proof correct?

calculus - Find $lim_{ntoinfty} frac{(n!)^{1/n}}{n}$.






Find $$\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}.$$




I don't know how to start. Hints are also appreciated.


Answer




let $$y= \frac{(n!)^{1/n}}{n}.$$
$$\implies y=\left(\frac{n(n-1)(n-2)\cdots 3\cdot 2\cdot 1}{n\cdot n\cdot n\cdots n}\right)^{\frac{1}{n}} $$
Note that we can distribute the $n$ in the denominator and give an $'n'$ to each term
$$\implies \log y= \frac {1}{n}\left(\log\frac{1}{n}+\log\frac{2}{n}+...+\log\frac{n}{n}\right)$$
applying $\lim_{n\to\infty}$ on both sides, we find that R.H.S is of the form



$$ \lim_{n\to\infty} \frac{1}{n} \sum_{r=1}^{n}f \left(\frac{r}{n}\right)$$
which can be evaluated by integration $$=\int_0^1\log(x)dx$$
$$=\Big[x\log x-x\Big]_0^1$$ plugging in the limits (carefully here, since $x\log x\to 0$ as $x\to 0^+$) we get $-1$.




$$ \log y=-1,\implies y=\frac{1}{e}$$
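A numerical check of the limit (a sketch; by Stirling the error decays only like $\log n/n$, so a fairly large $n$ is needed):

```python
import math

n = 5000
# (n!)^(1/n) / n = exp((1/n) * sum log(k/n)), computed without forming n!
val = math.exp(sum(math.log(k / n) for k in range(1, n + 1)) / n)
print(abs(val - 1 / math.e) < 1e-3)  # True
```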


elementary number theory - Calculating $12^{20} bmod(41)$ by hand




Hi I'm practicing the Pohlig-Helman algorthm right now and I was wondering if I could get an explanation on how to easily compute something like



$12^{20} \bmod(41)$ by hand



I can't think of a smart way to do it, I won't be allowed a calculator on an exam. So any help would be greatly appreciated.


Answer



\begin{align}12^{20} &\equiv 3^{20}2^{40} \pmod{41}\\
&\equiv 3^{20} \pmod{41} \text{, Fermat's}\\
&= (3^4)^5 \pmod{41} \\

&\equiv (-1)^5 \pmod{41} , \text{since } 81 = 2(41)-1\\
&\equiv -1 \pmod{41}\end{align}
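The by-hand reduction can be mirrored step by step with repeated squaring (purely illustrative):

```python
x2 = 12 * 12 % 41        # 144 ≡ 21
x4 = x2 * x2 % 41        # 21^2 = 441 ≡ 31
x5 = x4 * 12 % 41        # 31 * 12 = 372 ≡ 3
x20 = pow(x5, 4, 41)     # (12^5)^4 = 12^20 ≡ 3^4 = 81 ≡ -1 ≡ 40
print(x20, pow(12, 20, 41))  # 40 40
```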


Wednesday, October 23, 2019

real analysis - Convergence of the series $sum limits_{n=2}^{infty} frac{1}{nlog^s n}$



We all know that $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^s}$ converges for $s>1$ and diverges for $s \leq 1$ (Assume $s \in \mathbb{R}$).



I was curious to see till what extent I can push the denominator so that it will still diverges.



So I took $\displaystyle \sum_{n=2}^{\infty} \frac{1}{n\log n}$ and found that it still diverges. (This can be checked by using the well known test that if we have a monotone decreasing sequence, then $\displaystyle \sum_{n=2}^{\infty} a_n$ converges iff $\displaystyle \sum_{n=2}^{\infty} 2^na_{2^n}$ converges).




No surprises here. I expected it to diverge since $\log n$ grows slowly than any power of $n$.



However, when I take $\displaystyle \sum_{n=2}^{\infty} \frac{1}{n(\log n)^s}$, I find that it converges $\forall s>1$.



(By the same argument as previous).



This doesn't make sense to me though.



If this were to converge, then I should be able to find a $s_1 > 1$ such that




$\displaystyle \sum_{n=2}^{\infty} \frac{1}{n^{s_1}}$ is greater than $\displaystyle \sum_{n=2}^{\infty} \frac{1}{n (\log n)^s}$



Doesn't this mean that in some sense $\log n$ grows faster than a power of $n$?



(or)



How should I make sense of (or) interpret this result?



(I am assuming that my convergence and divergence conclusions are right).


Answer




Yes $\displaystyle \sum_{n=2}^{\infty} \dfrac{1}{n (\log n)^s}$ is convergent if $\displaystyle s > 1$, we can see that by comparing with the corresponding integral.



As to your other question, if



$\displaystyle \sum_{n=2}^{\infty} \frac{1}{n^{s_1}}$ is greater than $\displaystyle \sum_{n=2}^{\infty} \frac{1}{n (\log n)^s}$



does not imply $\log n$ grows faster than a power of $\displaystyle n$. You cannot compare them term by term.



What happens is that the first "few" terms of the series dominate (the remainder goes to 0). For a small enough $\displaystyle \epsilon$, we have that $\log n > n^{\epsilon}$ for a sufficient number of (initial) terms, enough for the series without $\log n$ to dominate the other.
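The contrast between $s=1$ and $s=2$ is easy to see numerically: the growth of the partial sums of $\sum 1/(n(\log n)^s)$ between $N=10^4$ and $N=10^5$ differs by an order of magnitude (a rough sketch; the cutoffs are arbitrary):

```python
import math

def partial(s, N):
    return sum(1.0 / (n * math.log(n) ** s) for n in range(2, N))

for s in (1.0, 2.0):
    growth = partial(s, 100_000) - partial(s, 10_000)
    # s=1 keeps growing (increment ~ 0.22); s=2 has nearly stalled (~ 0.02)
    print(s, growth)
```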


exponential function - The integral $int_0^infty e^{-t^2}dt$


My high-school teacher and I have argued about this limit for quite a long time.


We easily reached the conclusion that the integral from $0$ to $x$ of $e^{-t^2}\,dt$ has a limit somewhere between $0$ and $\pi/2$, using a little trick, namely the inequality $e^t>t+1$ for every real $t$. Replacing $t$ with $t^2$, taking reciprocals, and integrating from $0$ to $x$ gives a beautiful $\tan^{-1}$, and $\pi/2$ comes naturally.


Next, the limit seemed impossible to find. One week later, after some google searches, I found what the limit is. This usually spoils the thrill of a problem, but in this case it only added to the curiosity. My teacher then explained that modern approaches, like a computerised approximation, might have been applied to find the limit, since the erf is not elementary. I argued that the result was too beautiful to be only the result of computer brute force.


After a really vague introduction to Fourier series that he provided, I understood that Fourier in some sense generalised the first inequality, the one I used to get the bounds for the integral, with more terms of higher powers.


To be on point: I wish to find a simple proof of the result that the limit is indeed $\sqrt\pi/2$, using the same concepts I am familiar with. I do not know what really Fourier does, but i am open to any new information.


Thank you for your time, I appreciate it a lot. I am also sorry for not using proper mathematical symbols, since I am using the app.


Answer



It's useless outside of this one specific integral (and its obvious variants), but here's a trick due to Poisson: \begin{align*} \left(\int_{-\infty}^\infty dx\; e^{-x^2}\right)^2 &= \int_{-\infty}^\infty \int_{-\infty}^\infty \;dx\;dy\; e^{-x^2}e^{-y^2} \\ &= \int_{-\infty}^\infty \int_{-\infty}^\infty \;dx\;dy\; e^{-(x^2 + y^2)} \\ &= \int_0^{2\pi} \!\!\int_0^\infty \;r\,dr\;d\theta\; e^{-r^2} \\ &= -\pi e^{-r^2}\Big\vert_{r=0}^\infty \\ &= \pi, \end{align*} switching to polar coordinates halfway through. Thus the given integral is $\frac{1}{2}\sqrt{\pi}$.
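For what it's worth, a direct numerical check that $\int_0^\infty e^{-t^2}\,dt=\tfrac{\sqrt\pi}{2}$ (midpoint rule on a truncated interval; the tail beyond $t=10$ is below $e^{-100}$, so the cutoff is harmless):

```python
import math

n = 100_000
h = 10.0 / n
s = h * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n))
print(abs(s - math.sqrt(math.pi) / 2) < 1e-6)  # True
```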



integration - Is there way to integrate this function other than numerically?



I am wondering if there is a way to evaluate the integral $$\int_{0}^{\infty} \frac{\tan^{-1}(\pi{x})-\tan^{-1}(x)}{e^{x^2}}\,dx$$ without resorting to Simpson's rule/numerical integration. My first thought was to split it into two integrals and use integration by parts on each, but that did not work. Each step of integration by parts just convoluted it more and more.



Depending how you integrate by parts, you either get the chain of convoluted integrals I did, or immediately wind up with the Gauss Error Function $\text{erf}(x)$.



I'm self-taught and still relatively new, so please forgive me if I missed something simple. Anyone know what to do here?


Answer




This is a real monster ... but a CAS gave a result I am sure you will enjoy !



If
$$I_a=\int_0^\infty e^{-x^2} \tan ^{-1}(a x)\,dx$$ then, assuming that $a$ is a real positive number,
$$I_a=\frac{1}{4} \sqrt{\pi } \left(\text{erfi}\left(\frac{1}{a}\right) (\gamma -2 \log
(a))+\pi \right)+$$
$$\frac{\text{HypergeometricPFQ}^{(\{0,0\},\{0,1\},0)}\left(\left\{\frac{1}{2},1\right\},\left\{\frac{3}{2},1\right\},\frac{1}{a^2}\right)-2 \,
_2F_2\left(\frac{1}{2},\frac{1}{2};\frac{3}{2},\frac{3}{2};\frac{1}{a^2}\right)}
{2 a} $$
where appear hypergeometric functions and derivatives.



The decimal value of $(I_\pi-I_1)$ is $0.372711790424249$.



integration - Closed form for $ int_0^infty {frac{{{x^n}}}{{1 + {x^m}}}dx }$



I've been looking at



$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$



It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:



$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$




$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$



$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$



So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comess to my mind because of the $\dfrac{{\pi x}}{{\sin \pi x}}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.






UPDATE:




The integral reduces to finding



$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$



With $a =\dfrac{n+1}{m}$ which converges only if



$$0 < a < 1$$



Using series I find the solution is





$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$




Can this be put it terms of the Digamma Function or something of the sort?


Answer



I would like to make a supplementary calculation on BR's answer.



Let us first assume that $0 < \mu < \nu$ so that the integral
$$ \int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx $$

converges absolutely. By the substitution $x = \tan^{2/\nu} \theta$, we have
$$ \frac{dx}{1+x^{\nu}} = \frac{2}{\nu} \tan^{(2/\nu)-1} \theta \; d\theta. $$
Thus
$$ \begin{align*}
\int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx
& = \int_{0}^{\frac{\pi}{2}} \frac{2}{\nu} \tan^{\frac{2\mu}{\nu}-1} \theta \; d\theta \\
& = \frac{1}{\nu} \beta \left( \frac{\mu}{\nu}, 1 - \frac{\mu}{\nu} \right) \\
& = \frac{1}{\nu} \Gamma \left( \frac{\mu}{\nu} \right) \Gamma \left( 1 - \frac{\mu}{\nu} \right) \\
& = \frac{\pi}{\nu} \csc \left( \frac{\pi \mu}{\nu} \right),
\end{align*} $$

where the last equality follows from Euler's reflection formula.
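A quick numerical cross-check of $\int_0^\infty \frac{x^{\mu-1}}{1+x^\nu}\,dx=\frac{\pi}{\nu}\csc\frac{\pi\mu}{\nu}$ for the $\frac{x}{1+x^4}$ case from the question (midpoint rule with an ad hoc cutoff):

```python
import math

def numeric(mu, nu, upper=200.0, n=200_000):
    h = upper / n
    return h * sum(((i + 0.5) * h) ** (mu - 1) / (1 + ((i + 0.5) * h) ** nu)
                   for i in range(n))

def closed(mu, nu):
    return (math.pi / nu) / math.sin(math.pi * mu / nu)

print(closed(2, 4))                              # pi/4, matching the question
print(abs(numeric(2, 4) - closed(2, 4)) < 1e-3)  # True
```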


abstract algebra - Finding intermediate subfields of an extension




Consider the Galois extension $\mathbb{Q}(\sqrt{p_1},...,\sqrt{p_n})\vert\mathbb{Q}$ where $p_1,...,p_n$ are distinct prime numbers.
Find all the intermediate subfields $K$ such that $[K:\mathbb{Q}]=2$. I know that:



1) $\mathbb{Q}(\sqrt{p_1},...,\sqrt{p_n})$ is the splitting field of $f(x)= (x^2-p_1)...(x^2-p_n)$



2) $[\mathbb{Q}(\sqrt{p_1},...,\sqrt{p_n}):\mathbb{Q}]= 2^n $



3) Since $\sqrt {p_i}\notin\mathbb{Q}(\sqrt{p_1},...,\sqrt{p_{i-1}})$ we have that




$[(\mathbb{Q}(\sqrt{p_1},..,\sqrt{p_{i}}):\mathbb{Q}(\sqrt{p_1},...,\sqrt{p_{i-1}})]=2$



4) By Galois Correspondence the subfields with degree 2 over $\mathbb{Q}$ corresponds to subgroups of index 2 of the Galois group(that has order $2^n$),that are subgroups of order $2^{n-1}$.



I am not seeing how can I find and write these subgroups.



PS : I did a numerical example with $\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5})$ in this case I found that the intermediate subfields of degree 2 are of the form $\mathbb{Q}(\sqrt{q})$ where $q$ is a element (not 1) from the basis of $ \mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5})$ over $\mathbb{Q}$



Thanks in advance


Answer




I think that the quickest proof comes from linear algebra. I give all the details. You know that your Galois group $G=Gal(K/\mathbf Q)$ has order $2^n$. Any $s\in G$ is determined by its action on the roots $\sqrt p_i$, and since $s(p_i)=p_i$, necessarily $s(\sqrt p_i)=\pm \sqrt p_i$, which means that any $s$ has order $2$, and so $G$ is abelian, isomorphic (in additive notation) to $(\mathbf Z/2\mathbf Z)^n$. In other words, $G$ may be viewed as a vector space of dimension $n$ over the field $\mathbf F_2$ with $2$ elements. A basis of $G$ consists of the $s_j$ defined by $s_j(\sqrt p_i)= (-1)^{\delta_{ij}} \sqrt p_i$ ($\delta_{ij}$ being Kronecker's symbol).

By the Galois correspondence, you are looking for all the subgroups $H$ of $G$ of index $2$. In terms of linear algebra, $H$ is a hyperplane of $G$, or equivalently, $H$ is the kernel of a linear form $f:G\to \mathbf F_2$. In general, two linear forms with the same kernel are proportional, but here, because the base field is $\mathbf F_2$, they must coincide. In other words you are simply looking for the dual $\hat G$ of the vector space $G$, which also has dimension $n$. Actually, a dual basis consists of the linear forms $f_i$ determined by $f_i(s_j)=\delta_{ij}$.

Since the linear forms $\pi_i$ defined by $\pi_i(s_j)=s_j(\sqrt p_i)/ \sqrt p_i$ share the same property, $\hat G$ can be identified with the subspace $R$ of $\mathbf Q^*/{\mathbf Q^*}^2$ generated by the classes $[p_i]$ of $p_i$ mod ${\mathbf Q^*}^2$, usually called the "Kummer radical" of $K$. The duality above is then presented as a nondegenerate pairing $G\times R\to\mathbf F_2, (s,[a])\to s(\sqrt a)/\sqrt a$, and the fixator of $\mathbf Q(\sqrt a)$ is the hyperplane orthogonal to $[a]$, as pointed out by @Robert Shore.



NB: If you know about Kummer theory, see https://math.stackexchange.com/a/1609061/300700, where everything (including your beginning) can be given a unified proof.
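For the numerical example in the question's PS, the correspondence predicts $2^n-1$ quadratic subfields $\mathbb{Q}(\sqrt q)$, one for each nontrivial class $[q]$ in the Kummer radical, i.e. each squarefree product of a nonempty subset of the primes. A toy enumeration for $\{2,3,5\}$:

```python
import math
from itertools import combinations

primes = [2, 3, 5]
radicands = sorted(math.prod(c)
                   for r in range(1, len(primes) + 1)
                   for c in combinations(primes, r))
print(radicands, len(radicands))  # [2, 3, 5, 6, 10, 15, 30] 7  (= 2**3 - 1)
```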



Monday, October 21, 2019

Summation and Arithmetic progression problem




This is the question which I am referring to




If $S_n=an^2 + bn$, verify that the series $\sum {t_{n}}$ is arithmetic, where $S_n=\sum_{k=1}^{n}{t_k}$.




My try:





  • First of all, I used the equation below to calculate ${t_n}$:



${t_n} = S_n - S_{n-1} = a (2n-1)-b$




  • Then I calculated the common difference using the equation below:



    $d=t_n-t_{n-1}$


  • Then I calculated $t_1$ by putting $n=1$.


  • Then I calculated $s_n$ by the AP sum formula and it came out to be
    $s_n= an^2-bn$, but for $\sum {t_{n}}$ to be arithmetic we need $S_n=s_n$.



Please point out where I went wrong.


Answer



$t_n=a\{n^2-(n-1)^2\}+b\{n-(n-1)\}=2na-a+b$



$$t_n-t_{n-1}=2na-a+b-\{2(n-1)a-a+b\}=2a$$ which being independent of $n$ is constant
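The answer's computation can be sanity-checked numerically; the coefficients $a$ and $b$ below are arbitrary illustrative values:

```python
a, b = 3, 5  # arbitrary coefficients for S_n = a*n^2 + b*n

def S(n):
    return a * n * n + b * n

# t_n = S_n - S_{n-1}
t = [S(n) - S(n - 1) for n in range(1, 11)]

# t_n = 2na - a + b, and the common difference t_n - t_{n-1} is the constant 2a
assert all(t[n - 1] == 2 * n * a - a + b for n in range(1, 11))
assert all(t[i + 1] - t[i] == 2 * a for i in range(len(t) - 1))
print(t)
```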


calculus - Proof by induction for recursive sequence with no explicit formula.





The problem I am trying to solve is: "show that the sequence defined by $a_1=6$ and $a_{n+1}=\sqrt{6+a_n}$ for $n\ge 1$ is convergent, and find the limit."




So I know that I need to use proof by induction to show that the sequence is decreasing, and then show that it has a greatest lower bound of $3$. And then by the Monotone convergence theorem I know it converges to $3$.



I tried to find an explicit formula for the sequence but I was unsuccessful. So my problem is that I don't know how to use induction on a recursively defined sequence without an explicit formula.


Answer



First we prove by induction that $a_n>3$. It's true for $n=1$. Assuming $a_n>3$, we know $a_n+6>3^2$ so $\sqrt{a_n+6}>3$ or $a_{n+1}>3$. Thus we established the lower bound $3$.



Now we see that $x^2-x-6$ is a strictly increasing polynomial for $x>3$ and has a root at $x=3$; thus $x^2-x-6>0$ for $x>3$. Hence $a_n^2-a_n-6>0$, which we can rewrite as $\sqrt{a_n+6}<a_n$, i.e. $a_{n+1}<a_n$, so the sequence is decreasing.


Now realize that $\sqrt{x+6}$ is continuous, so that, when setting $\lim a_n=L$, we know: $$L=\lim a_{n+1}=\lim \sqrt{a_n+6}=\sqrt{(\lim a_n)+6}=\sqrt{L+6}$$
Solving for $L$ yields $L\in\{-2,3\}$, and since a lower bound was $3$, we know $\lim a_n=3$.



Hope this helped!
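A quick numerical illustration (not part of the original answer) of the monotone convergence to $3$:

```python
import math

a = 6.0  # a_1 = 6
for _ in range(50):
    a = math.sqrt(6 + a)  # a_{n+1} = sqrt(6 + a_n)
print(a)  # the iterates decrease toward 3
```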


combinatorics - Determine the number of strings that start and end with the same letter



I'm trying to figure out this question and this is my thought process right now. Any help would be appreciated because I feel like I'm doing something wrong.



Considering a string of 28 characters, how do we determine how many strings start and end with the same letter (lowercase only)? If I don't consider the first and last characters, then there are 26 characters left. Each character has the possibility of being one of the 26 possible letters in the alphabet. So is that 26 characters x 26 letters = 676? Then, since there are 26 letters in the alphabet, the first and last characters could be any one of the 26 possible letters. So 676 x 26 = 17576, which is the answer?


Answer




Not quite; the number of $26$ letter strings is $26^{26}$, not $26\cdot 26$. This is because we have $26$ choices for the first letter, times $26$ choices for the second letter, times $26$ choices for the third letter, and so on and so forth. Perhaps trying a few small cases would help you understand the intuition. But you are correct about the second part; once you determine the middle string, you simply multiply by $26$ to account for the first and last letter.
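The counting rule (one independent factor of $26$ per free position) can be checked by brute force on a smaller, hypothetical alphabet:

```python
from itertools import product

alphabet = "abc"          # toy alphabet of size k = 3
k, n = len(alphabet), 4   # strings of length 4
count = sum(1 for s in product(alphabet, repeat=n) if s[0] == s[-1])
assert count == k ** (n - 1)  # analogue of 26^27 for the original problem
print(count)
```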


calculus - Precise definition of limit on edge of function

Suppose we are to follow the definition of limit of epsilon delta.


How can we define a limit at the end of it interval?


for example


$f(x)=\sqrt{x}, x\in[0,\infty)$


$\lim_{x\to 0}f(x)$


do we say that the limit does not exist as $x\to 0$, since the very existence of such a $\delta$ would be impossible, as the function would not be defined at:


$0-\delta$

Sunday, October 20, 2019

real analysis - Is intermediate value property equivalent to Darboux property?



I always thought that a function $f:\mathbb{R} \to \mathbb{R}$ has the intermediate value property (IVP) iff it maps every interval to an interval (Darboux property):




Proof:




Let $f$ have the Darboux property and let $a<b$. Then $f([a,b])$ is an interval and so contains $[f(a),f(b)]$. If $u \in [f(a),f(b)]$ then of course $u \in f([a,b])$ and thus there exists some $k \in [a,b]$ such that $f(k) = u$, i.e. $f$ has the IVP.



Now let $f$ have the IVP and let $a < b$, $x,y \in f([a,b])$ and $z \in \mathbb{R}$ with $x < z < y$. We have $x = f(x')$ and $y = f(y')$ for some $x',y' \in [a,b]$. W.l.o.g. assume that $x' < y'$. Then by the IVP there is some $c\in [x', y']$ such that $f(c) = z$, i.e. $z \in f([a,b])$, and therefore $f([a,b])$ is an interval.




But on this blog, in problem 5, the author says that they are not equivalent:





"This [Darboux property] is slightly different from continuity and
intermediate value property. Continuity implies Darboux and Darboux
implies Intermediate value property."




Have I missed something and if yes, where does the proof given above break?


Answer



That is correct according to the definition of the intermediate value property saying that for all $a<b$, every value between $f(a)$ and $f(b)$ is attained at some point of $[a,b]$.

The blog's author Beni Bogoşel clarified in a comment that he was using a different definition of intermediate value property, meaning that the entire image is an interval. The ambiguity is understandable given that the Intermediate Value Theorem is often stated in terms of a particular interval: $f:[a,b]\to \mathbb R$ continuous implies $f([a,b])$ contains the interval $[\min\{f(a),f(b)\},\max\{f(a),f(b)\}]$. (In this case, because the restriction of a continuous function is also continuous, this theorem automatically implies the stronger intermediate value property for continuous functions on intervals.)




The author acknowledged that the convention is not universal, and he may edit to clarify.


calculus - How to find $\lim\limits_{x\to 0^+} \frac{1}{x} \int_0^{x} \sin(\frac{1}{t})\,\mathrm dt$




$\def\d{\mathrm{d}}$There was a hint in the book, use intregation by parts in this way: $$\lim_{x\to 0^+} \frac{1}{x} \int_0^{x} \sin\frac{1}{t} \,\d t = \lim_{x\to 0^+} \frac{1}{x} \int_0^{x} t^2 \left(\frac{1}{t^2} \sin\frac{1}{t}\right)\,\d t.$$



When we integrate by parts we find this integral:



$$\int_0^{x} t\cos\frac{1}{t}\,\d t .$$



Every symbolic calculator says this is a special function and gives the value of the integral in terms of $\mathrm{Ci}(x)$, but I'm limited to Calculus 1 knowledge. Any hint or help? Thanks in advance.


Answer



You don't need to compute the integral, you need only to show that the average is tending towards zero. Notice that




$$\left|\frac 1 x \int_0^x t \cos \frac 1 t \, dt\right| \le \frac 1 x \int_0^x t \, dt = \frac x 2 \to 0$$



as desired. Hence, the limit is zero.



P.S. Don't forget about the boundary terms when you integrate by parts. They're easy to deal with, but you still have to!






Also, just to be sure, here is a rough idea of why we could know the limit is zero in advance. The integral $\frac 1 x \int_0^x f(t) \, dt$ is the average of $f$ on $[0, x]$. If $f$ is oscillatory like $\sin 1 / t$ is, and spends an equal amount of time above and below the axis, the average is zero.
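The bound $\left|\frac 1 x \int_0^x t\cos\frac1t\,dt\right| \le \frac x2$ can also be checked numerically; this crude midpoint-rule sketch (mine, not from the answer) is adequate because the integrand is bounded by $|t|$ near $0$, so the unresolved oscillations there contribute little:

```python
import math

def avg_t_cos(x, n=100000):
    # midpoint Riemann sum for (1/x) * integral_0^x t*cos(1/t) dt;
    # near t = 0 the grid misses the oscillation, but |t*cos(1/t)| <= t,
    # so that region's contribution (and its error) is negligible
    h = x / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t * math.cos(1.0 / t) * h
    return total / x

for x in (0.4, 0.2, 0.1):
    print(x, avg_t_cos(x), "bound x/2 =", x / 2)
```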



calculus - Prove the limit is $\sqrt{e}$.





How do you show
$$\lim\limits_{k \rightarrow \infty} \frac{\left(2+\frac{1}{k}\right)^k}{2^k}=\sqrt{e}$$




I know that $$\lim\limits_{k \to \infty} \left(1+\frac{1}{k}\right)^k=e$$ but I don't know how to apply this.


Answer



Hint: $$\frac{a^k}{b^k}=\left(\frac ab\right)^k,$$ and in general, $$\lim_{k\to\infty}\left(1+\frac xk\right)^k=e^x.$$
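A quick numerical check (mine): rewriting the ratio as $\left(1+\frac{1}{2k}\right)^k$, exactly as the hint suggests, also avoids overflow for large $k$:

```python
import math

for k in (10, 1000, 100000):
    ratio = (1 + 1 / (2 * k)) ** k   # equals (2 + 1/k)^k / 2^k
    print(k, ratio)

print("sqrt(e) =", math.sqrt(math.e))
```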


real analysis - Infinite Series $\sum_{n=1}^\infty\frac{H_n}{n^2 2^n}$



How can I prove that
$$\sum_{n=1}^{\infty}\frac{H_n}{n^2 2^n}=\zeta(3)-\frac{1}{2}\log(2)\zeta(2).$$
Can anyone help me please?


Answer



Let's start with the product of $\;-\ln(1-x)\,$ and $\dfrac 1{1-x}$ to get the product generating function
(for $|x|<1$) :
$$\tag{1}f(x):=-\frac {\ln(1-x)}{1-x}=\sum_{n=1}^\infty H_n\, x^n$$
Dividing by $x$ and integrating we get :

\begin{align}
\sum_{n=1}^\infty \frac{H_n}n\, x^n&=\int \frac{f(x)}xdx\\
&=-\int \frac{\ln(1-x)}{1-x}dx-\int\frac{\ln(1-x)}xdx\\
\tag{2}&=C+\frac 12\ln(1-x)^2+\operatorname{Li}_2(x)\\
\end{align}
(with $C=0$ from $x=0$)
The first integral was obtained by integration by parts, the second from the integral definition of the dilogarithm or the recurrence for the polylogarithm (with $\;\operatorname{Li}_1(x)=-\ln(1-x)$) : $$\tag{3}\operatorname{Li}_{s+1}(x)=\int\frac {\operatorname{Li}_{s}(x)}x dx$$



Dividing $(2)$ by $x$ and integrating again returns (using $(3)$ again) :
\begin{align}
\sum_{n=1}^\infty \frac{H_n}{n^2}\, x^n&=\int \frac {\ln(1-x)^2}{2\,x}dx+\int \frac{\operatorname{Li}_2(x)}x dx\\
&=C+I(x)+\operatorname{Li}_3(x)\\
\end{align}
with $I(x)$ obtained by integration by parts (since $\frac d{dx}\operatorname{Li}_2(1-x)=\dfrac {\ln(x)}{1-x}$) :
\begin{align}
I(x)&:=\int \frac {\ln(1-x)^2}{2\,x}dx\\
&=\left.\frac{\ln(1-x)^2\ln(x)}{2}\right|+\int \ln(1-x)\frac {\ln(x)}{1-x}dx\\
&=\left.\frac{\ln(1-x)^2\ln(x)}{2}+\ln(1-x)\operatorname{Li}_2(1-x)\right|+\int \frac{\operatorname{Li}_2(1-x)}{1-x}dx\\
&=\left.\frac{\ln(1-x)^2\ln(x)}{2}+\ln(1-x)\operatorname{Li}_2(1-x)-\operatorname{Li}_3(1-x)\right|\\
\end{align}
getting the general relation :

$$\tag{4}\sum_{n=1}^\infty \frac{H_n}{n^2}\, x^n=C+\frac{\ln(1-x)^2\ln(x)}{2}+\ln(1-x)\operatorname{Li}_2(1-x)+\operatorname{Li}_3(x)-\operatorname{Li}_3(1-x)$$
(with $C=\operatorname{Li}_3(1)=\zeta(3)$ here)
Applying this at $x=\dfrac 12$, with $\operatorname{Li}_2\left(\frac 12\right)=\dfrac{\zeta(2)-\ln(2)^2}2$ (from the link), yields the desired result:
\begin{align}
\sum_{n=1}^\infty \frac{H_n}{n^2\;2^n}&=\zeta(3)-\frac{\ln(2)^3}2-\ln(2)\frac{\zeta(2)-\ln(2)^2}2\\
\tag{5}\sum_{n=1}^\infty \frac{H_n}{n^2\;2^n}&=\zeta(3)-\ln(2)\frac{\zeta(2)}2
\end{align}
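The closed form $(5)$ can be verified numerically (a quick check, not part of the derivation):

```python
import math

zeta2 = math.pi ** 2 / 6
zeta3 = sum(1 / n ** 3 for n in range(1, 200001))  # truncation error ~1e-11

H = 0.0  # harmonic numbers H_n, accumulated incrementally
s = 0.0
for n in range(1, 200):
    H += 1.0 / n
    s += H / (n ** 2 * 2 ** n)

closed_form = zeta3 - 0.5 * math.log(2) * zeta2
print(s, closed_form)
```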


elementary number theory - Product of four consecutive integers cannot equal the product of two

Question 5




Prove that the product of four consecutive positive integers cannot be
equal to the product of two consecutive positive integers.




So it must equal $n(n+1)(n+2)(n+3)$, hence it must be divisible by $24$ (among four consecutive integers there is a multiple of $4$, a second even number, and a multiple of $3$). This must equal $k(k+1)$. As $k$ and $k+1$ are coprime, either $k$ or $k+1$ is divisible by $24$, or one is divisible by $8$ and the other by $3$.



I've run out of ideas since nothing pops out at me and the factorisations don't seem revealing. I also recognised the two formulae as $2\times$ the triangular numbers and $24\times$ the sums of sums of triangular numbers, both appearing in Pascal's triangle. But that's more of an interesting observation than something useful in a proof.




I would appreciate a pointer how to proceed.
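No proof is given in this post, but the claim can at least be confirmed by brute force for small $n$ (my own check, not from the question):

```python
import math

def is_two_consecutive_product(p):
    # does p equal k*(k+1) for some positive integer k?
    k = (math.isqrt(1 + 4 * p) - 1) // 2
    return any(j * (j + 1) == p for j in (k - 1, k, k + 1))

counterexamples = [n for n in range(1, 5000)
                   if is_two_consecutive_product(n * (n + 1) * (n + 2) * (n + 3))]
print(counterexamples)  # no counterexamples in this range
```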

Saturday, October 19, 2019

Finding modular of a fraction

I'm really into cryptography, and to find the private key of a message I need to use modular arithmetic. I understand modular arithmetic with whole numbers, using a clock. But I get really stuck when I get to fractions, for example:





1/3 mod 8




How do I find the value of a fraction under a modulus? Is there a method for this?



Thanks in advance!
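No answer is recorded in this post, but as a hedged sketch of the standard interpretation: a fraction $a/b \pmod m$ means $a\cdot b^{-1} \pmod m$, where $b^{-1}$ is the modular inverse of $b$ — the number with $b\cdot b^{-1} \equiv 1 \pmod m$ — which exists whenever $\gcd(b,m)=1$. In Python (3.8+), three-argument `pow` computes it:

```python
inv3 = pow(3, -1, 8)   # modular inverse of 3 mod 8
print(inv3)            # 3, since 3*3 = 9 ≡ 1 (mod 8)
print((1 * inv3) % 8)  # hence 1/3 ≡ 3 (mod 8)
```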

calculus - Solve a limit without L'Hopital: $\lim_{x\to0} \frac{\ln(\cos5x)}{\ln(\cos7x)}$

I need to solve this limit without using L'Hopital's rule. I have attempted it countless times, yet I always seem to end up with an expression that's far more complicated than the one I started with.



$$ \lim_{x\to0} \frac{\ln(\cos5x)}{\ln(\cos7x)}$$




Could anyone explain me how to solve this without using L'Hopital's rule ? Thanks!
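No solution is recorded here, but the small-angle expansion $\ln\cos(ax) = -\frac{a^2x^2}{2}+O(x^4)$ suggests the limit should be $\frac{25}{49}$; a quick numerical check (mine, not from the post):

```python
import math

def ratio(x):
    return math.log(math.cos(5 * x)) / math.log(math.cos(7 * x))

for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))

print("25/49 =", 25 / 49)
```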

calculus - How to calculate limit?


I'm puzzled by this limit. The answer is $-0.5$, but how do I get it? $\lim_\limits{x \to \infty}1-x+\sqrt{\frac{x^3}{x+3}}$


Answer




I tried multiplying by the conjugate of the whole expression, not only the part with the radical, and this way I can calculate it without substitution or L'Hospital's Rule.


$\lim_\limits{x \to \infty} 1-x+\sqrt{\frac{x^3}{x+3}}=\lim_\limits{x \to \infty}\frac{\frac{x^3}{x+3}-(x-1)^2}{\sqrt{\frac{x^3}{x+3}}+x-1} = \lim_\limits{x \to \infty}\frac{-x^2+5x-3}{(x+3)\left(\frac{\sqrt{x^3}+\sqrt{x+3}(x-1)}{\sqrt{x+3}}\right)}=\lim_\limits{x \to \infty}\frac{-x^2+5x-3}{\sqrt{x^4+3x^3}+(x+3)(x-1)}$


Now we can easily see that the coefficient of $x^2$ is $-1$ in the numerator and $2$ in the denominator, so the result is $-\frac{1}{2}$.
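The result can be confirmed numerically (a quick check, not part of the answer):

```python
import math

def f(x):
    return 1 - x + math.sqrt(x ** 3 / (x + 3))

for x in (1e2, 1e4, 1e6):
    print(x, f(x))  # values approach -0.5
```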


Proof by induction that $\frac1{n+1}+ \frac1{n+2}+\cdots+\frac1{2n}=1-\frac{1}{2}+\cdots+\frac{1}{2n-1}-\frac{1}{2n}.$




Prove that for any positive integer $n$, $$\frac1{n+1}+ \frac1{n+2}+\cdots+\frac1{2n} = \left(1-\frac1{2}\right)+\left(\frac1{3}-\frac1{4}\right)+\cdots+\left(\frac1{2n-1}-\frac1{2n}\right).$$



I have tried using a proof by induction but do not know how to approach the series.



Note: This identity may be established by a clever algebraic manipulation, as seen here, but I am curious as to how an inductive proof might work.


Answer



For each $n\geq 1$, let $S(n)$ denote the statement
$$
S(n) : 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots+\frac{1}{2n-1}-\frac{1}{2n}=\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n}.
$$
Note that the left-hand side constitutes the first $2n-1$ terms of what is called the alternating harmonic series.



Base step ($n=1$): Notice that the left side of $S(n)$ has denominators which range from $1$ to $2n$, whereas the denominators on the right range from $n+1$ to $2n$. Thus, for $n=1$, the denominators on the left range from $1$ to $2$, whereas on the right, they range from $1+1=2$ to $2\cdot 1=2$; that is, there is only one term on the right. Consequently, $S(1)$ says that $1-\frac{1}{2}=\frac{1}{2}$, and this is true.



Inductive step: For some fixed $k\geq 1$, assume the inductive hypothesis $S(k)$
$$
S(k) : 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots+\frac{1}{2k-1}-\frac{1}{2k}=\frac{1}{k+1}+\frac{1}{k+2}+\cdots+\frac{1}{2k}
$$
to be true. We must then show that $S(k+1)$ follows:

$$
S(k+1) : \underbrace{1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots+\frac{1}{2k+1}-\frac{1}{2k+2}}_{\text{LHS}}=\underbrace{\frac{1}{k+2}+\frac{1}{k+3}+\cdots+\frac{1}{2k+2}}_{\text{RHS}}.
$$
Starting with the left-hand side of $S(k+1)$ (and filling two more penultimate terms),
\begin{align}
\text{LHS} &= 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots+\frac{1}{2k-1}-\frac{1}{2k}+\frac{1}{2k+1}-\frac{1}{2k+2}\\[1em]
&= \frac{1}{k+1}+\frac{1}{k+2}+\cdots+\frac{1}{2k}+\frac{1}{2k+1}-\frac{1}{2k+2}\tag{by $S(k)$}\\[1em]
&= \frac{1}{k+2}+\cdots+\frac{1}{2k}+\frac{1}{2k+1}+\frac{1}{k+1}-\frac{1}{2k+2}\\[1em]
&= \frac{1}{k+2}+\cdots+\frac{1}{2k}+\frac{1}{2k+1}+\frac{2}{2k+2}-\frac{1}{2k+2}\\[1em]
&= \frac{1}{k+2}+\cdots+\frac{1}{2k}+\frac{1}{2k+1}+\frac{1}{2k+2}\\[1em]
&= \text{RHS},
\end{align}
we see that the right-hand side of $S(k+1)$ follows. This completes the inductive step $S(k)\to S(k+1)$.



Hence, by mathematical induction, for all $n\geq 1, S(n)$ is true. $\blacksquare$
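The identity can also be verified exactly for small $n$ with rational arithmetic (a quick check, separate from the induction proof):

```python
from fractions import Fraction

def lhs(n):
    # alternating harmonic partial sum 1 - 1/2 + 1/3 - ... - 1/(2n)
    return sum(Fraction((-1) ** (k + 1), k) for k in range(1, 2 * n + 1))

def rhs(n):
    # 1/(n+1) + 1/(n+2) + ... + 1/(2n)
    return sum(Fraction(1, k) for k in range(n + 1, 2 * n + 1))

assert all(lhs(n) == rhs(n) for n in range(1, 40))
print(lhs(3), rhs(3))
```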


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...