Sunday, December 31, 2017

gamma function - Sum of series $\frac{4}{10}+\frac{4\cdot 7}{10\cdot 20}+\frac{4\cdot 7 \cdot 10}{10\cdot 20 \cdot 30}+\cdots$





Sum of the series



$$\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots $$







But I want to solve it using the Beta and Gamma functions.




My process is as follows.



Let $$S = \frac{1}{10}\bigg[\frac{ 4}{1}+\frac{4\cdot 7}{1\cdot 2}+\frac{4\cdot 7 \cdot 10}{1\cdot 2 \cdot 3}+\cdots \cdots \bigg]$$



Let $\displaystyle a_{n} = \prod^{n}_{k=1}(3k+1)=3^n \prod^{n}_{k=1}\bigg(k+\frac{1}{3}\bigg)$



$\displaystyle =3^n\Gamma\bigg(n+\frac{4}{3}\bigg)\cdot \frac{1}{\Gamma \bigg(\frac{4}{3}\bigg)}$



And Let $\displaystyle b_{n} = \prod^{n}_{k=1}k = \Gamma (n+1)$




So $\displaystyle S =\frac{1}{10} \sum^{\infty}_{n=1}3^n \frac{\Gamma \bigg(n+\frac{4}{3}\bigg)\cdot \Gamma \bigg(\frac{2}{3}\bigg)}{\Gamma \bigg(\frac{2}{3}\bigg)\Gamma \bigg(\frac{4}{3}\bigg)\cdot \Gamma(n+1)}$



using $\displaystyle \Gamma(x)\cdot \Gamma(1-x) = \frac{\pi}{\sin (\pi x)}$ and



$\displaystyle \frac{\Gamma (a) \Gamma(b)}{\Gamma(a+b)} =B(a,b)= \int^{1}_{0}x^{a-1}(1-x)^{b-1}dx$



So $\displaystyle S = \frac{1}{10}\sum^{\infty}_{n=1}3^n\int^{1}_{0}x^{n+\frac{1}{3}}(1-x)^{-\frac{1}{3}}dx$



$\displaystyle =\frac{1}{10}\int^{1}_{0}\frac{3x}{1-3x}\cdot \bigg(\frac{x}{1-x}\bigg)^{\frac{1}{3}}dx$




Now put $\displaystyle \frac{x}{1-x} = t\Rightarrow x = \frac{t}{1+t} = 1-\frac{1}{1+t}$



So $\displaystyle S = \frac{3}{10}\int^{\infty}_{0}\frac{t^{\frac{4}{3}}}{(2t-1)(1+t)^2}dt$



Could someone help me solve it? Thanks.



although it has been solved here


Answer



For a complex number $z$ such that $|z|<1$, let

$$f(z):=\sum_{n=0}^\infty\,\prod_{k=1}^n\,\left(\frac{k+\frac{1}{3}}{k}\right)\,z^n\,.$$
Then, the required sum $S:=\displaystyle\sum_{n=1}^\infty\,\prod_{k=1}^n\,\left(\frac{3k+1}{10k}\right)$ satisfies
$$S=f\left(\frac{3}{10}\right)-1\,.$$
We claim that $f(z)=(1-z)^{-\frac43}$ for all complex numbers $z$ such that $|z|<1$. As a consequence,
$$S=\left(\frac{7}{10}\right)^{-\frac43}-1=\left(\frac{10}{7}\right)^{\frac43}-1\approx0.608926\,.$$
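Before the proof, a quick numerical sanity check (a Python sketch; the variable names are mine): the partial sums of the original series do approach $(10/7)^{4/3}-1$.

```python
# Partial sums of  sum_{n>=1} prod_{k=1..n} (3k+1)/(10k)
term, s = 1.0, 0.0
for k in range(1, 200):          # terms shrink like (3/10)^n, so 200 is plenty
    term *= (3 * k + 1) / (10 * k)
    s += term

closed_form = (10 / 7) ** (4 / 3) - 1
print(s, closed_form)  # both ≈ 0.608926
```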



Note that, for $n=0,1,2,3,\ldots$,
$$\begin{align}t_n&:=\prod_{k=1}^n\,\left(\frac{k+\frac{1}{3}}{k}\right)\\&=\frac{\Gamma\left(n+\frac43\right)}{\Gamma\left(\frac43\right)\,\Gamma(n+1)}\\&=\frac{(n+1)\,\Gamma\left(n+\frac43\right)}{\Gamma\left(\frac43\right)\,\Gamma(n+2)}\,,\end{align}$$
where $\Gamma$ is the usual gamma function.
That is,

$$t_n=\frac{n+1}{\Gamma\left(\frac43\right)\,\Gamma\left(\frac23\right)}\,\text{B}\left(n+\frac43,\frac23\right)\,,$$
where $\text{B}$ is the usual beta function.
From the identity
$$\Gamma(x)\,\Gamma(1-x)=\frac{\pi}{\sin(\pi\,x)}\,,$$
we conclude that
$$\Gamma\left(\frac43\right)\,\Gamma\left(-\frac13\right)=\frac{\pi}{\sin\left(\frac{4\pi}{3}\right)}=-\frac{2\pi}{\sqrt{3}}\,.$$
Ergo,
$$\Gamma\left(\frac43\right)\,\Gamma\left(\frac23\right)=\left(-\frac13\right)\,\left(-\frac{2\pi}{\sqrt{3}}\right)=\frac{2\pi}{3\sqrt{3}}\,.$$
In other words,
$$t_n=\frac{3\sqrt{3}}{2\pi}\,(n+1)\,\int_0^1\,u^{n+\frac13}\,(1-u)^{-\frac13}\,\text{d}u\,.$$




We now obtain
$$f(z)=\frac{3\sqrt{3}}{2\pi}\,\int_0^1\,\left(\sum_{n=0}^\infty\,(n+1)\,u^{n+\frac13}\,z^n\right)\,(1-u)^{-\frac13}\,\text{d}u\,.$$
This gives
$$f(z)=\frac{3\sqrt{3}}{2\pi}\,\int_0^1\,\frac{u^{\frac13}\,(1-u)^{-\frac13}}{(1-zu)^2}\,\text{d}u\,.$$
Let $v:=\dfrac{u}{1-u}$. Then,
$$f(z)=\frac{3\sqrt{3}}{2\pi}\,\int_0^\infty\,\frac{v^{\frac13}}{\big(1+(1-z)v\big)^2}\,\text{d}v\,.$$



The integral
$$J:=\int_0^\infty\,\frac{v^{\frac13}}{\big(1+(1-z)v\big)^2}\,\text{d}v$$

can be evaluated using the same keyhole contour as in this thread. It is not difficult to see that
$$\begin{align}J&=\left(\frac{2\pi\text{i}}{1-\exp\left(\frac{2\pi\text{i}}{3}\right)}\right)\,\text{Res}_{v=-\frac{1}{1-z}}\left(\frac{v^{\frac13}}{\big(1+(1-z)v\big)^2}\right)
\\&=-\frac{\pi\,\exp\left(-\frac{\pi\text{i}}{3}\right)}{\sin\left(\frac{\pi}{3}\right)}\,\Biggl(-\frac{\exp\left(\frac{\pi\text{i}}{3}\right)}{3}\,(1-z)^{-\frac43}\Biggr)
\\&=\frac{2\pi}{3\sqrt{3}}\,(1-z)^{-\frac{4}{3}}\,.\end{align}$$
That is,
$$f(z)=(1-z)^{-\frac43}\text{ for all }z\in\mathbb{C}\text{ such that }|z|<1\,.$$



Well, the OP requested a long and cumbersome solution. The result can be easily proven via the binomial theorem:
$$(1-z)^{-\frac43}=\sum_{n=0}^\infty\,\binom{-\frac43}{n}\,(-1)^n\,z^n=\sum_{n=0}^\infty\,\prod_{k=1}^n\,\left(\frac{k+\frac13}{k}\right)\,z^n$$
for every complex number $z$ with $|z|<1$.
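The coefficient identity behind the binomial-theorem argument can be checked term by term (a sketch; `binom` is a small helper of mine for the generalized binomial coefficient):

```python
from math import prod

def binom(a, n):
    # generalized binomial coefficient C(a, n) for real a
    out = 1.0
    for j in range(n):
        out *= (a - j) / (j + 1)
    return out

# (-1)^n * C(-4/3, n) equals prod_{k=1..n} (k + 1/3)/k
for n in range(12):
    lhs = binom(-4 / 3, n) * (-1) ** n
    rhs = prod((k + 1 / 3) / k for k in range(1, n + 1))
    assert abs(lhs - rhs) < 1e-9
```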



calculus - Are there any known methods to compute series $\sum_{k=0}^{\infty}2^k \big(\sum_{n=0}^{\infty}\frac{(-1)^n}{(n2^{k+1}+(2k+1))^3}\big)$?



I would like to ask if there are any known methods to compute series like this one:
$$\sum_{k=0}^{\infty}2^k \bigg(\sum_{n=0}^{\infty}\frac{(-1)^n}{(n2^{k+1}+(2k+1))^3}\bigg)$$
And their names, so I can look for them if they exist.
I never studied double sums before; that's why I am asking. Thanks in advance.



Answer



We have



$\displaystyle \sum\limits_{k=0}^\infty 2^k \sum\limits_{n=0}^\infty \frac{(-1)^n}{(n2^{k+1}+2k+1)^3} =\sum\limits_{k=0}^\infty \frac{2^k}{(2k+1)^3} - \sum\limits_{k=0}^\infty \left(\frac{1}{2^{2k+3}}\sum\limits_{n=1}^\infty \frac{(-1)^{n-1}}{(n+\frac{2k+1}{2^{k+1}})^3}\right)$



and



$\displaystyle\sum\limits_{n=1}^\infty \frac{(-1)^{n-1}}{(n+a)^3}=\frac{1}{8}\left(\zeta(3,\frac{1}{2}+\frac{a}{2})-\zeta(3,1+\frac{a}{2})\right)\,$ . $\hspace{1cm}$ (see: Hurwitz zeta function)



It follows




$\displaystyle\sum\limits_{k=0}^\infty \frac{1}{2^{2k+3}}\sum\limits_{n=1}^\infty \frac{(-1)^{n-1}}{(n+\frac{2k+1}{2^{k+1}})^3}=\sum\limits_{k=0}^\infty \frac{1}{2^{2k+6}}\left(\zeta(3,\frac{1}{2}+\frac{2k+1}{2^{k+2}})-\zeta(3,1+\frac{2k+1}{2^{k+2}})\right)$



$\hspace{5.5cm}\approx 0.05957499...$



$\displaystyle\sum\limits_{k=0}^\infty \frac{2^k}{(2k+1)^3}$ is divergent because $\displaystyle\left(\frac{2^k}{(2k+1)^3}\right)_k$ is not a null sequence and



$\displaystyle\sum\limits_{k=0}^\infty \left(\frac{1}{2^{2k+3}}\sum\limits_{n=1}^\infty \frac{(-1)^{n-1}}{(n+\frac{2k+1}{2^{k+1}})^3}\right)$ is convergent therefore the initial series is divergent.
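The divergence is also easy to see numerically, since the terms $2^k/(2k+1)^3$ grow without bound (a quick sketch):

```python
# General term of the first series: 2^k / (2k+1)^3
terms = [2 ** k / (2 * k + 1) ** 3 for k in range(40)]

# The terms eventually increase without bound, so the series cannot converge.
assert all(terms[k + 1] > terms[k] for k in range(10, 39))
assert terms[30] > 1000
```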


complex analysis - non constant bounded holomorphic function on some open set



this is an exercise I came across in Rudin's "Real and complex analysis" Chapter 16.




Suppose $\Omega$ is the complement set of $E$ in $\mathbb{C}$, where $E$ is a compact set with positive Lebesgue measure in the real line.



Does there exist a non-constant bounded holomorphic function on $\Omega$?



In particular, do this for $E=[-1,1]$.






Some observations:




Suppose there exists such a function $f$. Then WLOG we may assume $f$ has no zeros in $\Omega$, by adding a large enough positive constant. Then, $\Omega$ being simply connected implies $\int_{\gamma}f\,dz=0$ for any closed curve $\gamma\subset \Omega$. How do we deduce a contradiction?


Answer



Reading Exercise 8 of Chapter 16, I imagine Rudin interrogating the reader.




Let $E\subset\mathbb R$ be a compact set of positive measure, let $\Omega=\mathbb C\setminus E$, and define $f(z)=\int_E \frac{dt}{t-z}$. Now answer me!
a) Is $f$ constant?
b) Can $f$ be extended to an entire function?
c) Does $zf(z)$ have a limit at $\infty$, and if so, what is it?
d) Is $\sqrt{f}$ holomorphic in $\Omega$?
e) Is $\operatorname{Re}f$ bounded in $\Omega$? (If yes, give a bound)
f) Is $\operatorname{Im}f$ bounded in $\Omega$? (If yes, give a bound)
g) What is $\int_\gamma f(z)\,dz$ if $\gamma$ is a positively oriented loop around $E$?
h) Does there exist a nonconstant bounded holomorphic function on $\Omega$?




Part h) appears to come out of the blue, especially since $f$ is not bounded: we found that in part (e). But it is part (f) that's relevant here: $\operatorname{Im}f$ is indeed bounded in $\Omega$ (Hint: write it as a real integral, notice that the integrand has constant sign, extend the region of integration to $\mathbb R$, and evaluate directly). Therefore, $f$ maps $\Omega$ to a horizontal strip. It's a standard exercise to map this strip onto a disk by some conformal map $g$, thus obtaining a bounded function $g\circ f$.
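For a concrete illustration with $E=[-1,1]$ (a numerical sketch; the grid size is an arbitrary choice of mine): writing $z=x+iy$, we have $\operatorname{Im}f(z)=\int_E \frac{y}{(t-x)^2+y^2}\,dt$, which the hint bounds by $\int_{\mathbb R}\frac{y}{(t-x)^2+y^2}\,dt=\pi$ for $y>0$.

```python
import math

def im_f(x, y, n=200_000):
    # Midpoint rule for Im f(z) = ∫_{-1}^{1} y / ((t - x)^2 + y^2) dt, z = x + iy
    h = 2.0 / n
    return h * sum(y / ((-1 + (j + 0.5) * h - x) ** 2 + y ** 2) for j in range(n))

# Im f stays in (0, π) throughout the upper half-plane
for x, y in [(0.0, 0.1), (0.5, 0.01), (2.0, 1.0)]:
    assert 0 < im_f(x, y) < math.pi

# Exact value is arctan((1-x)/y) + arctan((1+x)/y)
assert abs(im_f(0.0, 0.01) - 2 * math.atan(100)) < 1e-3
```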


calculus - How to find the closed form of the integral $\int_{0}^{\infty}f(x)\frac{\sin^nx}{x^m}dx$

where $f(x)$ is an even function that is also periodic with period $\pi$, and $n,m\in\Bbb N$ with $n+m$ an odd number.
When $n+m$ is even, I already have a solution, but when $n+m$ is odd, I don't know how to solve it. I'd appreciate it if someone could help me with it.




When $n+m$ is even, my answer is as follows:



If $f(x)$ is an even function with period $\pi$, we have:
$$\int_{0}^\infty f(x)\frac{\sin^nx}{x^m}dx=\int_{0}^\frac{\pi}{2}f(x)g_m(x)\sin^nxdx \qquad (1)$$



$n,m\in\Bbb N$,
where $n+m$ is even, and $g_m(x)$ in (1) is as follows:
$$g_m(x)=\begin{cases}\frac{(-1)^{m-1}}{(m-1)!}\frac{d^{m-1}}{dx^{m-1}}\left(\csc x\right),& \text{for odd } n,\\[2ex]
\frac{(-1)^{m-1}}{(m-1)!}\frac{d^{m-1}}{dx^{m-1}}\left(\cot x\right),& \text{for even } n.
\end{cases}$$

——————————————————————————————————————————————————
Proof:
\begin{align}
\int_{0}^\infty f(x)\frac{\sin^nx}{x^m}dx&=\sum_{k=0}^\infty\int_{k\pi}^{(2k+1)\frac{\pi}{2}}f(x)\left(\frac{\sin ^nx}{x^m}\right)dx+\sum_{k=1}^\infty\int_{(2k-1)\frac{\pi}{2}}^{k\pi}f(x)\left(\frac{\sin^n x}{x^m}\right)dx\\
&=\sum_{k=0}^\infty\int_{0}^{\frac{\pi}{2}}f(x+k\pi)\left(\frac{\sin^n (x+k\pi)}{(x+k\pi)^m}\right)dx+\sum_{k=1}^\infty\int_{-\frac{\pi}{2}}^{0}f(x+k\pi)\left(\frac{\sin^n (x+k\pi)}{(x+k\pi)^m}\right)dx\\
&=\sum_{k=0}^\infty(-1)^{nk}\int_{0}^{\frac{\pi}{2}}f(x)\left(\frac{\sin^n x}{(x+k\pi)^m}\right)dx+\sum_{k=1}^\infty(-1)^{nk+n+m}\int_{0}^{\frac{\pi}{2}}f(-x)\left(\frac{\sin^n x}{(x-k\pi)^m}\right)dx\\
&=\int_{0}^{\frac{\pi}{2}}f(x)\sin^nx\left(\frac{1}{x^m}+\sum_{k=1}^\infty(-1)^{nk}\left[\frac{1}{(x+k\pi)^m}+\frac{(-1)^{n+m}}{(x-k\pi)^m}\right]\right)dx\\
&=\int_{0}^{\frac{\pi}{2}}f(x)\sin^nxg_m(x)dx
\end{align}
When $n+m$ is even, we know from the partial fraction expansions

\begin{align}
\csc x&=\frac{1}{x}+\sum_{k=1}^\infty(-1)^k\left(\frac{1}{x+k\pi}+\frac{1}{x-k\pi}\right)\\
\end{align}
and
\begin{align}
\cot x&=\frac{1}{x}+\sum_{k=1}^\infty\left(\frac{1}{x+k\pi}+\frac{1}{x-k\pi}\right)
\end{align}
Taking the $(m-1)$-th order derivative, we obtain $g_m(x)$.
——————————————————————————————————————————————————
Example:

\begin{align}
(1.)\qquad\int_{0}^{\infty}\frac{\sin^3x}{x}dx&=\int_{0}^{\frac{\pi}{2}}\sin^2xg_1(x)\sin xdx\\
&=\int_{0}^{\frac{\pi}{2}}\sin^2x\frac{1}{\sin x}\sin xdx\\
&=\int_{0}^{\frac{\pi}{2}}\sin^2xdx\\
&=\frac{\pi}{4}\\
\end{align}
\begin{align}
(2.)
\int_{0}^{\infty}(1+\cos^2x)\frac{\sin^2x}{x^2}dx
&=\int_{0}^{\frac{\pi}{2}}(1+\cos^2x)g_2(x)\sin^2xdx\\

&=\int_{0}^{\frac{\pi}{2}}(1+\cos^2x)\left(-\frac{d}{dx}\cot x\right)\sin^2xdx\\
&=\int_{0}^{\frac{\pi}{2}}(1+\cos^2x)\left(\frac{1}{\sin^2x}\right)\sin^2xdx\\
&=\int_{0}^{\frac{\pi}{2}}(1+\cos^2x)dx\\
&=\frac{\pi}{2}+\frac{\pi}{4}=\frac{3\pi}{4}\\
\end{align}
\begin{align}
(3.)
\int_{0}^{\infty}\frac{1}{(1+\cos^2x)}\frac{\sin^3x}{x^3}dx
&=\int_{0}^{\frac{\pi}{2}}\frac{\sin^3x}{(1+\cos^2x)}g_3(x)dx\\
&=\int_{0}^{\frac{\pi}{2}}\frac{\sin^3x}{(1+\cos^2x)}\left(\frac{1}{2}\frac{d^2}{dx^2}(\csc x)\right)dx\\

&=\int_{0}^{\frac{\pi}{2}}\frac{\sin^3x}{(1+\cos^2x)}\frac{(1+\cos^2x)}{2\sin^3x}dx\\
&=\int_{0}^{\frac{\pi}{2}}\frac{1}{2}dx=\frac{\pi}{4}\\
\end{align}
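Example (1) can be checked numerically: the partial integral $\int_0^X \frac{\sin^3x}{x}\,dx$ has a tail of size $O(1/X)$, so a rough midpoint-rule sketch (my choice of $X$ and step count) suffices.

```python
import math

# Midpoint rule for ∫_0^X sin(x)^3 / x dx; the tail beyond X is O(1/X)
X, n = 2000.0, 800_000
h = X / n
total = h * sum(math.sin((j + 0.5) * h) ** 3 / ((j + 0.5) * h) for j in range(n))
print(total, math.pi / 4)
```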

definite integrals - Real-Analysis Methods to Evaluate $\int_0^\infty \frac{x^a}{1+x^2}\,dx$ for $|a|<1$




In THIS ANSWER, I used straightforward contour integration to evaluate the integral $$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{x^a}{1+x^2}\,dx=\frac{\pi}{2}\sec\left(\frac{\pi a}{2}\right)}$$for $|a|<1$.





An alternative approach is to enforce the substitution $x\to e^x$ to obtain



$$\begin{align}
\int_0^\infty \frac{x^a}{1+x^2}\,dx&=\int_{-\infty}^\infty \frac{e^{(a+1)x}}{1+e^{2x}}\,dx\\\\
&=\int_{-\infty}^0\frac{e^{(a+1)x}}{1+e^{2x}}\,dx+\int_{0}^\infty\frac{e^{(a-1)x}}{1+e^{-2x}}\,dx\\\\
&=\sum_{n=0}^\infty (-1)^n\left(\int_{-\infty}^0 e^{(2n+1+a)x}\,dx+\int_{0}^\infty e^{-(2n+1-a)x}\,dx\right)\\\\
&=\sum_{n=0}^\infty (-1)^n \left(\frac{1}{2n+1+a}+\frac{1}{2n+1-a}\right)\\\\
&=2\sum_{n=0}^\infty (-1)^n\left(\frac{2n+1}{(2n+1)^2-a^2}\right) \tag 1\\\\
&=\frac{\pi}{2}\sec\left(\frac{\pi a}{2}\right)\tag 2

\end{align}$$
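The step from $(1)$ to $(2)$ is easy to test numerically, e.g. at $a=0.4$ (a sketch; since the series is alternating with decreasing terms, the truncation error is at most the first omitted term, about $1/N$):

```python
import math

a = 0.4
N = 500_000
s = 2 * sum((-1) ** n * (2 * n + 1) / ((2 * n + 1) ** 2 - a ** 2) for n in range(N))
closed = (math.pi / 2) / math.cos(math.pi * a / 2)
print(s, closed)
```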



Other possible ways forward include writing the integral of interest as



$$\begin{align}
\int_0^\infty \frac{x^a}{1+x^2}\,dx&=\int_{0}^1 \frac{x^{a}+x^{-a}}{1+x^2}\,dx
\end{align}$$



and proceeding similarly, using $\frac{1}{1+x^2}=\sum_{n=0}^\infty (-1)^nx^{2n}$.





Without appealing to complex analysis, what are other approaches one can use to evaluate this very standard integral?




EDIT:




Note that we can show that $(1)$ is the partial fraction representation of $(2)$ using Fourier series analysis. I've included this development for completeness in the appendix of the solution I posted on THIS PAGE.



Answer




I'll assume $\lvert a\rvert < 1$. Letting $x = \tan \theta$, we have



$$\int_0^\infty \frac{x^a}{1 + x^2}\, dx = \int_0^{\pi/2}\tan^a\theta\, d\theta = \int_0^{\pi/2} \sin^a\theta \cos^{-a}\theta\, d\theta$$



The last integral is half the beta integral $B\left(\frac{a+1}{2}, \frac{1-a}{2}\right)$. Thus



$$\int_0^{\pi/2}\sin^a\theta\, \cos^{-a}\theta\, d\theta = \frac{1}{2}\frac{\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right)}{\Gamma\left(\frac{a+1}{2} + \frac{1-a}{2}\right)} = \frac{1}{2}\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right)$$



By Euler reflection,




$$\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right) = \pi \csc\left[\pi\left(\frac{1+a}{2}\right)\right] = \pi \sec\left(\frac{\pi a}{2}\right)$$



and the result follows.



Edit: For a proof of Euler reflection without contour integration, start with the function $f(x) = \int_0^\infty u^{x-1}(1 + u)^{-1}\, du$, and show that $f$ solves the differential equation $y''y - (y')^2 = y^4$, $y(1/2) = \pi$, $y'(1/2) = 0$. The solution is $\pi \csc \pi x$. On the other hand, $f(x)$ is the beta integral $B(x,1-x)$, which is equal to $\Gamma(x)\Gamma(1-x)$. I believe this method is due to Dedekind.


probability - Expected value from popping bubbles




This is a fairly simple math problem from a programming challenge but I'm having trouble wrapping my head around it. What follows is a paraphrase of the problem:



Kundu has a sheet of bubble wrap, and like all of us she likes popping it.
The bubble wrap has dimensions NxM, i.e. it has N rows and each row has
M cells, each containing a bubble. Initially all bubbles are filled with air and
can be popped.


What Kundu does is randomly pick one cell and try to pop it;
the bubble she selects might already be popped,
in which case she has to ignore it. Both of these steps take 1 second of
time. Find the total expected number of seconds in which Kundu would
be able to pop them all.


So I know that the expected value is the sum of the product of the random variable x and the probability of that instance of x occurring but I'm having trouble parameterising the problem statement. What's the x and how does time play into it?


Answer



One approach, which I like because it is rather general, is to use a Markov chain.




There are $B$ total bubbles. After $t$ attempts, $U(t)$ of them are left. She pops a bubble with probability $U(t)/B$ and finds a popped bubble with probability $1-U(t)/B$. Call $\tau$ the random number of attempts that it takes to pop all the bubbles. Let $F(u)=E_u(\tau)$, where $E_u$ means that we take the expectation with $U(0)=u$. Then by the total expectation formula



$$F(u)=(F(u-1)+1)(u/B)+(F(u)+1)(1-u/B).$$



This is coupled to the natural boundary condition $F(0)=0$. This recurrence for $F$ turns out to be very easy to solve, as you can see by rearranging it. Your solution is $F(B)$.
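For the record, rearranging the recurrence gives $F(u)=F(u-1)+B/u$, hence $F(B)=B\sum_{u=1}^{B}\frac1u$. A minimal sketch with exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction

def expected_seconds(B):
    # F(u) = (F(u-1) + 1) * (u/B) + (F(u) + 1) * (1 - u/B), F(0) = 0,
    # rearranges to F(u) = F(u-1) + B/u, so F(B) = B * (1 + 1/2 + ... + 1/B)
    F = Fraction(0)
    for u in range(1, B + 1):
        F += Fraction(B, u)
    return F

assert expected_seconds(2) == 3       # B=2: 2 * (1 + 1/2)
print(float(expected_seconds(10)))    # 10 * H_10 ≈ 29.29
```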


Saturday, December 30, 2017

combinatorics - The largest integer dividing $(n+19)(n+18)(n+17)(n+16)$




What is the largest integer that divides $(n+19)(n+18)(n+17)(n+16) ~ \forall ~ n \in \mathbb{N}$?




It struck me that we should write this as:




$\dbinom{n+19}{4}\times 4!$



which is always divisible by $4!$. Now I am confused as to how to take it from here, because the question asks for the largest positive integer.



Edit:



Is there an alternative to the brute-force method, i.e. checking a few values and then assuming a counterexample would never occur?


Answer



Let $f(n) = (n + 16)(n + 17)(n + 18)(n + 19)$.




Then we have that
$$
\gcd(f(1), f(4), f(5)) = \gcd(17 \cdot 18 \cdot 19 \cdot 20, 20 \cdot 21 \cdot 22 \cdot 23, 21 \cdot 22 \cdot 23 \cdot 24) = \gcd(2^3 \cdot 3^2 \cdot 5 \cdot 17 \cdot 19, 2^3 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 23, 2^4 \cdot 3^2
\cdot 7 \cdot 11 \cdot 23) = 2^3 \cdot 3 = 24.
$$



Thus any number that divides $f(n)$ for all $n$ can be at most $24$. You've already shown that $24$ works, so $24$ is the largest such number.
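A brute-force check of the claim (a sketch; any reasonably large range of $n$ works):

```python
from math import gcd

def f(n):
    return (n + 16) * (n + 17) * (n + 18) * (n + 19)

# gcd of f(n) over many n: no integer larger than 24 divides every f(n)
g = 0
for n in range(1, 2000):
    g = gcd(g, f(n))
assert g == 24
```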







Edit: Why did I consider $n = 1$, $n = 4$, and $n = 5$ in particular?



I took $n = 1$ because it is the smallest value that $n$ is allowed to be, and I'd prefer to work with fairly small numbers.



We now continue with the conjecture that the answer should be $24$. The number $f(1)$ has $17$ and $19$ as factors, so to rule out these factors dividing $f(n)$ for all $n$, we need to find an $n$ such that $f(n)$ is divisible by neither $17$ nor $19$. Since the smallest of the four numbers being multiplied is $n + 16$, avoiding the factor $19$ requires $n + 16 > 19$, i.e. $n \geq 4$. There is another reason to take $n$ at least $5$: otherwise $20$ will divide $f(n)$, and we want to rule out the possibility of $5$ dividing $f(n)$ for all $n$. In accordance with our preference for small numbers, we therefore consider the value of $f(5)$.



The problem we now have is that both $f(1)$ and $f(5)$ are divisible by $9$, so we need to find an $n$ such that $f(n)$ is divisible by $3$, but not by $9$. Now if $(n + 16)$ is divisible by $3$, then so is $(n + 19)$, and so to avoid having a factor of $9$, we need to consider an $n$ such that $(n + 16)$ is not divisible by $3$. This rules out $n = 2$. It turns out that $n = 3$ works, but I overlooked this for some reason and went with $n = 4$ instead.


modular arithmetic - Modulus of Fraction




I just want to confirm I am doing this problem correctly.



The problem asks to compute without a calculator:
$$
3 * \frac{2}{5} \pmod 7
$$

The way I am solving the problem:
$$
3 * \frac{2}{5} \bmod 7 \\
3 * 2 * \frac{1}{5} \bmod 7\\

\gcd(5,7) = 1 (\text{inverse exists})
$$

$$
(((3 \bmod 7) * (2 \bmod 7) * (1 \bmod 7)) \bmod 7) \\
(3 * 2 * 1) \bmod 7
$$

$$
6 \bmod 7 = 6
$$




Am I doing this correctly? I just started learning this in class. This is an even-numbered practice problem out of the book, so I cannot check the answer lol.


Answer



We want to find $(3)(2)(x)\pmod{7}$, where $x$ is the inverse of $5$ modulo $7$, that is, where $5x\equiv 1\pmod 7$.



There are general procedures for finding inverses modulo $m$, but $7$ is a very small number, so we can do it efficiently by trial and error. Note that $(5)(3)$ has remainder $1$ on division by $7$. So $x\equiv 3\pmod 7$.



Thus we want to compute $(3)(2)(3)\pmod 7$. This is $4$.
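In Python this can be checked in a couple of lines; since Python 3.8, `pow` accepts a negative exponent to compute modular inverses:

```python
inv5 = pow(5, -1, 7)            # modular inverse of 5 mod 7
assert inv5 == 3
assert (3 * 2 * inv5) % 7 == 4  # the value of 3 * (2/5) mod 7
```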


Cauchy functional equation

Is there $U\subset \Bbb R^2$ with Lebesgue measure $0$ such that


$$f(x+y)=f(x)+f(y)$$ for all $(x, y)\in U$ implies $f(x+y)=f(x)+f(y)$ for all $(x, y)\in\Bbb R^2$ ?

Summation of series involving binomial coefficients



Find the following sum:



$$\sum _{r=0}^{15} {^{15}C_{r}} \hspace{2 mm} (15-r)^{15} \hspace{2 mm} (-1)^r$$



Now the issue I am facing is that the exponent of $(15-r)$ is fixed but the base varies, so how do we deal with that? I also tried writing the series in reverse order and adding up the two equations, but that didn't help either. Could someone help me with this?


Answer



Let $A=\{1,\ldots,15\}$. I claim that the sum
$$\sum_{r=0}^\infty(-1)^r\binom{15}r(15-r)^{15}$$

is the number of surjective functions from $A$ to $A$
(so $15!$). Let $F$ be the set of all functions from $A$
to $A$. Then $|F|=15^{15}$, the $r=0$ term of the sum.
Let $F_j$ be the set of functions in $F$ with $f(k)\ne j$
for all $k$. The surjective functions in $F$ are those in
$$F-(F_1\cup F_2\cup\cdots\cup F_{15}).$$
Use inclusion-exclusion to count this set. There are $\binom{15}r$
$r$-fold intersections of the $F_j$, and each of these has $(15-r)^{15}$
elements.
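The inclusion-exclusion count can be verified directly (a short sketch):

```python
from math import comb, factorial

# Inclusion-exclusion count of surjections from a 15-set onto itself
total = sum((-1) ** r * comb(15, r) * (15 - r) ** 15 for r in range(16))
assert total == factorial(15)
```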


abstract algebra - Proof: the square root of the product of two distinct primes is irrational

I'd like to prove that the square root of the product of two distinct primes $p_1$ and $p_2$ is irrational.
That is,
$$
\sqrt{p_1 p_2} \notin \mathbb{Q}

$$



Would the following be a valid proof?




Suppose $\sqrt{p_1 p_2} \in \mathbb{Q}$.



Then $\sqrt{p_1 p_2} = \frac{a}{b}$ for $a,b \in \mathbb{Z}$ such that $\gcd(a,b) = 1$.
$$
p_1 p_2 = \frac{a^2}{b^2} \implies p_1p_2b^2 = a^2 \implies p_1, p_2 \mid a^2 \implies p_1, p_2 \mid a.

$$
So $a = p_1 p_2 n$, for nonzero $n \in \mathbb{Z}$.



Let the unique prime factorizations be $a = p_1 p_2 (p_{i_1} \cdots p_{i_n})$ and $b=p_{j_1} \cdots p_{j_m}$.



Then
$$
p_1p_2 = \frac{a^2}{b^2} = \frac{p_1^2 p_2^2 (p_{i_1}^2 \cdots p_{i_n}^2)}{p_{j_1}^2 \cdots p_{j_m}^2} = p_1 p_2 \underbrace{\left( \frac{p_1 p_2 (p_{i_1}^2 \cdots p_{i_n}^2)}{p_{j_1}^2 \cdots p_{j_m}^2} \right)}_{=1}
$$
But this implies that $p_1 = p_{j_t}$ and $p_2 = p_{j_s}$ for $t, s \in [1,m]$. That is $p_1, p_2 \mid b$.




Therefore, $p_1$ and $p_2$ are divisors of both $a$ and $b$, which contradicts $\gcd(a,b)=1$.



Hence, $\sqrt{p_1 p_2} \notin \mathbb{Q}$.




P.S. Sorry for the double subscripts.

algebra precalculus - How to find the magnitude squared of square root of a complex number


I'm trying to simplify the expression


$$\left|\sqrt{a^2+ibt}\right|^2$$


where $a,b,t \in \Bbb R$.


I know that by definition



$$\left|\sqrt{a^2+ibt}\right|^2 = \sqrt{a^2+ibt}\left(\sqrt{a^2+ibt}\right)^*$$


But how do you find the complex conjugate of the square root of a complex number? And what is the square root of a complex number (with arbitrary parameters) for that matter?


Answer



For any complex number $z$, and any square root $\sqrt{z}$ of $z$ (there are two), we have $$\bigl|\sqrt{z}\bigr|=\sqrt{|z|}$$ Therefore $$\bigl|\sqrt{a^2+ibt}\bigr|^2=\sqrt{|a^2+ibt|^2}=|a^2+ibt| = \sqrt{a^4+b^2t^2}$$
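A quick numerical confirmation with arbitrary sample parameters (a sketch; the values of $a$, $b$, $t$ are my own choices):

```python
import cmath
import math

a, b, t = 1.3, 0.7, 2.5                 # arbitrary real parameters
z = a ** 2 + 1j * b * t
lhs = abs(cmath.sqrt(z)) ** 2           # |sqrt(z)|^2
rhs = math.sqrt(a ** 4 + (b * t) ** 2)  # sqrt(a^4 + b^2 t^2) = |z|
assert abs(lhs - rhs) < 1e-12
```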


calculus - Is it possible for a continuous function to have a nowhere-continuous derivative?

This is motivated by a question I saw elsewhere that asks whether there is a real-valued function on an interval that contains no monotone subintervals.



Edit: Note that I am asking for a function whose derivative exists but is not continuous anywhere.

Friday, December 29, 2017

elementary number theory - Prove $\gcd(k, l) = d \Rightarrow \gcd(2^k - 1, 2^l - 1) = 2^d - 1$

This is a problem for a graduate level discrete math class that I'm hoping to take next year (as a senior undergrad). The problem is as stated in the title:





Given that $\gcd(k, l) = d$, prove that $\gcd(2^k - 1, 2^l - 1) = 2^d
- 1$.




The problem also says "hint: use Euclid's lemma," although I'm not sure what part of the problem it should be applied to. I've been thinking about it for a few days and I'm completely stumped.



I'm not really even sure how to show that it divides either $2^k - 1$ or $2^l - 1$. From the given, we know that $\exists c: dc = k$. We need to show that $\exists c': (2^d - 1)c' = 2^k - 1$. Obviously $c' = 2^c$ gives you $(2^d - 1)c' = 2^k - 2^c$, but I can't figure out how to get rid of the extra terms that the $- 1$ brings in in various places.



From Euclid's lemma on the left side, you know $\exists i: di = k - l$, and applying it on the right side, you know it suffices to show that $\gcd(2^k - 2^l, 2^l - 1) = 2^d - 1$. And by Bezout's identity, it's enough to show that $2^d - 1$ can be written as a linear combination of either of those things.




Can anyone give me a hint?
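Not a proof, but the identity is easy to test numerically while looking for the pattern (a sketch):

```python
from math import gcd

# gcd(2^k - 1, 2^l - 1) == 2^gcd(k, l) - 1 for small k, l
for k in range(1, 25):
    for l in range(1, 25):
        assert gcd(2 ** k - 1, 2 ** l - 1) == 2 ** gcd(k, l) - 1
```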

calculus - Struggling with an integral with trig substitution



I've got another problem with my CalcII homework. The problem deals with trig substitution in the integral for integrals following this pattern: $\sqrt{a^2 + x^2}$. So, here's the problem:




$$\int_{-2}^2 \frac{\mathrm{d}x}{4 + x^2}$$



I graphed the function and because of symmetry, I'm using the integral: $2\int_0^2 \frac{\mathrm{d}x}{4 + x^2}$



Since the denominator is not of the form: $\sqrt{a^2 + x^2}$ but is basically what I want, I ultimately decided to take the square root of the numerator and denominator:



$$2 \int_0^2 \frac{\sqrt{1}}{\sqrt{4+x^2}}\mathrm{d}x = 2 \int_0^2 \frac{\mathrm{d}x}{\sqrt{4+x^2}}$$



From there, using $\tan\theta = \frac{x}{2} \Rightarrow x = 2\tan\theta \Rightarrow \mathrm{d}x = 2\sec^2\theta\, \mathrm{d}\theta$, I now have:




$$
\begin{array}{rcl}
2\int_{0}^{2}\frac{\mathrm{d}x}{4+x^2} & = & \sqrt{2}\int_{0}^{2}\frac{\mathrm{d}x}{\sqrt{4+x^2}} \\
& = & \sqrt{2}\int_{0}^{2}\frac{2\sec^2(\theta)}{\sqrt{4+4\tan^2(\theta)}}\mathrm{d}\theta \\
& = & \sqrt{2}\int_{0}^{2}\frac{2\sec^2(\theta)}{2\sqrt{1+\tan^2(\theta)}}\mathrm{d}\theta \\
& = & \sqrt{2}\int_{0}^{2}\frac{\sec^2(\theta)}{\sqrt{\sec^2(\theta)}}\mathrm{d}\theta \\
& = & \sqrt{2}\int_{0}^{2}\frac{\sec^2(\theta)}{\sec(\theta)}\mathrm{d}\theta \\
& = & \sqrt{2}\int_{0}^{2}\sec(\theta)\mathrm{d}\theta \\
& = & \sqrt{2}\left[\ln\left|\sec(\theta)+\tan(\theta)\right|\right]_{0}^{2} \\
& = & \sqrt{2}\left[\ln\left|\frac{\sqrt{4+x^2}}{2}+\frac{x}{2}\right|\right]_{0}^{2}
\end{array}
$$



I'm not sure if I've correctly made the integral look like the pattern it's supposed to have. That is, trig substitutions are supposed to be for $\sqrt{a^2 + x^2}$ (in this case that is, there are others). This particular problem is an odd numbered problem and the answer is supposed to be $\frac{\pi}{4}$. I'm not getting that. So, the obvious question is, what am I doing wrong? Also note, I had trouble getting the absolute value bars to produce for the ln: don't know what I did wrong there either.



Thanks for any help,
Andy


Answer



Hint: you can cut your work considerably by using the trig substitution directly into the proper integral, and proceeding (no place for taking the square root of the denominator):




You have $$2\int_0^2 \frac{dx}{4+x^2}\quad\text{and NOT} \quad 2\int_0^2 \frac{dx}{\sqrt{4+x^2}}$$



But that's good, because this integral (on the left) is what you have, and it is already in the form where it is appropriate to use the following substitution:



Let $x = 2 \tan \theta$, which you'll see is standard for integrals of this form.






As suggested by Andrew in the comments, we can arrive at his suggested result, and as shown in Wikipedia:




Given any integral in the form



$$\int\frac{dx}{{a^2+x^2}}$$



we can substitute



$$x=a\tan(\theta),\quad dx=a\sec^2(\theta)\,d\theta, \quad \theta=\arctan\left(\tfrac{x}{a}\right)$$



Substituting gives us:




$$
\begin{align} \int\frac{dx}{{a^2+x^2}}
& = \int\frac{a\sec^2(\theta)\,d\theta}{{a^2+a^2\tan^2(\theta)}} \\ \\
& = \int\frac{a\sec^2(\theta)\,d\theta}{{a^2(1+\tan^2(\theta))}} \\ \\
& = \int \frac{a\sec^2(\theta)\,d\theta}{{a^2\sec^2(\theta)}} \\ \\
& = \int \frac{d\theta}{a} \\ &= \tfrac{\theta}{a}+C \\ \\
& = \tfrac{1}{a} \arctan \left(\tfrac{x}{a}\right)+C \\ \\
\end{align}
$$




Note, you would have gotten precisely the correct result had you not taken the square root of $\sec^2\theta$ in the denominator, i.e., if you had not evaluated the integral of the square root of your function.
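Both the closed form and the original definite integral can be checked numerically (a sketch; the midpoint rule and step count are arbitrary choices of mine):

```python
import math

a = 2
# Closed form: (1/a) * arctan(x/a) evaluated from -2 to 2
exact = (math.atan(2 / a) - math.atan(-2 / a)) / a
assert abs(exact - math.pi / 4) < 1e-12

# Midpoint-rule check of the original integral ∫_{-2}^{2} dx / (4 + x^2)
n = 200_000
h = 4 / n
approx = h * sum(1 / (4 + (-2 + (j + 0.5) * h) ** 2) for j in range(n))
assert abs(approx - math.pi / 4) < 1e-8
```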


discrete mathematics - How many zeroes are there at the end of $36!^{36!}$?

Could you please tell me how many zeroes there are at the end of $36!$ to the power $36!$, i.e., $36!^{36!}$? I have been trying to find out. I read some reviews and answers related to this but didn't understand them at all. Thanks for the help.

limits - About the nature of continuity of trigonometric functions and equality

I was recently somewhat confused by the result of an exercise from a textbook that read:



Question How many solutions are there to the equation $(\tan x)\sin^2(2x)=\cos x$, $-2\pi \leq x \leq 2\pi$?


Correct choice [[from a multiple choice question]] 8


Explanation: Using your GDC to graph the two functions $y=(\tan x)\sin^2(2x)$ and $y=\cos x$ and counting the intersection points over the interval $[-2\pi , 2\pi]$



This is however contrary to what my original answer was, which is $4$. After some research, asking a teacher, and briefly consulting a tutor from the textbook's company, I am still in doubt concerning where my approach is flawed. It is thence that I approach this community in hopes that someone can lead me to a realization or comment on my procedure. My best guess is that maybe I have conceptualized something wrong, maybe to do with limits, or the nature of equality(?). At any rate, I am humbly lost and it would be very gracious from any of you to help, I would really appreciate it. So here we go:



Relevant ideas and concepts considered under my approach


Equality of functions To my knowledge, in order for two functions to be defined as equal (and thus be mathematically the same, even if represented differently) their domains must be formally equal to each other.


Equality of sets To my knowledge, the following is accurate:



Two sets are equal if and only if they have the same elements. [...] for any sets A and B, $A=B\iff[x\in A\iff x\in B]\forall x$



My argument


Let $f(x)=\tan[x]\sin^2[2x]$ and $g(x)=\cos[x]$ both restricted by definition to $-2\pi \leq x \leq 2\pi$


To find: solutions to the equation $f(x)=g(x)$


The solutions to the equation can be found graphically, by discerning the number of intersections between their graphs, naturally noting the boundaries of the restricted general domain of the equation. Alternatively, but in a similar fashion, one can build the set $f \cap g$ and determine its cardinality.



Relevant, however, is that $f(x)$ has its domain restricted further by the nature of the function $\tan(x)$, which is undefined for any value where $x=\frac{(2n-1)\pi}{2}$ for any integer $n$. Graphically, $\tan(x)$ presents asymptotic behaviour at $x=\frac{(2n-1)\pi}{2}$. Let $\mathbb S=\lbrace\frac{-3\pi}{2},\frac{-\pi}{2},\frac{\pi}{2},\frac{3\pi}{2}\rbrace$, the set of $x$-coordinates where $\tan(x)$ presents asymptotes under the restricted domain $[-2\pi,2\pi]$.


Since $\tan(x)$ is a component of the function $f$, the domain of $f$ is $\mathbb D_f= [-2\pi,2\pi]\setminus \mathbb S$, for if $\tan(x)$ is undefined for a specific $x$, then so is $\tan[x]\sin^2[2x]$ (for that same $x$). The function $f$ is thus discontinuous.


It then follows that the set of ordered pairs in the form $(x,y)$ representing the relation function of $f$ does not contain any element for which the x value is contained in the set $\mathbb S$.


Plotting $f$ and $g$ on a graph results in the following image:


//apparently I can't post images 'yet'


jpg image https://pbs.twimg.com/media/CeBRIMMUEAEWBc5.jpg


online graph https://www.desmos.com/calculator/8ypryliymf


Both functions appear to intersect at 8 distinct points. However, if we also plot $x \in \mathbb S$ we can cognise that 4 of these intersections are 'invalid', for $f$ is undefined at those $x$ values. Hence $\lvert f \cap g \rvert \neq 8$; rather, $\lvert f \cap g \rvert = 4$.


So,


being explicit on my conclusion, I would state that I believe that: The domain of $\tan(x)$ does not include any element of $\mathbb S=\lbrace\frac{-3\pi}{2},\frac{-\pi}{2},\frac{\pi}{2},\frac{3\pi}{2}\rbrace$



Then (for the set of ordered pairs $f$) $f \not \ni (x,0) \forall x \in \mathbb S$


i.e. $f \not \supset \lbrace(\frac{-3\pi}{2},0), (\frac{-\pi}{2},0), (\frac{\pi}{2},0), (\frac{3\pi}{2},0)\rbrace$


Then $\mathbb D_f = [-2\pi,2\pi] \setminus \mathbb S \neq [-2\pi,2\pi] = \mathbb D_g = \mathbb D_{\cos x}$


From which it ultimately follows that $f(x) \neq g(x)$ for all $x \in \mathbb S$.


Then $ \lvert f \cap g \rvert = 4 \neq 8$.


-end of my argument-
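The counting above can be checked numerically. Below is a minimal sketch in Python, assuming (inferred from the quoted reply and the Desmos link; this is not stated explicitly in the post) that the equation under discussion is $\tan(x)\sin^2(2x)=\cos(x)$ on $[-2\pi,2\pi]$. Scanning for sign changes of $f-g$ finds the 8 apparent crossings; discarding those whose abscissae lie in $\mathbb S$ leaves 4.

```python
import math

f = lambda x: math.tan(x) * math.sin(2 * x) ** 2   # undefined at odd multiples of pi/2
g = lambda x: math.cos(x)

S = [-3 * math.pi / 2, -math.pi / 2, math.pi / 2, 3 * math.pi / 2]

# Scan [-2*pi, 2*pi] for sign changes of f - g (each transversal crossing flips the sign).
N = 200_000
a, b = -2 * math.pi, 2 * math.pi
crossings = []
prev_x, prev_v = a, f(a) - g(a)
for i in range(1, N + 1):
    x = a + (b - a) * i / N
    v = f(x) - g(x)
    if prev_v * v < 0:
        crossings.append(0.5 * (prev_x + x))
    prev_x, prev_v = x, v

# Crossings landing at an asymptote of tan are not genuine solutions of the original equation.
valid = [c for c in crossings if min(abs(c - s) for s in S) > 1e-3]
print(len(crossings), len(valid))
```

The 4 valid crossings are exactly the solutions of $4\sin^3 x = 1$, which is what the cos-multiplied equation factors into away from the asymptotes.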


I presented this idea to a [presumed] mathematician, who replied as follows:



I understand why you have said what you have done. Allow me to suggest another way of thinking. If we replace $\tan$ with $\frac{\sin}{\cos}$, then multiply both sides by $\cos$, we end up with $\sin(x)\sin^2(2x)=\cos^2(x)$, which is fully defined at all values of $x$, and has $8$ possible solutions. The $\tan$ function is undefined at certain values of $x$, in that the value of $\tan$ tends to infinity. However, when we multiply anything (including infinity) by zero, we do always obtain zero.




While I am not entirely comfortable with algebra involving 'infinities' (cf. "multiply infinity by zero"), I think I understand where they wanted to go. I would, however, insist that even though I agree that, by definition, $\tan(x)\sin^2(2x) = \frac{\sin(x)}{\cos(x)}\sin^2(2x)$ wherever both sides are defined, I cannot agree with the underlying implied proposition that $f(x)=\frac{\sin(x)}{\cos(x)}\sin^2(2x)$ for all $x \in [-2\pi,2\pi]$. This distinction is, to my mind, strictly important to the nature of the problem at hand.


To my mind, that manipulation of the function is a way of removing the discontinuity of $f$, but it is a process whereby the nature of the function is altered: the result does not remain strictly 'equal' to $f$ under the proper definition of 'equality' of functions.


My question about the nature of discontinuities and equality of functions is then:


If $$\tan(x)=\frac{\sin(x)}{\cos(x)},$$ does it logically follow that $$f\Big(\frac{(2n-1)\pi}{2}\Big)=0?$$ (This, I want to emphasize, is not the same statement as $\sin^2\big(2\cdot\frac{(2n-1)\pi}{2}\big)=0$.)


// I am a high-school student, so I offer my sincere apologies if any of the above notation is inappropriate or if the terminology is not on point.

Limit involving exponentials and arctangent without L'Hôpital



$$\lim_{x\to0}\frac{\arctan x}{e^{2x}-1}$$
How can I do this without L'Hôpital's rule and the like? Substituting $\arctan x=y$, we can rewrite it as $\lim_{y\to0}\frac y{e^{2\tan y}-1}$, but from here I'm stuck.


Answer



I thought it might be instructive to present a way forward that goes back to "basics." Herein, we rely only on elementary inequalities and the squeeze theorem. To that end, we proceed with a primer.





PRIMER ON A SET OF ELEMENTARY INEQUALITIES:



In THIS ANSWER, I showed using only the limit definition of the exponential function and Bernoulli's Inequality that the exponential function satisfies the inequalities



$$\bbox[5px,border:2px solid #C0A000]{1+x\le e^x\le \frac{1}{1-x}} \tag 1$$



for $x<1$.



And in THIS ANSWER, I showed using only elementary inequalities from geometry that the arctangent function satisfies the inequalities




$$\bbox[5px,border:2px solid #C0A000]{\frac{|x|}{\sqrt{1+x^2}}\le |\arctan(x)|\le |x|} \tag 2$$



for all $x$.







Using $(1)$ and $(2)$ we can write for $0<x<\frac12$ (so that the right-hand side of $(1)$ applies to $e^{2x}$)



$$\frac{x}{\sqrt{1+x^2}\left(\frac{2x}{1-2x}\right)}\le \frac{\arctan(x)}{e^{2x}-1}\le \frac{x}{2x} \tag 3$$




whereupon applying the squeeze theorem to $(3)$, we find that



$$\lim_{x\to 0^+}\frac{\arctan(x)}{e^{2x}-1}=\frac12$$



Similarly, using $(1)$ and $(2)$ for $x<0$ we can write



$$\frac{x}{\sqrt{1+x^2}\,\left(2x\right)}\le \frac{\arctan(x)}{e^{2x}-1}\le \frac{x}{\left(\frac{2x}{1-2x}\right)} \tag 4$$



whereupon applying the squeeze theorem to $(4)$, we find that




$$\lim_{x\to 0^-}\frac{\arctan(x)}{e^{2x}-1}=\frac12$$




Inasmuch as the limits from the right and left sides are equal we can conclude that



$$\bbox[5px,border:2px solid #C0A000]{\lim_{x\to 0}\frac{\arctan(x)}{e^{2x}-1}=\frac12}$$
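A quick numerical check of the limit, and of the bounds in $(3)$ at a sample point, using `math.expm1` to evaluate $e^{2x}-1$ accurately near $0$:

```python
import math

def ratio(x):
    # arctan(x) / (e^{2x} - 1), with expm1 avoiding cancellation near 0
    return math.atan(x) / math.expm1(2 * x)

# The two-sided limit: the ratio approaches 1/2 from either side.
for x in (0.1, -0.1, 1e-4, -1e-4, 1e-8, -1e-8):
    print(x, ratio(x))

# The bounds of (3) at a sample point in (0, 1/2).
x = 0.3
lower = x / (math.sqrt(1 + x * x) * (2 * x / (1 - 2 * x)))
upper = x / (2 * x)
print(lower, ratio(x), upper)
```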



calculus - determine unit outward normal vector on a curve

I need to find the unit outward normal vector for the curve:



$$\gamma=(x(t),y(t))$$



where
$$x(t)=(0.6)\cos(t)-(0.3)\cos(3t)$$
and
$$y(t)=(0.7)\sin(t)+(0.07)\sin(7t)+(0.1)\sin(3t)$$
I know how to find unit outward normal vector for this: using




$$T=\frac{\gamma'(t)}{||\gamma'(t)||},\;\text{ so }\,N=\frac{T'(t)}{||T'(t)||}$$



but my problem is that I do not have an explicit value of $t$; I only have $x(t)$ and $y(t)$.
How could I find $t$, or find $N$ without needing $t$?



Is there any command in MATLAB or MAPLE to do this?
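There is no need to recover $t$: the curve is already parametrized by $t$, so the normal can be evaluated at any parameter value directly. In the plane one can also avoid computing $T'$ altogether, since rotating the unit tangent $\gamma'/\|\gamma'\|$ by $90^\circ$ gives a unit normal. A sketch in Python; the choice $(y',-x')/\|\gamma'\|$ points outward assuming the curve is traversed counterclockwise, which holds here at $t=0$ and is assumed for the whole curve:

```python
import math

def gamma(t):
    x = 0.6 * math.cos(t) - 0.3 * math.cos(3 * t)
    y = 0.7 * math.sin(t) + 0.07 * math.sin(7 * t) + 0.1 * math.sin(3 * t)
    return x, y

def gamma_prime(t):
    # component-wise derivatives of x(t) and y(t)
    dx = -0.6 * math.sin(t) + 0.9 * math.sin(3 * t)
    dy = 0.7 * math.cos(t) + 0.49 * math.cos(7 * t) + 0.3 * math.cos(3 * t)
    return dx, dy

def unit_normal(t):
    # Rotate the unit tangent (dx, dy)/|gamma'| by -90 degrees: (dy, -dx)/|gamma'|.
    dx, dy = gamma_prime(t)
    norm = math.hypot(dx, dy)
    return dy / norm, -dx / norm

print(unit_normal(0.0))   # at t = 0 the curve's rightmost point, normal points along +x
```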

Thursday, December 28, 2017

functions - $f(A \cap B)\subset f(A)\cap f(B)$, and otherwise?


I have a serious doubt about the following question



Let $f\colon X\longrightarrow Y$ be a function. If $A,B\subset X$, show that $f(A \cap B)\subset f(A)\cap f(B)$



I did as follows


$$\forall\;y\in f(A\cap B)\Longrightarrow \exists x\in A\cap B, \text{ such that } f(x)=y\\ \Longrightarrow x \in A\text{ and }x\in B\Longrightarrow f(x)\in f(A)\text{ and }f(x)\in f(B)\\ \Longrightarrow f(x)\in f(A)\cap f(B)\Longrightarrow y\in f(A)\cap f(B)$$


This ensures that $\forall y \in f(A\cap B)$ then $y\in f(A)\cap f(B)$, therefore $f(A\cap B)\subset f(A)\cap f(B)$.


Okay, we have the full demonstration.


We know that for equality to hold, $f$ must be injective. But my question is how to see that equality need not hold, not by a counterexample, but by finding the error in the following demonstration



$$\forall\;y\in f(A)\cap f(B)\Longrightarrow y\in f(A)\text{ and }y\in f(B) \Longrightarrow \\ \exists x\in A \text{ and } B, \text{ such that } f(x)=y\\ \Longrightarrow x \in A\cap B\ \Longrightarrow f(x)\in f(A\cap B)\Longrightarrow y\in f(A\cap B)$$


Where is the error in the statement? Which of these steps can not do and why?


Answer



You wrote:


$$\forall\;y\in f(A)\cap f(B)\Longrightarrow y\in f(A)\text{ and }y\in f(B) \Longrightarrow \\ \exists x\in A \text{ and } B, \text{ such that } f(x)=y\\ $$


The problem is in the last implication : from $y\in f(A)\text{ and }y\in f(B)$ you get that there exist $x_A\in A$ and $x_B\in B$ such that $f(x_A)=y=f(x_B)$, you cannot assume that $x_A=x=x_B$.
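A concrete counterexample (a hypothetical toy map, not from the original post) makes the failure visible: with a non-injective $f$, the witnesses $x_A$ and $x_B$ really can differ, and the inclusion is strict.

```python
def image(f, s):
    # the set-image f(S) = { f(x) : x in S }
    return {f(x) for x in s}

# Non-injective f: both 0 and 1 are sent to the same value 'a'.
f = {0: 'a', 1: 'a'}.get
A, B = {0}, {1}

lhs = image(f, A & B)             # f(A ∩ B) = f(∅) = ∅
rhs = image(f, A) & image(f, B)   # f(A) ∩ f(B) = {'a'}
print(lhs, rhs)
```

Here $y=\text{'a'}$ lies in $f(A)\cap f(B)$ via $x_A=0$ and $x_B=1$, but no single $x$ lies in $A\cap B$.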


Wednesday, December 27, 2017

trigonometry - Limits of cosine and sine




When $\theta$ is very small, why is $\sin \theta$ approximately $\theta$ and $\cos\theta$ approximately $1$? Is this related to limits, or can we prove it simply by using diagrams?


Answer



On the unit circle, $\theta$ is the length of the arc (as well as the angle subtended by that arc); thus the perimeter of the unit circle is $2\pi$. Meanwhile, $\cos\theta$ is the length of the $X$ intercept, and $\sin\theta$ is the length of the $Y$ intercept.




Look at the following diagram:
enter image description here



You can now easily visualize that as point P approaches $(1,0)$, $\theta \rightarrow 0$. In the limit, the arc in question becomes almost a vertical line, and the $Y$ intercept of the arc has almost the same length as the arc.



Hence as $\theta \rightarrow \ 0$ then $\sin\theta \rightarrow \theta$



And, at that time, the length of the $X$ intercept will get closer and closer to $1$.




Hence as $\theta \rightarrow \ 0$ then $\cos\theta \rightarrow 1$



Also, from this figure, you can easily visualize that when point P approaches $(0,1)$, the $Y$ intercept approaches $1$ and the $X$ intercept has the same length as the remaining part of the arc (from point P to the point $(0,1)$),
which is $\frac{\pi}{2} - \theta$. (Remember that the total length of the arc from $(1,0)$ to $(0,1)$ is $\frac{\pi}{2}$.)



Thus, we have:



$\theta \rightarrow \frac{\pi}{2}$ then $\sin\theta \rightarrow 1$, and



$\theta \rightarrow \frac{\pi}{2}$ then $\cos\theta \approx \frac{\pi}{2}-\theta$ (both sides tending to $0$)



Probability Puzzle: Mutating Loaded Die



Take an (initially) fair six-sided die (i.e. $P(x)=\frac{1}{6}$ for $x=1,…,6$) and roll it repeatedly.



After each roll, the die becomes loaded for the next roll depending on the number $y$ that was just rolled according to the following system:



$$P(y)=\frac{1}{y}$$
$$P(x)=\frac{1 - P(y)}{5} \text{, for } x \ne y$$




i.e. the probability that you roll that number again in the next roll is $\frac{1}{y}$ and the remaining numbers are of equal probability.



What is the probability that you roll a $6$ on your $n$th roll?





NB: This is not a homework or contest question, just an idea I had on a boring bus ride. Bonus points for calculating the probability of rolling the number $x$ on the $n$th roll.


Answer



The transition matrix is given by $$\mathcal P = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ \tfrac{1}{10} & \tfrac{1}{2} & \tfrac{1}{10} & \tfrac{1}{10} & \tfrac{1}{10} & \tfrac{1}{10} \\ \tfrac{2}{15} & \tfrac{2}{15} & \tfrac{1}{3} & \tfrac{2}{15} & \tfrac{2}{15} & \tfrac{2}{15} \\ \tfrac{3}{20} & \tfrac{3}{20} & \tfrac{3}{20} & \tfrac{1}{4} & \tfrac{3}{20} & \tfrac{3}{20} \\ \tfrac{4}{25} & \tfrac{4}{25} & \tfrac{4}{25} & \tfrac{4}{25} & \tfrac{1}{5} & \tfrac{4}{25} \\ \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} \end{bmatrix}.$$ It is fairly easy to get numerical values for the probability distribution of being in state $6$ after $n$ steps, but a closed form solution appears difficult.
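For numerical values one can iterate this matrix with exact rational arithmetic. A sketch in Python (the first roll is fair, as in the question, so the distribution before the first transition is uniform):

```python
from fractions import Fraction as F

# Row y (1-indexed): stay on y with probability 1/y, split the rest evenly
# over the other five faces. Note row 1 makes state 1 absorbing.
P = [[F(1, y) if x == y else (1 - F(1, y)) / 5 for x in range(1, 7)]
     for y in range(1, 7)]

def step(v, M):
    # one left-multiplication of a row distribution by the transition matrix
    return [sum(v[i] * M[i][j] for i in range(6)) for j in range(6)]

dist = [F(1, 6)] * 6          # roll 1 is fair
print(1, float(dist[5]))
for n in range(2, 7):
    dist = step(dist, P)
    print(n, float(dist[5]))  # P(roll n is a 6)
```

Because state 1 is absorbing, the probability mass drifts toward face 1 as $n$ grows, which is one reason a tidy closed form is elusive.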



real analysis - How to Prove something is differentiable


I know how differentiability is defined in terms of the existence of the limit of $(f(x) - f(x_0)) / (x - x_0)$ as $x$ goes to $x_0$, but I was wondering if there are other useful theorems / lemmas I could use to show a function is differentiable?


Note: I am aware of the technique that if I can express my function in terms of a sum/product/quotient of functions that I know are differentiable, then I can just use the product rule, etc. to find the derivatives on top of showing that the function is differentiable.


But are there other lemmas or theorems that are also helpful? (For example, an equivalent definition of continuity is that preimages of open sets are open)


Answer



There are several theorems that you did not mention:


  • if $f$ and $g$ are differentiable, then $g\circ f$ is differentiable too (and $(g\circ f)'=(g'\circ f)\times f'$);

  • if $f$ is invertible and $f'$ is never $0$, then $f^{-1}$ is differentiable too (and $(f^{-1})'=\frac1{f'\circ f^{-1}}$);


  • if $(f_n)_{n\in\mathbb N}$ is a sequence of differentiable functions which converges pointwise to a function $f$ and if the sequence $(f_n')_{n\in\mathbb N}$ converges uniformly to a function $g$, then $f$ is differentiable (and $f'=g$);

  • if $f$ is continuous and $F(x)=\int_a^xf(t)\,\mathrm dt$, then $F$ is differentiable (and $F'=f$).

linear algebra - Is this circular reasoning, when trying to prove that similar matrices have the same eigenvalues and characteristic polynomial?



I am trying to prove that similar nxn matrices A and B share the same eigenvalues and characteristic polynomial.



I am not sure how to start, but I am tempted to start with something like



$$tr(A) = tr(B)$$




and



$$det(A) = det(B)$$



since we know that the matrices are similar.



But, this approach sort of seems to be using what I am trying to prove.



Is this a valid approach? Or, should I head in another direction instead?




Thanks,


Answer



If $A$ is similar to $B$ then there exists invertible $P$ such that $A=P^{-1}BP$.



We can rewrite this as $PA=BP$.



Suppose $\lambda$ is an eigenvalue of $A$ with eigenvector $v$. Then $$PAv=P(\lambda v)=\lambda Pv=BPv$$



$\therefore\lambda$ is an eigenvalue of $B$ with eigenvector $Pv$.




Switching the roles of $A$ and $B$ gives you that an eigenvalue of $B$ is an eigenvalue of $A$.
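As a concrete sanity check (an illustrative toy example, not a proof), one can verify with exact arithmetic that a $2\times 2$ matrix and a conjugate $P^{-1}AP$ share the characteristic polynomial $x^2-\operatorname{tr}(M)\,x+\det(M)$:

```python
from fractions import Fraction as F

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def char_poly(M):
    # For a 2x2 matrix, det(xI - M) = x^2 - tr(M) x + det(M),
    # so (tr, det) determines the characteristic polynomial.
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return tr, det

A = [[F(2), F(1)], [F(0), F(3)]]
P = [[F(1), F(2)], [F(1), F(3)]]          # invertible: det = 1
P_inv = [[F(3), F(-2)], [F(-1), F(1)]]    # inverse of P
B = mat_mul(mat_mul(P_inv, A), P)         # B = P^{-1} A P, similar to A
print(char_poly(A), char_poly(B))
```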


integration - How to evaluate $\lim_{x\to +\infty} \int_x^{x^3} \frac{dt}{(\ln(t))^2}$?



I'm trying to compute $\lim_{x\to +\infty} \int_x^{x^3} \frac{dt}{(\ln(t))^2}$, but I'm stuck.




I've tried to do a change of variable in order to get $u\to 0$ and then use a Taylor expansion... But nothing works.


Answer



Since $\log t$ is an increasing function, we have:
$$ I(x) = \int_{x}^{x^3}\frac{dt}{\log^2 t} \geq \frac{x^3-x}{\log^2(x^3)}$$
so, simply,
$$ \lim_{x\to +\infty} I(x) = +\infty.$$


Tuesday, December 26, 2017

number theory - Find integer solutions to linear congruence



Find all integer solutions of $2n \equiv 12 \bmod 19$


So I have re-arranged to: $2x-19y=12$ and by the extended Euclidean Algorithm, I get $$x=1 \ $$ $$y=-9$$


However, this is as far as I was able to get, and I am not sure what follows past this point. What exactly are we looking for?


Answer



I would say there are infinitely many. Another way to think of $2n \equiv 12 \pmod{19}$ is that $2n - 12 = 19k$ for some $k \in \mathbb{Z}$. So $n = \frac{19k + 12}{2}$. Thus any even $k$ will produce an integer solution.


EDIT: It might be better to think in terms of equivalence classes for the sake of negative $k$ values. So maybe $[n] = \bigl[\frac{19k + 12}{2}\bigr]$ in $\mathbb{Z}_{19}$. For example: If $k = -2$, then $[n]_{19} = [-13]_{19} = [6]_{19}$.
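Both descriptions single out one residue class modulo $19$, which a brute-force sketch confirms:

```python
# All solutions of 2n ≡ 12 (mod 19) in one period: gcd(2, 19) = 1,
# so there is exactly one residue class.
sols = [n for n in range(19) if (2 * n) % 19 == 12 % 19]
print(sols)

# The parametrization n = (19k + 12)/2 for even k lands in the same class.
param = sorted({((19 * k + 12) // 2) % 19 for k in range(0, 40, 2)})
print(param)
```

So the integer solutions are exactly $n = 6 + 19t$, $t \in \mathbb{Z}$.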


combinatorics - Simple expression for this sum?



Is there any simple expression for the sum:
$$
S = \sum_{n = 0}^{N-1} \frac{1}{a + e^{2 \pi i n / N}}
$$
where $ N $ is a positive integer and $ a $ is some real number. It feels to me like there should be, but I am unable to find it. Similarly, any formula for
$$
P = \prod_{n=0}^{N-1} \left( a + e^{2 \pi i n / N} \right)
$$

would be useful since $ S = \frac{d}{da} \log P $.


Answer



Yes. The roots of $x^N - 1$ are the $N$-th roots of unity, so $$x^N-1 = \prod_{n=0}^{N-1} (x - e^{2 \pi i n / N}).$$ To get the plus sign, replace $x$ with $-x$: $$(-x)^N -1 = \prod_{n=0}^{N-1} (-x - e^{2 \pi i n / N}) = (-1)^N\prod_{n=0}^{N-1} (x + e^{2 \pi i n / N}) .$$ This means $P = a^N - (-1)^N$, i.e. $P = a^N -1$ if $N$ is even, and $P = a^N +1$ if $N$ is odd. Consequently $S = \frac{d}{da} \log P = \frac{N a^{N-1}}{a^N - (-1)^N}$.
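A numeric spot check of both closed forms, writing the exponent uniformly as $N$, so $P=a^N-(-1)^N$ and $S=\frac{d}{da}\log P=\frac{Na^{N-1}}{a^N-(-1)^N}$:

```python
import cmath

def sum_and_product(a, N):
    # direct evaluation of S and P over the N-th roots of unity
    zs = [cmath.exp(2j * cmath.pi * n / N) for n in range(N)]
    S = sum(1 / (a + z) for z in zs)
    P = 1
    for z in zs:
        P *= a + z
    return S, P

a, N = 1.7, 6
S, P = sum_and_product(a, N)
closed_P = a ** N - (-1) ** N            # a^N - 1 for even N, a^N + 1 for odd N
closed_S = N * a ** (N - 1) / closed_P   # d/da log P
print(P, closed_P)
print(S, closed_S)
```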


sequences and series - Problem about sum of arithemtic progression and geometric progression



Question: An arithmetic sequence has a common difference of $1$ and a geometric sequence has a common ratio of $3$. A new sequence is formed by adding corresponding terms of these two progressions. It is given that the second and fourth term of the sequence are $12$ and $86$ respectively. Find, in terms of $n$,




  1. the $n^{th}$ term,

  2. the sum of the first $n$ terms



of the new sequence.




Answers:




  1. $n+1+3^n$.

  2. $\frac 1 2 n(n+3) + \frac{3}{2} 3^n - \frac{3}{2}$.



Working so far




I am stuck getting different values for $a$.



enter image description here


Answer



The problem seems to be that you assume that the first term in both sequences is $a$.



Correct equations would be
$$u_2=a+1+3b=12$$
$$u_4=a+3+27b=86$$
where $a$ denotes the initial term of the arithmetic progression and $b$ is the initial term of the geometric progression.




I guess you can take it from here. (I got $b=3$ and $a=2$.)
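A quick check of the fitted values ($a=2$, $b=3$) against the given data and both stated answers, sketched in Python:

```python
a, b = 2, 3   # initial terms of the AP (difference 1) and GP (ratio 3)

def u(n):
    # n-th term of the combined sequence: AP term plus GP term
    return (a + (n - 1)) + b * 3 ** (n - 1)

def s(n):
    # sum of the first n terms
    return sum(u(k) for k in range(1, n + 1))

print(u(2), u(4))   # the given data: 12 and 86
print([u(n) for n in range(1, 6)])
```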


real analysis - Can it be that $f$ and $g$ are everywhere continuous but nowhere differentiable but that $f \circ g$ is differentiable?

So, I was just asking myself whether something like this can happen. I was thinking about everywhere continuous but nowhere differentiable functions $f$ and $g$, and the natural question arose of whether the composition $f \circ g$ can be differentiable. In other words, can the operation of composition somehow "smoothen" the irregularities of $f$ and $g$ which make them non-differentiable, in such a way that the composition becomes differentiable?



So here is the question again:




Suppose that $f$ and $g$ are everywhere continuous but nowhere differentiable functions. Can $f \circ g$ be differentiable?





If such an example exists it would be interesting, because the rule $(f(g(x)))'=f'(g(x)) \cdot g'(x)$ would not hold; and not only would it not hold, it would not even make sense, because $f$ and $g$ are not differentiable.

Monday, December 25, 2017

elementary number theory - Prove that $n(n^2 - 1)(n + 2)$ is divisible by $4$ for any integer $n$


Prove that $n(n^2 - 1)(n + 2)$ is divisible by $4$ for any integer $n$





I cannot understand how to prove it. Please help me.

algebra precalculus - Solve using complex analysis?




Given that $\cos(A-B) + \cos(B-C) + \cos(C-A) = -\frac{3}{2}$,
we need to prove that $\cos A + \cos B + \cos C = \sin A + \sin B + \sin C = 0$.



I was wondering if it's possible to prove this result by showing that the real and imaginary parts of $z = \cos A + \cos B + \cos C$ are equal to zero, somehow invoking Vieta's or De Moivre's theorem if required. I tried, starting with $\cos(B-C)$ and other cyclic terms but couldn't really get anywhere.



Any other method is also appreciated.



Thanks a lot!


Answer



Let $u$, $v$, $w$ be $e^{iA}$, $e^{iB}$ and $e^{iC}$.

Then
$$(u+v+w)(u^{-1}+v^{-1}+w^{-1})
=2\cos(A-B)+2\cos(B-C)+2\cos(C-A)+3.$$
Your condition is equivalent to $(u+v+w)(u^{-1}+v^{-1}+w^{-1})=0$.
These brackets are complex conjugates (since $|u|=|v|=|w|=1$), so the product equals $|u+v+w|^2$ and hence $u+v+w=0$,
and this means
$$(\cos A+\cos B+\cos C)+i(\sin A+\sin B+\sin C)=0.$$
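One can sanity-check the conclusion on a family of triples with $u+v+w=0$: three directions $120^\circ$ apart (an illustrative sample, not a proof):

```python
import math
import random

random.seed(0)
for _ in range(5):
    A = random.uniform(0, 2 * math.pi)
    # One family of solutions: angles 120 degrees apart, so e^{iA}+e^{iB}+e^{iC}=0.
    B, C = A + 2 * math.pi / 3, A + 4 * math.pi / 3
    lhs = math.cos(A - B) + math.cos(B - C) + math.cos(C - A)
    cos_sum = math.cos(A) + math.cos(B) + math.cos(C)
    sin_sum = math.sin(A) + math.sin(B) + math.sin(C)
    print(round(lhs, 12), round(cos_sum, 12), round(sin_sum, 12))
```

Each sample gives $-\frac32$ on the left and vanishing sums of cosines and sines.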


modular arithmetic - Is it possible to solve this congruency system?

I need help to determine if this congruency system can be solved and if it can be solved how do I do it:
$$\begin{cases}x\equiv2\text{ (mod $3$)}\\
x\equiv4\text{ (mod $6$)}\\

\end{cases}$$



I do know that from the system I obtain the following:



$$\begin{align}
x\equiv2\text{ (mod $3$)}\\
x\equiv4\text{ (mod $2$)}\\
x\equiv4\text{ (mod $3$)}\\
\end{align}$$




I do not know what to conclude from here. I think this system has no solution, but if that is so, how do I prove it?
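The system indeed has no solution: $x\equiv 4 \pmod 6$ forces $x\equiv 1\pmod 3$, contradicting $x\equiv 2\pmod 3$. A brute-force sketch of both facts:

```python
# Any solution pattern repeats with period lcm(3, 6) = 6, so scanning a few
# full periods is enough to rule one out.
sols = [x for x in range(60) if x % 3 == 2 and x % 6 == 4]
print(sols)

# The reason: every x ≡ 4 (mod 6) satisfies x ≡ 1 (mod 3).
residues = {x % 3 for x in range(4, 64, 6)}
print(residues)
```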

probability - Expectation of $Y^{\alpha}$ with $\alpha >0$


Let $Y$ be a positive random variable. For $\alpha>0$ show that


$E(Y^{\alpha})=\alpha \int_{0}^{\infty}t^{\alpha -1}P(Y>t)dt$.


My ideas:


$E(Y^{\alpha})= \int_{-\infty}^{\infty}t^{\alpha}f_{Y}(t)dt$



=$\int_{0}^{\infty}t^{\alpha}f_{Y}(t)dt$


=$\int_{0}^{\infty}(\int_{0}^{t^{\alpha}}dy)f_{Y}(t)dt$


Answer



$E(Y^\alpha)=\int_0^\infty t^\alpha f_y(t)dt$. Let $G_y(t)=P(Y\gt t)=1-F_y(t)$. Therefore $G'_y(t)=-f_y(t)$. Integrate $E(Y^\alpha)$ by parts and get $E(Y^\alpha)=-t^\alpha G_y(t)\rbrack_0^\infty +\alpha \int_0^\infty t^{\alpha-1}G_y(t)dt={\alpha \int_0^\infty t^{\alpha-1}P(Y\gt t)dt}$.
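The identity is easy to test numerically for a concrete $Y$. A sketch with $Y\sim\text{Exponential}(1)$, where $P(Y>t)=e^{-t}$ and $E(Y^{\alpha})=\Gamma(\alpha+1)$; the midpoint rule is used and the truncation of the integral at $T=50$ is negligible:

```python
import math

alpha = 2.5
dt, T = 1e-4, 50.0
# alpha * integral_0^T of t^(alpha-1) * P(Y > t) dt via the midpoint rule
rhs = alpha * sum(((i + 0.5) * dt) ** (alpha - 1) * math.exp(-(i + 0.5) * dt) * dt
                  for i in range(int(T / dt)))
lhs = math.gamma(alpha + 1)   # E(Y^alpha) for the Exponential(1) distribution
print(lhs, rhs)
```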


Sunday, December 24, 2017

summation - Find the value of the sum $\frac{1}{1000\cdot1998}+\frac{1}{1001\cdot1997}+\cdots+\frac{1}{1998\cdot1000}$

Find the value of the sum $\frac{1}{1000\cdot1998}+\frac{1}{1001\cdot1997}+\cdots+\frac{1}{1998\cdot1000}$.



I attempted to express it as a telescoping sum, but I don't know how.
Also, I was curious: is there a closed formula for the general sum

$\frac{1}{xy}+\frac{1}{(x+1)(y-1)}+\cdots+\frac{1}{yx}$, for positive integers $x
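One standard route (a partial-fraction collapse rather than a telescope) uses $\frac{1}{k(2998-k)}=\frac{1}{2998}\left(\frac1k+\frac{1}{2998-k}\right)$, which turns the sum into $\frac{2}{2998}\sum_{k=1000}^{1998}\frac1k$, since the second family of terms is the first one reversed. A check with exact arithmetic:

```python
from fractions import Fraction as F

# S = sum_{k=1000}^{1998} 1/(k(2998-k))
S = sum(F(1, k * (2998 - k)) for k in range(1000, 1999))

# Partial fractions: S = (2/2998) * sum_{k=1000}^{1998} 1/k
closed = F(2, 2998) * sum(F(1, k) for k in range(1000, 1999))
print(float(S), float(closed))
```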

induction - Prove that $a^{2n} -b^{2n}$ is divisible by $a+b$





Prove that for all $n\in\Bbb{Z}$, $a^{2n}-b^{2n}$ is divisible by $a+b$ using induction.







I know that if $a$ is divisible by $b$, then $a=kb$, where $k\in\Bbb{Z}$. Here we have that $a^{2n}-b^{2n}=(a+b)k$, with $k\in\Bbb{Z}$.




For the base case I set $n=1$, so $a^2-b^2=(a+b)(a-b)=(a+b)k$, where $k=a-b\in\Bbb{Z}$.



Now the inductive step (where I have doubts): $$a^{2n}-b^{2n}=(a+b)k\implies a^{2(n+1)}-b^{2(n+1)}=(a+b)m,\;k,m\in\Bbb{Z}.$$ We start from $a^{2(n+1)}-b^{2(n+1)}$. Then $$a^{2n+2}-b^{2n+2}=(a+b)\color{red}{(a^{2n+1}-a^{2n}b+a^{2n-1}b^2-\dots-a^2b^{2n-1}+ab^{2n}-b^{2n+1})},$$ so $a^{2(n+1)}-b^{2(n+1)}=(a+b)m$, where $m=a^{2n+1}-a^{2n}b+a^{2n-1}b^2-\dots-a^2b^{2n-1}+ab^{2n}-b^{2n+1}\in\Bbb{Z}.\qquad\square$ (The signs alternate strictly: the coefficient of $a^{2n+1-i}b^i$ is $(-1)^i$.)






I have two questions:




  1. Is the math in $\color{red}{\text{red}}$ a correct decomposition of $a^{2(n+1)}-b^{2(n+1)}$?


  2. We have not used the inductive hypothesis. Could we use it?


Answer



The decomposition in red is correct. You did not use the inductive hypothesis because your factorization proves the claim directly, without induction. But you could have used it in the following way:



Since $$\begin{align}
a^{2n+2}-b^{2n+2}=& a^{2n+2}-a^2b^{2n}+a^2b^{2n}-b^{2n+2} \\
=& a^2 (a^{2n}-b^{2n})+b^{2n}(a^2-b^2) \\
=& a^2 (a+b)k+b^{2n}(a-b)(a+b) \\
=&(a+b)\cdots

\end{align}$$

Hence the assertion is true for $n+1$ as well.
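The divisibility claim itself is easy to spot-check over many integers (including negatives and $n=0$):

```python
# a + b divides a^(2n) - b^(2n) whenever a + b != 0
ok = all((a ** (2 * n) - b ** (2 * n)) % (a + b) == 0
         for a in range(-6, 7) for b in range(-6, 7) for n in range(5)
         if a + b != 0)
print(ok)
```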


circles - contradicting the $\pi=4$ fallacy.




I know that you can take area out of a square without changing its perimeter. Now, here's this problem:


Draw a circle with dia = 1;



Draw a square around with perimeter = 4, side = 1;


Remove the corners; the perimeter is still the same.


Repeat this process infinitely many times; see the image below.


http://i.stack.imgur.com/qUYei.jpg


I know there is something missing, but what is it? I am not a mathematician, not even the worst one. Please be kind and answer in English as much as possible. Thanks

Saturday, December 23, 2017

real analysis - How to evaluate the following limit




How can I determine the following limit$$\lim_{n \to \infty} \frac{-\ln(n)}{n^x}$$ where $x \in [2, \infty)$



WolframAlpha tells me the limit is $0$, but I am not sure how to go about calculating it manually, I suspect L'Hospital plays a role?


Answer



From L'Hospital's Rule, we have



$$\lim_{n\to \infty}\frac{-\ln n}{n^x}=\lim_{n\to \infty}\frac{-1/n}{xn^{x-1}}=-\frac1x\lim_{n\to \infty}\frac{1}{n^x}=0$$



when $x>0$



Friday, December 22, 2017

real analysis - If $f\colon \mathbb{R} \to \mathbb{R}$ is such that $f(x + y) = f(x)f(y)$ and continuous at $0$, then continuous everywhere




Prove that if $f\colon\mathbb{R}\to\mathbb{R}$ is such that $f(x+y)=f(x)f(y)$ for all $x,y$, and $f$ is continuous at $0$, then it is continuous everywhere.





If there exists $c \in \mathbb{R}$ such that $f(c) = 0$, then
$$f(x + c) = f(x)f(c) = 0.$$
As every real number $y$ can be written as $y = x + c$ for some real $x$, this function is either everywhere zero or nowhere zero. The latter case is the interesting one. So let's consider the case that $f$ is not the constant function $f = 0$.



To prove continuity in this case, note that for any $x \in \mathbb{R}$
$$f(x) = f(x + 0) = f(x)f(0) \implies f(0) = 1.$$



Continuity at $0$ tells us that given any $\varepsilon_0 > 0$, we can find $\delta_0 > 0$ such that $|x| < \delta_0$ implies
$$|f(x) - 1| < \varepsilon_0.$$




Okay, so let $c \in \mathbb{R}$ be fixed arbitrarily (recall that $f(c)$ is nonzero). Let $\varepsilon > 0$. By continuity of $f$ at $0$, we can choose $\delta > 0$ such that
$$|x - c| < \delta\implies |f(x - c) - 1| < \frac{\varepsilon}{|f(c)|}.$$



Now notice that for all $x$ such that $|x - c| < \delta$, we have
$$\begin{align*}
|f(x) - f(c)| &= |f(x - c + c) - f(c)|\\
&= |f(x - c)f(c) - f(c)|\\
&= |f(c)| |f(x - c) - 1|\\
&\lt |f(c)| \frac{\varepsilon}{|f(c)|}\\
&= \varepsilon.

\end{align*}$$
Hence $f$ is continuous at $c$. Since $c$ was arbitrary, $f$ is continuous on all of $\mathbb{R}$.



Is my procedure correct?


Answer



One easier thing to do is to notice that $f(x)=(f(x/2))^2$ so $f$ is positive, and assume that it is never zero, since then the function is identically zero. Then you can define $g(x)=\ln f(x)$ and this function $g$ will satisfy the Cauchy functional equation
$$ g(x+y)=g(x)+g(y)$$
and the theory for this functional equation is well known, and it is easy to see that $g$ is continuous if and only if it is continuous at $0$.


Why can't erf be expressed in terms of elementary functions?



I have seen this claim on Wikipedia and other places. Which branch of mathematics does this result come from?


Answer



Differential Galois theory.


probability distributions - Inequality for $N(0,1)$ CDF: $|log F(v)|leq |log F(0)|+|v|+|v|^2$




Suppose that $F$ is the CDF of a standard normal distribution. Hayashi (2000) claims that the following is true
$$
|\log F(v)|\leq |\log F(0)|+|v|+|v|^2\quad\text{for all}\quad v.
$$
How does one prove something like this please? I think it looks like it's based on a Taylor approximation but what has happened to the error term?





Edit 2: I have plotted
$$
|\log F(0)|+|v|+|v|^2-|\log F(v)|
$$
for $v\in[-25,25]$. It looks something like this:



enter image description here


Answer



$$F(v)=\frac{1}{\sqrt{2\pi}}\int^{v}_{-\infty}\exp(-\frac{x^2}{2})\,dx$$
$$F(0)=\frac{1}{2}$$

the inequality is equivalent to
$$|\log F(v)|-|\log(\frac{1}{2})|\leq|v|+|v|^2\Leftrightarrow \log\frac{1}{F(v)}-\log2\leq|v|+|v|^2\Leftrightarrow\log\frac{1}{2F(v)}\leq|v|+|v|^2$$
exponentiating both sides yields
$$\frac{1}{2F(v)}\leq\exp(|v|+|v|^2)\Leftrightarrow 2F(v)\geq \exp(-|v|-|v|^2)$$
Consider the function
$$G(v)=2F(v)-\exp(-|v|-|v|^2)=2F(v)-\exp(-|v|-v^2)$$
then the problem is equivalent to showing that
$$G(v)\geq0$$
for all $v\in\mathbb{R}$.
First consider $v\geq0$ then we have

$$G(v)=2F(v)-\exp(-v-v^2)\Rightarrow G'(v)=2F'(v)+(1+2v)\exp(-v-v^2)$$$$=\frac{2}{\sqrt{2\pi}}\exp(-\frac{v^2}{2})+(1+2v)\exp(-v-v^2)$$
$$=\exp(-\frac{v^2}{2})[\frac{2}{\sqrt{2\pi}}+(1+2v)\exp(-v-\frac{v^2}{2})]\geq0$$
for all $v\geq0$. Therefore
$$G(v)\geq G(0)=0$$
for all $v\in [0,\infty)$.
Secondly let $v<0$ then we obtain
$$G(v)=2F(v)-\exp(v-v^2)\Rightarrow \frac{dG}{dv}=2\frac{dF}{dv}-(1-2v)\exp(v-v^2)$$
$$=\exp(-\frac{v^2}{2})[\frac{2}{\sqrt{2\pi}}-(1-2v)\exp(v-\frac{v^2}{2})]$$
But the function
$$g(v)=\frac{2}{\sqrt{2\pi}}-(1-2v)\exp(v-\frac{v^2}{2})\geq g(\frac{-3-\sqrt{17}}{4})>0$$

for all $v\in (-\infty,0)$. Therefore
$$\frac{dG}{dv}=\exp(-\frac{v^2}{2})\cdot g(v)>0$$
for all $v\in (-\infty,0)$. Since the derivative is positive in this domain then it must be true that
$$G(v)\geq \lim_{v\to-\infty}G(v)=\lim_{v\to-\infty}\{2F(v)-\exp(v-v^2)\}=0$$
for all $v\in (-\infty,0)$.


Thursday, December 21, 2017

probability theory - If $Z=X$ on $A$ and $Z=Y$ on $A^c$ then $Z$ is a random variable




Let $X$ and $Y$ be random variables and let $A \in \mathcal{B}$. Prove that the function $Z$ defined by
$$Z(\omega)=\begin{cases}
X(\omega),& \text{ if } \omega \in A \\

Y(\omega),& \text{ if } \omega \in A^{c}
\end{cases}$$
is a random variable




Proof so far:
$$Z^{-1}((-\infty,a])=\{\omega:Z(\omega)\geq a\}=\{\omega: Z(\omega)\geq a, \omega \in A\}\cup\{\omega: Z(\omega)\leq a, \omega \in A^{c}\}=Y^{-1}[a,\infty) \cup X^{-1}([a,\infty))$$
So $Z$ is measurable


Answer



Let $X, Y$ be random variables in $(\Omega, \mathcal B, \mathbb P)$.




If $A \in \mathcal B$, then $1_A$ and $1_{A^C}$ are random variables.



Note that



$$Z = X1_A + Y1_{A^C}$$



Since sums or products of random variables in $(\Omega, \mathcal B, \mathbb P)$ are random variables in $(\Omega, \mathcal B, \mathbb P)$, $Z$ is a random variable in $(\Omega, \mathcal B, \mathbb P)$.







As for your proof, I think you should say:




  1. $\forall a \in \mathbb R$


  2. have $Z \ge a$ instead of $Z \le a$


  3. $Z$ is $\mathcal B$-measurable



real analysis - $\lim_{p\to \infty}\Vert f\Vert_{p}=\Vert f\Vert_{\infty}$?








On the $L_p$ spaces, when is $$\lim_{p\to \infty}\| f\|_{p}=\| f\|_{\infty}$$ true? What conditions are necessary for this?

real analysis - $0^0$ -- indeterminate, or $1$?


One of my teachers argued today that $0^0 = 1$. However, WolframAlpha, intuition(?), and various other sources say otherwise: $0^0$ doesn't really "mean" anything.


Can anyone clear this up with a rigorous explanation?


Answer



Short answer: It depends on your convention and how you define exponents.



Long answer: There are a number of ways of defining exponents. Usually these definitions coincide, but this is not so for $0^0$: some definitions yield $0^0=1$ and some don't apply when both numbers are zero (leaving $0^0$ undefined).


For example, given nonnegative whole numbers $m$ and $n$, we can define $m^n$ to be the number of functions $A \to B$, where $A$ is a set of size $n$ and $B$ is a set of size $m$. This definition gives $0^0=1$ because the only set of size $0$ is the empty set $\varnothing$, and the only function $\varnothing \to \varnothing$ is the empty function.


However, an analyst might not want $0^0$ to be defined. Why? Because look at the limits of the following functions: $$\lim_{x \to 0^+} 0^x = 0, \qquad \lim_{x \to 0} x^0 = 1, \qquad \lim_{t \to 0^+} (e^{-1/t^2})^{-t} = \infty$$ All three limits look like $0^0$. So when this is desired, you might want to leave $0^0$ undefined, so that it's a lack of definition rather than a discontinuity.


Typically this is resolved by:


  • If you're in a discrete setting, e.g. considering sets, graphs, integers, and so on, then you should take $0^0=1$.

  • If you're in a continuous setting, e.g. considering functions on the real line or complex plane, then you should take $0^0$ to be undefined.

Sometimes these situations overlap. For example, usually when you define functions by infinite series $$f(x) = \sum_{n=0}^{\infty} a_nx^n$$ problems occur when you want to know the value of $f(0)$. It is normal in these cases to take $0^0=1$, so that $f(0)=a_0$; the reason being that we're considering what happens as $x \to 0$, and this corresponds with $\lim_{x \to 0} x^0 = 1$.
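The counting definition can even be carried out mechanically; a small sketch that counts functions by enumerating all assignments of images:

```python
from itertools import product

def num_functions(n, m):
    # number of functions from an n-element set A to an m-element set B:
    # a function is a choice of an image in B for each element of A
    A, B = range(n), range(m)
    return sum(1 for _ in product(B, repeat=len(A)))

# 0^0 = 1 here: the single empty function from the empty set to itself.
print(num_functions(0, 0), num_functions(3, 2), num_functions(0, 5), num_functions(2, 0))
```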


calculus - Indeterminate form $0^0$ using L'Hospital's rule when calculating $\lim_{x\to0^+} x^{\sin(x)}$


Given the question $$\lim_{x\to0^+} x^{\sin(x)}$$



I have deducted so far that this has the indeterminate form $0^0$ so I have taken the natural logarithm of both sides to give me: $$\lim_{x\to0^+} \ln(y) = \lim_{x\to0^+}\sin(x)\ln(x)$$ This still has an indeterminate form so I rearrange it to start using L'hospitals rule: $$\lim_{x\to0^+}\frac{\ln(x)}{\csc(x)}$$ $$=\lim_{x\to0^+}\frac{\frac1x}{-\csc(x)\cot(x)}$$ $$=\lim_{x\to0^+}\frac{-\sin(x)\tan(x)}{x}$$ $$=\lim_{x\to0^+}\frac{-\cos(x)\sec^2(x)}{1} = -1$$ $$\lim_{x\to0^+}x^{\sin(x)} = \frac1e$$


I don't think this is correct, am I doing something wrong, and if so, where?


Answer




$$\lim_{x\to0^+} x^{\sin(x)}$$



$$=\lim_{x\to0^+} \exp(\ln(x)\sin(x))$$


$$=\exp(\lim_{x\to0^+} \ln(x)\sin(x))$$


$$=\exp(\lim_{x\to0^+} \frac{\ln(x)}{1/\sin(x)})$$


Applying L'Hôpital's rule:



$$=\exp(\lim_{x\to0^+} -\frac{\sin(x)^2}{x\cos(x)})$$


$$=\exp(-1\cdot \lim_{x\to0^+} \frac{1}{\cos(x)}\lim_{x\to0^+} \frac{\sin(x)^2}{x})$$


$$=\exp(-\lim_{x\to0^+} \frac{\sin(x)^2}{x})$$


Applying L'Hôpital's rule:


$$=\exp(-\lim_{x\to0^+} 2\cos(x)\sin(x))$$


$$\color{grey}{e^0=1}$$


$$=\boxed {\color{blue}1}$$
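A numerical sanity check of the limit:

```python
import math

xs = [10.0 ** -k for k in range(1, 9)]
vals = [x ** math.sin(x) for x in xs]   # approaches 1 as x -> 0+
for x, v in zip(xs, vals):
    print(x, v)
```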


Wednesday, December 20, 2017

calculus - What is wrong with treating $\dfrac {dy}{dx}$ as a fraction?





If you think about the limit definition of the derivative, $dy$ represents $$\lim_{h\rightarrow 0}\dfrac {f(x+h)-f(x)}{h},$$ and $dx$ represents
$$\lim_{h\rightarrow 0}.$$
So you have a $\dfrac{\text{number}}{\text{another number}} = \text{a fraction}$, so why can't you treat it as one? Thanks! (By the way, if possible please keep the answers at a Calc AB level.)


Answer



The derivative, when it exists, is a real number (I'm restricting here to real valued functions only for simplicity). Not every real number is a fraction (i.e., $\pi$ is not a fraction), but every real number is trivially a quotient of two real numbers (namely, $x=\frac{x}{1}$). So, in which sense is the derivative a fraction? Answer: it's not. And now, in which sense is the derivative a quotient of two numbers? Ahhh, let's try to answer that then: By definition $f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$. Well, that is not a quotient of two numbers, but rather it is a limit. A limit, when it exists, is a number. This particular limit is a limit of quotients of a particular form (still, not of fractions in general, but quotients of real numbers).




The meaning of the derivative $f'(x)$ is the instantaneous rate of change of the value of $f$ at the point $x$. It is defined as a certain limit. If you now intuitively think of $h$ as an infinitesimal number (warning: infinitesimals do not exist in $\mathbb R$, but they exist in certain extensions of the reals) then you can consider the single expression $\frac{f(x+h)-f(x)}{h}$. In systems where infinitesimals really do exist one can show that this single expression, when the derivative exists, is actually infinitesimally close to the actual derivative $f'(x)$. That is, when $h\ne 0$ is infinitesimal, $f'(x)-\frac{f(x+h)-f(x)}{h}$ is itself infinitesimal. One can then compute with this expression as if it were the derivative (with some care). This can be done informally, and to some extent this is how the creators of early calculus (prior to Cauchy) argued, or it can be done rigorously using any one of a number of different techniques to introduce infinitesimals into calculus. However, getting infinitesimals into the picture comes with a price. There are logical/set-theoretical issues with such models rendering all of them not very explicit.


abstract algebra - Simple proof for independence of square roots of two primes


Consider the following problem:



Let $p$ and $q$ be two distinct prime numbers. Show that $\sqrt{p}$ and $\sqrt{q}$ are independent over $\mathbb{Q}$, which means that:


$\forall a,b \in \mathbb{Q}: a\sqrt{p} + b\sqrt{q} = 0 \Rightarrow a = b = 0$




I'm well aware how to prove this for a sequence $p_i$ of primes and thus a sequence $\sqrt{p_i}$ of prime roots using Galois theory, but I want to show some students a very elementary and short proof for just two prime roots. Those students are only at the beginning of an elementary algebra course and did not learn about anything like field traces. Is this possible?


Answer



I wanted to construct a proof of this using as elementary means as possible, avoiding if at all feasible "big gun" results such as the fundamental theorem of arithmetic, which in the following has been supplanted by repeated application of Bezout's identity:


If $\sqrt p$ and $\sqrt q$ are dependent over $\Bbb Q$, they satisfy a relation of the form


$r\sqrt p + s\sqrt q = 0, \; 0 \ne r, s \in \Bbb Q; \tag 0$


by clearing the denominators of $r$ and $s$ we find there exist $0 \ne a, b \in \Bbb Z$ with


$a\sqrt p + b\sqrt q = 0, \tag 1$


and we may clearly further assume


$\gcd(a, b) = 1; \tag 2$



from (1) we have, upon multiplication by $\sqrt p$,


$ap + b\sqrt{pq} = 0, \tag 3$


whence


$ap = -b\sqrt{pq}; \tag 4$


we square:


$a^2 p^2 = b^2 pq, \tag 5$


and divide through by $p$:


$a^2 p = b^2 q \Longrightarrow p \mid b^2 q; \tag 6$


now since $p, q \in \Bbb P$ are distinct, $p \ne q$, we have


$\gcd(p, q) = 1, \tag 7$



and thus


$\exists x, y \in \Bbb Z, \; xp + yq = 1, \tag 8$


which further implies


$xpb^2 + yqb^2 = b^2 \Longrightarrow p \mid b^2, \tag 9$


since


$p \mid pb^2, \; p \mid qb^2; \tag{10}$


now with $p \in \Bbb P$,


$p \not \mid b \Longrightarrow \gcd(p, b) = 1, \tag{11}$


whence


$\exists z, w \in \Bbb Z, \; zp + wb = 1, \tag{12}$



and so


$zpb + wb^2 = b \Longrightarrow p \mid b \Rightarrow \Leftarrow p \not \mid b, \tag{13}$


as assumed in (11); thus in fact


$p \mid b \Longrightarrow \exists c \in \Bbb Z, \; b = pc \Longrightarrow b^2 = p^2c^2, \tag{14}$


and thus (6) becomes


$a^2 p = c^2p^2 q \Longrightarrow a^2 = c^2pq \Longrightarrow p \mid a^2; \tag{15}$


now repeating in essence the argument of (11)-(13) proves that $p \mid a$, which is of course precluded by (2), lest $p \mid \gcd(a, b) = 1$.


We thus see that there can be no relation of the form (0) for $p, q \in \Bbb P$ distinct; $\sqrt p$ and $\sqrt q$ are independent over $\Bbb Q$.


The informed reader, upon careful scrutiny, will note that this demonstration also has much in common with the classic proof that $\sqrt 2 \notin \Bbb Q$, which truth be told inspired my conception of this answer.


elementary number theory - Modular Inverses



I'm doing a question that states to find the inverse of $19 \pmod {141}$.



So far this is what I have:




Since $\gcd(19,141) = 1$, an inverse exists, so we can use the Euclidean algorithm to solve for it.



$$
141 = 19\cdot 7 + 8
$$
$$
19 = 8\cdot 2 + 3
$$
$$
8 = 3\cdot 2 + 2
$$
$$
3 = 2\cdot 1 + 1
$$
$$
2 = 2\cdot 1
$$
The textbook says that the answer is 52 but I have no idea how they got the answer and am not sure if I'm on the right track. An explanation would be appreciated! Thanks!


Answer




You're on the right track; as mixedmath says, you now have to "backtrack" your steps like this:
$$\begin{eqnarray}1\;\;\;\;&&=3-2\cdot 1\\
1=&3-(8-3\cdot 2)\cdot 1&=3\cdot 3-8\\
1=&(19-8\cdot 2)\cdot 3-8&=19\cdot 3-8\cdot 7\\
1=&19\cdot 3-(141-19\cdot 7)\cdot 7&=19\cdot52-141\cdot 7\end{eqnarray}$$
This last equation, when taken modulo 141, gives $1\equiv 19\cdot 52\bmod 141$, demonstrating that 52 is the inverse of 19 modulo 141.
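The back-substitution above is exactly what the extended Euclidean algorithm automates. Here is a minimal Python sketch (the function names `egcd` and `modinv` are my own choice, not from the question):

```python
def egcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def modinv(a, n):
    """Multiplicative inverse of a modulo n (requires gcd(a, n) == 1)."""
    g, x, _ = egcd(a, n)
    if g != 1:
        raise ValueError("inverse does not exist")
    return x % n

print(modinv(19, 141))  # 52, matching the textbook answer
```

This mirrors the hand computation: the algorithm carries the Bézout coefficients forward instead of backtracking at the end.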


modular arithmetic - How to calculate $5^{2003}$ mod $13$


How to calculate $5^{2003}$ mod $13$




Using Fermat's little theorem:



$5^{13-1} \equiv 1 \pmod{13}$



$(5^{12})^{166} + 11 \pmod{13}$



$(a+b) \bmod n = (a \bmod n + b \bmod n) \bmod n$



$(1 + 11 \bmod 13) \bmod 13$



$12 \bmod 13 = 12$




Why is the answer 8?



How do we calculate this?



Thanks
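The slip in the attempt above seems to be combining the pieces additively: the exponent decomposes multiplicatively, as $2003 = 12\cdot166 + 11$ gives $5^{2003} = (5^{12})^{166}\cdot 5^{11} \equiv 1\cdot 5^{11} \pmod{13}$. A quick Python check of this (a sketch, not part of the original question):

```python
# 2003 = 12*166 + 11, and 5^12 == 1 (mod 13) by Fermat's little theorem,
# so 5^2003 == (5^12)^166 * 5^11 == 5^11 (mod 13).
assert divmod(2003, 12) == (166, 11)
print(pow(5, 11, 13))    # 8 -- the residue of 5^2003 mod 13
print(pow(5, 2003, 13))  # 8 -- direct confirmation
```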

What's the short proof that for square matrices $AB = I$ implies $BA = I$?








I'm trying to remember the one line proof that for square matrices $AB = I$ implies $BA = I$.
I think it uses only elementary matrix properties and nothing else. Does anyone know the proof?



I remember it beginning with $BAB = B$ and the result following almost immediately.
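One standard short argument (a sketch; it may not be the exact one-liner being recalled) goes through injectivity of $B$:

```latex
% B is injective: Bx = 0 \implies x = (AB)x = A(Bx) = A0 = 0.
% A square matrix that is injective is bijective, so B^{-1} exists. Then
BA = (BA)(BB^{-1}) = B(AB)B^{-1} = BB^{-1} = I.
```

The only non-elementary step is that injective implies bijective, which holds for linear maps on a finite-dimensional space by rank-nullity.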

abstract algebra - The square roots of different primes are linearly independent over the field of rationals



I need to find a way of proving that the square roots of a finite set
of different primes are linearly independent over the field of
rationals.




I've tried to solve the problem using elementary algebra
and also using the theory of field extensions, without success. To
prove linear independence of two primes is easy but then my problems
arise. I would be very thankful for an answer to this question.

Tuesday, December 19, 2017

Series with product of different power

I would like to know the analytical expression for the following series (if it exists):




$\sum_{n=0}^{\infty} \frac{1}{n!} a^n b^{n^2}$



Does anybody have a clue on how to proceed?
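I am not aware of a closed form in general, but for $|b|\le 1$ the factorial forces rapid convergence, so the partial sums are easy to evaluate numerically (a sketch; `partial_sum` is my own name). For $b=1$ the series collapses to $e^a$, which gives a sanity check:

```python
import math

def partial_sum(a, b, terms=80):
    """Partial sum of sum_{n>=0} a^n * b^(n**2) / n! (converges for |b| <= 1)."""
    return sum(a**n * b**(n * n) / math.factorial(n) for n in range(terms))

# sanity check: b = 1 reduces the series to exp(a)
print(partial_sum(2.0, 1.0))  # ~ e^2
```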

integration - Improper integral $\int_{0}^{\infty}\left(\frac{1}{\sqrt{x^2+4}}-\frac{k}{x+2}\right)\text dx$




Given improper integral $$\int \limits_{0}^{\infty}\left(\frac{1}{\sqrt{x^2+4}}-\frac{k}{x+2}\right)\text dx \, ,$$
there exists $k$ that makes this integral convergent.
Find its integration value.




Choices are $\ln 2$, $\ln 3$, $\ln 4$, and $\ln 5$.







I've written every information from the problem.
Yet I'm not sure whether I should find the integration value from the given integral or $k$.



What I've tried so far is,
$\int_{0}^{\infty} \frac{1}{\sqrt{x^2+4}} \, \text dx= \left[\sinh^{-1}{\frac{x}{2}}\right]_{0}^{\infty}$



How should I proceed?


Answer



We have that for $x\to \infty$



$$\frac{1}{\sqrt{x^2+4}}=\frac1x(1+4/x^2)^{-1/2}\sim \frac1x-\frac2{x^3}$$




and



$$\frac k {x+2}\sim \frac k x$$



therefore, in order to have convergence, we need $k=1$ so that the $\frac1x$ terms cancel out.



Then we need to solve and evaluate



$$\int_{0}^{\infty}\left (\frac{1}{\sqrt{x^2+4}}-\frac{1}{x+2}\right)\text dx=\left[\sinh^{-1}\frac x 2 -\log (x+2)\right]_{0}^{\infty}$$




and to evaluate the value at $\infty$ recall that



$$\sinh^{-1}\frac x 2=\log \left(\frac x 2 + \sqrt{\frac{x^2}{4}+1}\right) \sim \log x$$
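so the bracket vanishes at $\infty$ and equals $-\ln 2$ at $0$, giving the value $\ln 2$. A numerical sanity check of the antiderivative (a sketch, evaluating at a large cutoff rather than a true limit):

```python
import math

def F(x):
    """Antiderivative with k = 1: asinh(x/2) - log(x + 2)."""
    return math.asinh(x / 2) - math.log(x + 2)

X = 1e8                 # stand-in for the limit x -> infinity
value = F(X) - F(0)     # F(X) -> 0 as X grows, and F(0) = -log 2
print(value, math.log(2))  # both ~ 0.6931
```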


Monday, December 18, 2017

Find the sum of the infinite series $\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot4\cdot5\cdot6}+\cdots$


Find the sum of the series $\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot4\cdot5\cdot6}+...$. This type of questions generally require a trick or something and i am not able to figure that out. My guess is that it has something to do with exponential series or binomial series. Any help?


Answer



Sorry guys, got it.
$\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot4\cdot5\cdot6}+...=\frac{1}{2}\cdot\frac{1}{1!}+\frac{1}{2^2}\cdot\frac{1}{2!}+\frac{1}{2^3}\cdot\frac{1}{3!}+... = e^\frac{1}{2}-1.$
The first equality holds after cancelling the common terms in the numerator and denominator
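The cancellation is $\frac{1\cdot3\cdots(2n-1)}{(2n)!}=\frac{1}{2^n\,n!}$, since $(2n)! = \big(2\cdot4\cdots2n\big)\big(1\cdot3\cdots(2n-1)\big)$ and $2\cdot4\cdots2n = 2^n n!$. A quick numerical check of the stated sum (a sketch):

```python
import math

def term(n):
    """n-th term of the series: (1*3*...*(2n-1)) / (2n)!"""
    odd_product = math.prod(range(1, 2 * n, 2))
    return odd_product / math.factorial(2 * n)

S = sum(term(n) for n in range(1, 30))
print(S, math.exp(0.5) - 1)  # both ~ 0.6487
```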


elementary number theory - Modular Arithmetic in AMC 2010 10A #24



Link to the problem/solution:
https://artofproblemsolving.com/wiki/index.php/2010_AMC_10A_Problems/Problem_24



I'm trying to understand the following step in the solution to the aforementioned AMC problem.



Given:





  1. $M = \cfrac{90!}{5^{21}}$

  2. $N = \cfrac{90!}{10^{21}} = \cfrac{M}{2^{21}}$

  3. $M \equiv 24 \bmod 25$

  4. $2^{21} \equiv 2 \bmod 25$



Then:



$N \bmod 25 = \cfrac{24}{2} = 12$




In the above approach, we write N as a fraction, find the modulo-25 residue of the numerator and denominator, and divide the residues to find the modulo-25 residue of N.



I recently finished working through the Art of Problem Solving textbook "Introduction to Number Theory", so I'm familiar with the basics of solving modular congruence equations and modular arithmetic, but I was confused by this step. I was wondering: under what conditions can we divide modular congruent statements, and why does this step work? My understanding was that addition, multiplication, and exponentiation work with modular congruent statements, but I never saw an example in the book of dividing statements as in the above case.



For instance:



$98 \equiv 23 \bmod 25$



$49 \equiv 24 \bmod 25$




$2 \equiv ? \bmod 25$



In this example, I don't think division produces a meaningful result (perhaps it does, but I don't know how to evaluate 23/24?). Why does division fail here, but work above? (EDIT: -2 / -1 = 2, so I guess it works in this case too!)



My suspicion is that I'm missing something really basic here, perhaps even misinterpreting what the solution is doing. If you have any book recommendations that would help me improve and solidify my understanding of modular arithmetic/modular congruence, I'd greatly appreciate it!


Answer



In modulo arithmetic, "division" means multiplying by the multiplicative inverse, e.g., $b = \frac{1}{a}$ means the value which when multiplied by $a$ gives $1$ modulo the value, e.g., $ba \equiv 1 \pmod n$. Note you may sometimes see $b = a^{-1}$ instead to avoid using explicit "division". This works, and gives a unique value, in any cases where the value you're dividing and the modulus are relatively prime.



More generally, it'll work in all cases of $\frac{c}{a} \equiv e \pmod n$ where $d = \gcd(a,n)$ and $d \mid c$ since this gcd value "cancels" in the division. Thus, the resulting equivalent modulo equation of $\frac{c'}{a'} \equiv e \pmod n$, where $c' = \frac{c}{d}$ and $a' = \frac{a}{d}$ has $\gcd(a',n) = 1$, has a solution. However, as Bill Dubuque's comment indicates, this assumes you're doing integer division to the extent of removing the common factor of $d$. Note that $a^{-1}$ doesn't exist modulo $n$ in this case. However, $(a')^{-1}$ does exist modulo $\frac{n}{d}$, so a possible interpretation would be $\frac{c'}{a'} \equiv c'(a')^{-1} \equiv e \pmod{\frac{n}{d}}$.




As for why the multiplicative inverse $b = a^{-1}$ exists modulo $n$ if $\gcd(a,n) = 1$, Bézout's identity states that in such cases there exist integers $x$ and $y$ such that



$$ax + ny = 1 \tag{1}\label{eq1}$$



As such $ax \equiv 1 \pmod n$ so $x \equiv a^{-1} = b \pmod n$. This value must be unique, modulo $n$, because if there was another value $x'$ such that $xa \equiv x'a \equiv 1 \pmod n$, then $(x - x')a \equiv 0 \pmod n$. Since $\gcd(a,n) = 1$, this means that $x - x' \equiv 0 \pmod n \; \iff \; x \equiv x' \pmod n$.



Bézout's identity also shows that if $a$ and $n$ are not relatively prime, e.g., $\gcd(a,n) = d \gt 1$, then \eqref{eq1} becomes



$$ax + ny = d \tag{2}\label{eq2}$$




since integers of the form $ax + ny$ are always multiples of $d$, the expression can't be congruent to $1$, and thus $a$ would not have a multiplicative inverse.
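Tying this back to the AMC step: since $\gcd(2,25)=1$, dividing by $2$ means multiplying by $2^{-1} \equiv 13 \pmod{25}$, and indeed $24 \cdot 13 \equiv 12 \pmod{25}$. A one-line check in Python (3.8+, where three-argument `pow` accepts a negative exponent for modular inverses):

```python
inv2 = pow(2, -1, 25)    # multiplicative inverse of 2 mod 25
print(inv2)              # 13, since 2*13 = 26 == 1 (mod 25)
print((24 * inv2) % 25)  # 12, the residue of N = M / 2^21 mod 25
```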


elementary set theory - Prove that the cardinality of the power set of $X$ is $2^n$, $\left | \mathcal{P}(X) \right | = 2^n$, using the principle of mathematical induction



I have this,




Having a set $X$ with $n$ objects. The power set of $X$ has $2^n$ elements, i.e. the number of subsets of $X$ is $2^n$.



We can consider the following statement:
$$P(n) : \left | \mathcal{P}(X) \right | = 2^{n}$$



i) $P(0)$ is true; in fact $\left | \mathcal{P}(X) \right | = 1$, i.e. $\mathcal{P}(X)$ contains the empty set as its unique element.



ii) Assuming $P(n-1)$ is true, we have to prove $P(n)$. We can consider the set $X$ with $n$ objects as the union of two sets, $Y = \left \{ a_1, a_2, \ldots, a_{n-1} \right \}$ and $\left \{ a_{n}\right \}$; hence,
$$X = \left \{ a_1, a_2, \ldots, a_{n-1} \right \} \cup \left \{ a_{n}\right \}$$



By the inductive hypothesis the number of subsets of $Y$ is $2^{n-1}$, so that part is settled, but I have problems proving the count is $2^n$.




Please, can you help me? Many thanks!


Answer



All the elements of $P(X)$ either contain $a_n$ or they do not.



1) There are $2^{n-1}$ elements of $P(X)$ that do not contain $a_n$ by the induction hypothesis.



2) Take any of the elements that do not contain $a_n$ and add $a_n$ to it. You get a new unique subset. There are $2^{n-1}$ elements that can be created this way, again by the induction hypothesis. These are all the elements of $P(X)$ that contain $a_n$.



In total there are thus $2^{n-1}+2^{n-1}=2^n$ elements in $P(X)$, which proves the statement.
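The two-bucket count above can be verified directly by enumerating subsets (a sketch using `itertools.combinations`; the element names are mine):

```python
from itertools import combinations

def power_set(xs):
    """All subsets of xs, one per choice of size r = 0..len(xs)."""
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

X = ["a1", "a2", "a3", "a4"]
P = power_set(X)
print(len(P))  # 2^4 = 16

# split by whether a subset contains a_n, mirroring the induction step
with_an = [s for s in P if "a4" in s]
without_an = [s for s in P if "a4" not in s]
print(len(with_an), len(without_an))  # 8 8
```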


matrices - Definition of rank of a matrix

Can I define the rank of a matrix $A$ as the number of non-zero rows in $\mathrm{RREF}(A)$? Here's my reasoning: let the number of zero rows be $x$.
Then these rows come from the linearly dependent rows of $A$, and $x = \dim(\text{left null space}) = m - r$.
So the number of non-zero rows is equal to $m - x = m - (m-r) = r$.
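A small exact check of this definition (a self-contained Gauss-Jordan elimination over rationals; the function names are my own):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over exact rationals (Gauss-Jordan)."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivot_row = 0
    for col in range(n):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, m) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        pivot = M[pivot_row][col]
        M[pivot_row] = [x / pivot for x in M[pivot_row]]
        for r in range(m):
            if r != pivot_row and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == m:
            break
    return M

def rank_via_rref(rows):
    """Count the nonzero rows of RREF(A) -- the proposed definition of rank."""
    return sum(1 for row in rref(rows) if any(x != 0 for x in row))

# second row is twice the first, so the rank is 2
print(rank_via_rref([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))  # 2
```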

Sunday, December 17, 2017

abstract algebra - Factor Rings over Finite Fields



Given a polynomial ring over a field $F[x]$, I can factor, for example, by the ideal generated by an irreducible polynomial $ax^2 + bx + c$: $F[x]/\left\langle ax^2+bx+c \right\rangle$, and guarantee that this factor ring is also a field.



My question concerns the structure of this factor ring. For example, if I consider the factor ring $Z_p[x] / \left\langle ax^2+bx+c \right\rangle$ for some irreducible polynomial $ax^2 + bx + c$, I can guarantee, for example, that this field has $p^2$ elements.



I am unsure why this is the case. My understanding is that the coset representatives of this factor ring are the possible remainders upon division by $ax^2 + bx + c$. Is this the right idea, and how would I know that two different remainders aren't in the same coset? Thanks.


Answer




Indeed the elements of the factor ring $\Bbb{F}_p[x]/\langle ax^2+bx+c\rangle$ can be represented by the remainders upon division by $ax^2+bx+c$. This is true because we can divide polynomials in $\Bbb{F}_p[x]$ by $ax^2+bx+c$ with remainder. What this means is that




For every polynomial $f\in\Bbb{F}_p[x]$ there exist unique $q,r\in\Bbb{F}_p[x]$ with $\deg r<2$ such that
$$f=q(ax^2+bx+c)+r.\tag{1}$$




This equality shows that $f$ and $r$ are in the same coset of $\langle ax^2+bx+c\rangle$, and hence they are mapped to the same element of the factor ring $\Bbb{F}_p[x]/\langle ax^2+bx+c\rangle$. Hence the image of
$f$ in the factor ring is represented by $r$, and so every element of the factor ring is represented by a linear polynomial.




To see that no two linear polynomials represent the same element of $\Bbb{F}_p[x]/\langle ax^2+bx+c\rangle$, it suffices to note that the remainder $r$ in $(1)$ is unique for every $f\in\Bbb{F}_p[x]$, meaning in particular that every linear polynomial is represented only by itself.



Alternatively, if two linear polynomials $r$ and $r'$ represent the same coset of $\langle ax^2+bx+c\rangle$ in the factor ring, then $r-r'$ is a multiple of $ax^2+bx+c$. Because $\deg(r-r')<\deg(ax^2+bx+c)$, it follows that $r-r'=0$.
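A concrete instance (a sketch, with the specific choices $p=3$ and the irreducible polynomial $x^2+1$, which are mine): representing each coset by its linear remainder $c_0 + c_1 x$ gives exactly $p^2$ elements, and every nonzero one is invertible, so the factor ring is a field:

```python
p = 3  # Z_3; x^2 + 1 is irreducible mod 3 (0, 1, 2 give 1, 2, 2 -- no roots)

def mul(a, b):
    """Multiply cosets represented as (c0, c1) = c0 + c1*x, reducing x^2 to -1."""
    return ((a[0] * b[0] - a[1] * b[1]) % p,
            (a[0] * b[1] + a[1] * b[0]) % p)

elements = [(c0, c1) for c0 in range(p) for c1 in range(p)]
print(len(elements))  # p^2 = 9 distinct linear remainders

# field check: every nonzero coset has a multiplicative inverse
assert all(any(mul(a, b) == (1, 0) for b in elements)
           for a in elements if a != (0, 0))
```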


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...