Tuesday, December 31, 2019

Counting the number of zeros

I am stuck at the question:



How many zeros are there when the numbers between 1 and 100, including 1 and 100, are multiplied together? Devise some technique for this.



Regards
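A possible reading (this is an assumption, since the question is terse): the product in question is $100! = 1 \cdot 2 \cdots 100$ and "zeros" means trailing zeros. Each trailing zero comes from a factor $10 = 2\cdot 5$, and factors of $2$ are more plentiful than factors of $5$, so it suffices to count factors of $5$ via Legendre's formula:

$$\left\lfloor \frac{100}{5} \right\rfloor + \left\lfloor \frac{100}{25} \right\rfloor = 20 + 4 = 24,$$

so under that reading $100!$ ends in $24$ zeros.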

algebra precalculus - A cubed trigonometric identity?

Could somebody please show why the following is a trigonometric identity?



$$\dfrac{\sin^3 a - \cos^3a}{\sin a - \cos a} = 1 + \sin a \cos a$$



This problem appears on page $48$ of Gelfand's and Saul's "Trigonometry". (It's not homework.)




It is probably the fact that we are dealing with trig ratios cubed that is throwing me off.



A question with squared trig ratios usually gives me no troubles.



I keep running into a mess. For example: I've multiplied the numerator and denominator by $\sin a + \cos a$ with no luck; and likewise, by $\sin a - \cos a$.
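One way to avoid the mess (sketched here as a hint; the post has no posted answer): factor the numerator as a difference of cubes, $p^3-q^3=(p-q)(p^2+pq+q^2)$, and use $\sin^2 a+\cos^2 a=1$:

$$\dfrac{\sin^3 a - \cos^3 a}{\sin a - \cos a} = \sin^2 a + \sin a\cos a + \cos^2 a = 1 + \sin a \cos a.$$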

real analysis - Prove that the graph of a continuous function on $(-\infty, \infty)$ is completely determined once one knows a certain countable set of points on it.



A question from Introduction to Analysis by Arthur Mattuck:




Prove that the graph of a continuous function on $(-\infty, \infty)$ is completely determined once one knows a certain countable set of points on it.




I have no idea.



Answer



Hint: $\mathbb Q$ is countable and dense in $\mathbb R$.
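Spelling the hint out slightly (a sketch): if $f$ and $g$ are continuous and agree on $\mathbb Q$, then for any $x\in\mathbb R$ choose rationals $q_n \to x$, so that

$$f(x) = \lim_{n\to\infty} f(q_n) = \lim_{n\to\infty} g(q_n) = g(x).$$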


Monday, December 30, 2019

math history - Why were Lie algebras called infinitesimal groups?



Why were Lie algebras called infinitesimal groups in the past? And why did mathematicians begin to avoid calling them infinitesimal groups and switch to calling them Lie algebras?


Answer



I don't know anything about the actual history, but you might want to look at this blog post by Terry Tao. Terry defines a "local group" to be something that looks like a neighborhood of the identity in a topological group and a "local Lie group" to be something that looks like a neighborhood of the identity in a Lie group.




He then defines a "group germ" to be an equivalence class of local groups, where the equivalence relation should be thought of as "become the same under passing to a sufficiently small open neighborhood of the identity". Thus, the group germ remembers all the information which can be seen on an arbitrarily small neighborhood of the identity. I think this is a very good modern rigorous analogue of the notion of an infinitesimal group. Terry then proves Lie's Third Theorem (Theorem 2 in his notes): Germs of local Lie groups are in bijection with Lie algebras.



If you prefer algebra to analysis, the corresponding idea is a formal group. Again, it is a true but nontrivial theorem that formal groups in characteristic zero are equivalent to Lie algebras.


linear algebra - To find the eigenvalues of a matrix with all entries the same


How many distinct eigenvalues does the following matrix have?


$$ \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ \end{bmatrix} $$



I was wondering whether there is anything special about the eigenvalues of matrices like this.


I hope I won't have to compute the determinant of this $4\times4$ matrix.


Answer



Note that $$\left(\begin{matrix}1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\end{matrix}\right)\left(\begin{matrix} a\\ b\\ c\\ d\end{matrix}\right)=\left(\begin{matrix} a+b+c+d\\ a+b+c+d\\ a+b+c+d\\ a+b+c+d\end{matrix}\right)$$ If $a=b=c=d$, then $$\left(\begin{matrix}1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\end{matrix}\right)\left(\begin{matrix} a\\ a\\ a\\ a\end{matrix}\right)=4\left(\begin{matrix} a\\ a\\ a\\ a\end{matrix}\right)$$ Hence $4$ is an eigenvalue.


Also, since all the columns of the matrix are the same, the rank of the matrix is $1$, so $4$ is the only nonzero eigenvalue. $0$ is the other eigenvalue, with eigenvector, for example, $(a~-a~a~-a)^t$.
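A quick numerical sanity check, sketched in base R:

eigen(matrix(1, 4, 4))$values  # 4 0 0 0, up to rounding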


trigonometry - $\sum \cos$ when angles are in arithmetic progression




Prove $$\cos(\alpha) + \cos(\alpha + \beta) + \cos(\alpha + 2\beta) + \dots + \cos[\alpha + (n-1)\beta] = \frac{\cos(\alpha + \frac{n-1}{2}\beta) \cdot \sin\frac{n\beta}{2}}{\sin\frac{\beta}{2}} $$
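One standard approach (offered as a hint, since the post has no posted answer): multiply the sum by $2\sin\frac{\beta}{2}$ and use $2\cos\theta\sin\varphi=\sin(\theta+\varphi)-\sin(\theta-\varphi)$, which gives

$$2\sin\tfrac{\beta}{2}\,\cos(\alpha+k\beta)=\sin\left(\alpha+(2k+1)\tfrac{\beta}{2}\right)-\sin\left(\alpha+(2k-1)\tfrac{\beta}{2}\right),$$

so the sum telescopes. (The closely related sine version of this sum — and, via complex exponentials, the cosine version — is proved in full in the December 25 post later in this collection.)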

Sunday, December 29, 2019

functional analysis - $f_n \rightarrow 0$ in $L^1$ $\implies \sqrt{f_n} \rightarrow 0$ also?



Let $(X,\Sigma,\mu)$ be a finite measure space, and let $\{f_n : n \in \mathbb{N} \}$ be a sequence of non-negative measurable functions converging in the $L^1$ sense to the zero function. Show that the sequence $\{\sqrt{f_n}:n \in \mathbb{N} \}$ also converges in the $L^1$ sense to the zero function.



So I have to somehow show that




$$
\lim_{n \to \infty}\int_X\lvert\sqrt{f_n(x)}\rvert\;\mathrm{d}\mu(x) = 0
$$



If I'm honest I don't really know where to start. I think it's an easy question, but I'm new to this stuff. Any help appreciated!


Answer



You have the right choice of $p = q = 2$. However, choose $u = 1$, $v = \sqrt{f_n}$, then



$$\int_X \left|\sqrt{f_n}\right| \ d\mu = \left\| 1\cdot\sqrt{f_n} \right\|_1 \ \leq \ \left\| 1 \right\|_2 \ \ \left\| \sqrt{f_n}\right\|_2 = \ ...$$
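For completeness, and keeping in mind that the original answer deliberately stops at the ellipsis, the chain finishes as

$$\left\| 1 \right\|_2 \ \left\| \sqrt{f_n}\right\|_2 = \mu(X)^{1/2}\left(\int_X f_n \, d\mu\right)^{1/2} \longrightarrow 0,$$

using that $\mu(X)<\infty$ and $f_n\to 0$ in $L^1$.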


linear algebra - How to calculate this determinant?




How to calculate this determinant?




$$A=\begin{bmatrix}n-1&k&k&\cdots&k\\k&n-1&k&\cdots&k\\\vdots&\vdots&\vdots&\ddots&\vdots\\k&k&k&\cdots&n-1\end{bmatrix}_{n\times n}$$





where $n,k\in \Bbb N$ are fixed.



I tried for $n=3$ and got the characteristic polynomial as $(x-2+k)^2(x-2-2k).$



How to find it for general $n\in \Bbb N$?


Answer



Here I've followed the same initial step as K. Miller. Instead of using a determinant identity, I examine the eigenvalues of $A$ and consider their product.



If $J$ denotes the $n\times n$ matrix of all $1$'s, then the eigenvalues of $J$ are $0$ with multiplicity $n-1$ and $n$ with multiplicity $1$. This can be seen by noting that $J$ has an $(n-1)$-dimensional kernel and trace $n$.




Your matrix $A$ is exactly $kJ+(n-k-1)I$ where $I$ denotes the $n\times n$ identity matrix. The eigenvalues of $A$ are therefore $n-k-1$ with multiplicity $n-1$ and $nk+n-k-1$ with multiplicity $1$. The determinant of $A$ is then $(nk+n-k-1)(n-k-1)^{n-1}$.
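A quick numerical check of the determinant formula, sketched in R; for $n=5$, $k=2$ the formula predicts $(5\cdot2+5-2-1)(5-2-1)^{4} = 12\cdot 16 = 192$:

n <- 5; k <- 2
A <- matrix(k, n, n)  # all entries equal to k ...
diag(A) <- n - 1      # ... except n-1 on the diagonal
det(A)                # 192, matching (n*k + n - k - 1) * (n - k - 1)^(n - 1)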


measure theory - $\lim_{j\rightarrow \infty}\mu(A_j)=0 \implies \lim_{j\rightarrow \infty}\int_{A_j} u\, d\mu=0,\ \forall u\in \mathcal L^1(\mu)$




Let $(X,\mathcal A,\mu)$ be a $\sigma$-finite measure space.



Suppose $(A_j)_{j\in\mathbb N}\subset \mathcal A$ satisfies
$\mu(A_j)\rightarrow 0$ as $j\rightarrow \infty$. Show that
$$\lim_{j\rightarrow \infty}\int_{A_j} u d\mu=0, \forall u\in \mathcal L^1(\mu)$$





If I knew that the sequence $(A_j)_{j\in\mathbb N}\subset \mathcal A$ was increasing, then I would have used the fact that
$\lim_{j\rightarrow \infty}\mu(A_j)=0$ and the continuity of the measure to see that
$\mu( \lim_{j\rightarrow \infty} A_j)=0$, hence
$$\lim_{j\rightarrow \infty} A_j=N\in\mathcal N_{\mu}$$



Then, using the fact that $u$ is integrable, I could have applied the Beppo Levi theorem to
$$\lim_{j\rightarrow \infty} \int 1_{A_j}u^{\pm}d\mu$$ leading to



$$\int 1_{N}u^{\pm}d\mu$$ which is the integral of $u$ over a $\mu$-null set and hence equal $0$.




But the problem is that I don't know whether the sequence $(A_j)_{j\in\mathbb N}\subset \mathcal A$ is decreasing, increasing or none.



I was advised to use Vitali's convergence theorem, but I don't really see how:



Let $f_j:=u1_{A_j}$, then it's a sequence of measurable functions,



We see that for any $\epsilon>0$
$$\mu(\{|f_j-0|>\epsilon\}\cap A_j)\le\mu(A_j)\rightarrow 0$$
but this does not ensure that the sequence $(f_j)_j$ converges in measure to $0$ basically because to make that assertion we need




$$\mu(\{|f_j-0|>\epsilon\}\cap A)\rightarrow 0, \color{red}{ \forall A\in \mathcal A, \mu(A)<\infty}$$



How can I solve this?



Thanks in advance.


Answer



Let $u_N=u\mathbf 1_{\{\left\lvert u\right\rvert\leqslant N\}}$; then
$$
\left\lvert\int_{A_j}u \right\rvert\leqslant \left\lvert\int_{A_j}u_N \right\rvert+\left\lvert\int_{A_j}(u-u_N) \right\rvert\leqslant \mu(A_j)N+\int_X\left\lvert u-u_N\right\rvert.
$$
Given $\epsilon>0$, first choose $N$ with $\int_X\left\lvert u-u_N\right\rvert<\epsilon/2$ (possible by dominated convergence, since $u\in\mathcal L^1$), then let $j\to\infty$, so that $\mu(A_j)N<\epsilon/2$ eventually.


sequences and series - Proof for this identity: $\sum\limits_{k=0}^\infty(\zeta(2k+3)-1) = \frac{1}{4}$

$$ \sum_{k=0}^\infty (\zeta(2k+3) -1) = \frac{1}{4} $$ I found this nice identity and want to prove it, but I don't know how. My first thought was that I could create a geometric series with the same limit, like $$\sum_{k=0}^\infty \frac{1}{8} \cdot \frac{1}{2^k} = \frac{1}{4},$$ but I have no idea how to write the first one in a way that makes these the same; they are obviously equal only when infinitely many terms are summed. Do you have ideas how to prove this identity, or do you know where a proof can be found? Thank you for your answers!
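One route (a sketch, not from the original post): expand $\zeta(2k+3)-1=\sum_{n\ge2}n^{-(2k+3)}$, swap the order of summation, and sum the geometric series in $k$:

$$\sum_{k=0}^\infty (\zeta(2k+3)-1) = \sum_{n=2}^\infty \frac{n^{-3}}{1-n^{-2}} = \sum_{n=2}^\infty \frac{1}{n(n^2-1)} = \sum_{n=2}^\infty \frac{1}{(n-1)n(n+1)}.$$

The last series telescopes via the partial fractions $\frac{1}{(n-1)n(n+1)}=\frac12\left(\frac{1}{(n-1)n}-\frac{1}{n(n+1)}\right)$ to $\frac12\cdot\frac{1}{1\cdot2}=\frac14$.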

combinatorics - Binomial coefficients identity: $\sum i \binom{n-i}{k-1}=\binom{n+1}{k+1}$




I am trying to prove



$
\sum_{i=1}^{n-k+1} i \binom{n-i}{k-1}=\binom{n+1}{k+1}
$



Whichever numbers for $k,n$ I try, the two sides are equal, but when I try to use induction on $n$, I fail to prove the induction step:



Assume the equation holds for $n$. Now by Pascal's recursion formula,




$
\binom{n+2}{k+1}=\binom{n+1}{k+1} + \binom{n+1}{k}\\
=\sum_{i=1}^{n-k+1} i \binom{n-i}{k-1}+\binom{n+1}{k},
$



by induction assumption. In order to complete the proof, I would need to show



$
(n-k+2) \binom{n-(n-k+2)}{k-1} = \binom{n+1}{k}
$




but the left-hand side is zero. What am I doing wrong?



EDIT:



There were links to similar questions when my question was marked as duplicate. However, these links are now gone, so I add them here as they were useful to me:





(I did search, but did not find these.)



Answer



No, to complete the proof along these lines you need to show that



$$\sum_{i=1}^{n-k+2}i\binom{n+1-i}{k-1}=\sum_{i=1}^{n-k+1}i\binom{n-i}{k-1}+\binom{n+1}k\;;\tag{1}$$



you forgot that the upper number in the binomial coefficient changes when you go from $n$ to $n+1$.



You can rewrite $(1)$ as



$$(n-k+2)\binom{k-1}{k-1}+\sum_{i=1}^{n-k+1}i\left(\binom{n+1-i}{k-1}-\binom{n-i}{k-1}\right)=\binom{n+1}k\;,$$




whose lefthand side reduces to



$$n-k+2+\sum_{i=1}^{n-k+1}i\binom{n-i}{k-2}$$



by Pascal’s identity. This in turn can be rewritten as



$$n-k+2+\sum_{i=1}^{n-k+2}i\binom{n-i}{k-2}-(n-k+2)\binom{k-2}{k-2}=\sum_{i=1}^{n-k+2}i\binom{n-i}{k-2}\;.$$



If you took the right induction hypothesis — namely, that the equation holds for $n$ and all $k$ — then your induction hypothesis allows you to reduce this last summation to a single binomial coefficient, which proves to be the one that you want.
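Explicitly (a small supplement to the answer): the induction hypothesis for $n$ with $k-1$ in place of $k$ reads

$$\sum_{i=1}^{n-(k-1)+1} i\binom{n-i}{(k-1)-1}=\binom{n+1}{(k-1)+1}=\binom{n+1}{k},$$

and its left-hand side is exactly the last summation above, since $n-(k-1)+1=n-k+2$.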



Is there a formula for finding the nth number in a sequence with a changing pattern


If a sequence has a pattern where $+2$ is the step at the start, but $1$ is added to the step each time, like the sequence below, is there a formula to find the 125th number in this sequence? It would also need to work with similar patterns, for example if the pattern started as $+4$ and $5$ was added each time.



2, 4, 7, 11, 16, 22 ...




Answer



Let $a_1 = 2$. From the way you defined the sequence you can see that $a_n - a_{n-1} = n$. We can use this to find \begin{align} a_n &= a_{n-1} + n\\ &= a_{n-2} + (n-1) + n\\ &= a_{n-3} + (n-2) + (n-1) + n\\ &\vdots \\ &= a_1 + 2 + \cdots + (n - 2) + (n-1) + n \end{align} which is just the sum of the natural numbers except $1$ ($1 + 2 + \cdots + n = \frac{n(n+1)}{2}$). So \begin{equation} a_n = a_1 + \frac{n(n+1)}{2} - 1 = 2 - 1 + \frac{n(n+1)}{2} = \frac{n^2 + n + 2}{2} \end{equation} where $a_1$ is the starting number (in this case 2). This sequence is a quadratic sequence, as it exhibits constant second differences (the difference of the differences is constant).
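Since the question also asks for the $125$th term and for other starting steps, here is a small generalization, sketched in R (the function name nth_term and its parameters are ad hoc, not from the original answer): s is the first term, p the first difference, and q the amount added to the difference each time.

nth_term <- function(n, s = 2, p = 2, q = 1) {
  # s plus the sum of the first (n-1) differences p, p+q, p+2q, ...
  s + (n - 1) * p + q * (n - 1) * (n - 2) / 2
}
nth_term(6)                       # 22, the sixth term of 2, 4, 7, 11, 16, 22
nth_term(125)                     # 7876, matching (n^2 + n + 2)/2 at n = 125
nth_term(3, s = 4, p = 4, q = 5)  # 17: the pattern 4, 8, 17, ... (+4, then +9, ...)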


Saturday, December 28, 2019

cardinals - Why are the cardinalities of $\mathbb{R}^n$ and $\mathbb{R}$ the same?

As I study the first part of abstract algebra, I have a question: why is $\left\vert{\mathbb R}\right\vert = \left\vert{\mathbb R^2}\right\vert$?


Moreover, I know the fact that $\left\vert{\mathbb R}\right\vert = \left\vert{\mathbb R^n}\right\vert$.


Does the equality apply to any infinite set? How can I prove these claims?

calculus - Is there a pattern to expression for the nested sums of the first $n$ terms of an expression?




Apologies for the confusing title but I couldn't think of a better way to phrase it. What I'm talking about is this:



$$ \sum_{i=1}^n \;i = \frac{1}{2}n \left(n+1\right)$$

$$ \sum_{i=1}^n \; \frac{1}{2}i\left(i+1\right) = \frac{1}{6}n\left(n+1\right)\left(n+2\right) $$
$$ \sum_{i=1}^n \; \frac{1}{6}i\left(i+1\right)\left(i+2\right) = \frac{1}{24}n\left(n+1\right)\left(n+2\right)\left(n+3\right) $$



We see that this seems to indicate:



$$ \sum_{n_m=1}^{n}\sum_{n_{m-1}=1}^{n_m}\ldots \sum_{n_1=1}^{n_2} \; n_1 = \frac{1}{(m+1)!}\prod_{k = 0}^{m}(n+k) $$



Is this a known result? If so how would you go about proving it? I have tried a few inductive arguments but because I couldn't express the intermediate expressions nicely, I didn't really get anywhere.


Answer



You should have

$$\sum_{i=1}^{n} 1 = n$$
$$\sum_{i=1}^{n} i = \frac{1}{2} n(n+1)$$
$$\sum_{i=1}^{n} \frac{1}{2} i(i+1) = \frac{1}{6} n(n+1)(n+2)$$
$$\sum_{i=1}^{n} \frac{1}{6} i(i+1)(i+2) = \frac{1}{24} n(n+1)(n+2)(n+3)$$



In particular, the first sum of yours was wrong and the things you were adding should depend on $i$, not on $n$.



But, to answer the question, yes! This is a known result, and actually follows quite nicely from properties of Pascal's triangle. Look at the first few diagonals of the triangle and see how they match up to your sums, and see if you can explain why there's such a relation, and why the sums here can be written in terms of binomial coefficients. Then, the hockey-stick identity proves your idea nicely.
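In symbols (added for reference): the summands are binomial coefficients, $\frac{1}{m!}\prod_{k=0}^{m-1}(i+k)=\binom{i+m-1}{m}$, and the hockey-stick identity

$$\sum_{j=r}^{n}\binom{j}{r}=\binom{n+1}{r+1}$$

collapses each nested sum into the next diagonal of Pascal's triangle.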


elementary number theory - Solving a Linear Congruence




I've been trying to solve the following linear congruence without much success:
$$19\equiv 21x\pmod{26}$$



If anyone could point me to the solution I'd be grateful, thanks in advance


Answer



Hint: $26 = 2 \cdot 13$ and the Chinese remainder theorem. Modulo $2$ we have to solve $1 \equiv x \pmod 2$, that is $x = 2k + 1$ for some $k$; now solve $19 \equiv 42k + 21 \pmod{13}$.
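Carrying the hint through (a sketch of the remaining arithmetic): $42k+21\equiv 3k+8\pmod{13}$ and $19\equiv 6\pmod{13}$, so $3k\equiv -2\equiv 11\pmod{13}$; multiplying by $9$, the inverse of $3$ modulo $13$, gives $k\equiv 99\equiv 8\pmod{13}$. Hence $x=2k+1\equiv 17\pmod{26}$, and indeed $21\cdot 17=357=13\cdot 26+19$.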


real analysis - Must an unbounded set in a metric space be infinite?



I'm new to real analysis and topology. Recently, I've been reading baby Rudin. A question occurred to me: does a set being unbounded imply that the set is infinite in a metric space? I think the statement is right, but I can't prove it.



Please give a rigorous proof. Thanks in advance.


Answer



Yes (if metric spaces are assumed to be non-empty).




Let $\langle X,d\rangle$ denote a metric space.



Suppose that a set $S\subseteq X$ is finite and let $x\in X$.



If we take $r>\max\{d(x,y)\mid y\in S\}$ then $S\subseteq B(x,r)=\{y\in X\mid d(x,y)<r\}$, i.e. $S$ is bounded.

That means that unbounded sets cannot be finite, hence are infinite.







Note: this answer presumes that metric spaces are not empty. If $X=\varnothing$ then the finite set $\varnothing$ is unbounded, since $X=\varnothing$ is not contained in any ball centered at some $x\in X$. This is because balls like that simply do not exist in that situation.


elementary number theory - Proof that $2^{222}-1$ is divisible by 3



How can I prove that $2^{222}-1$ is divisible by three?
I already have decomposed the following one: $(2^{111}-1)(2^{111}+1)$ and I understand I should just prove that $(2^{111}-1)$ is divisible by three or that $(2^{111}+1)$ is divisible by three. But how can I solve this problem?


Answer



The routine way is to invoke Fermat's little theorem: $$a^{p-1}-1\equiv 0\,(\text{mod}\,p)$$ for $\mathrm{gcd}(a,p)=1$.
Plug in $a=2^{111},p=3$.
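Alternatively, a one-line variant: since $2\equiv -1\pmod 3$,

$$2^{222}-1\equiv(-1)^{222}-1=1-1=0\pmod 3.$$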


divisibility - $N$ is a number in base 9. Find the remainder when $N$ is divided by 8 (in base 10)?

$N$ is a number in base 9. Find the remainder when $N$ is divided by 8 (in base 10). And $N$ can be very large, say $N=32323232\ldots$ (50 digits).



This can be done by converting $N$ to base 10, but that is time consuming.




What would be the fastest approach, given that no proof is required? Like in a rapid-fire round: just the answer.



One solution is:
In base 9, the divisibility check for 8 is the sum of the digits (digit sum). Digit sum of $N$ = 25 (i.e. from $25\cdot3+25\cdot2$). Converting 25 to base 10 gives 23. Thus the question changes to $(23\cdot2+23\cdot2)$ in base 10, i.e. 115 in base 10. Converting it back to base 9 gives 137. Now the sum of digits in base 9 is $1+3+7$, i.e. 12. So the answer is $(1+2 \bmod 8)=3$.



How does one arrive at the rule in bold? Does a general rule exist for such cases? Please provide a proof of the above.



Is there any faster way to solve this?
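A sketch of the rule being asked about: if $N=\sum_i d_i\,9^i$ in base $9$, then since $9\equiv 1\pmod 8$,

$$N\equiv\sum_i d_i\pmod 8,$$

so $N$ and its base-$9$ digit sum leave the same remainder on division by $8$, exactly as casting out nines works in base $10$; the trick can be iterated on the digit sum itself. For the alternating example the digit sum is $25\cdot3+25\cdot2=125\equiv 5\pmod 8$.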

Friday, December 27, 2019

Why is zero the only infinitesimal real number?


I am currently reading Elementary Calculus: An Infinitesimal Approach by H. Jerome Keisler and was wondering if someone could help me with an aspect treated in the book.


On page 24 he says a number $\varepsilon$ is said to be infinitely small or infinitesimal if $$-a< \varepsilon < a$$ for every positive real number $a$. He then says the only real number that is infinitesimal is zero.


I really don't get that. What I understand is that in order for a number to be considered infinitely small it has to be bigger than $-a$ and smaller than $a$. Well, if I take $a$ to be $2$, that means that $-1$ would be infinitesimal since it is bigger than $-2$ but smaller than $2$. So then how can zero be the only real number that satisfies that condition?


Answer



Your example of taking $a$ to be $2$ and concluding that $1$ is infinitesimal since it is between $-2$ and $2$ is not a good example.


The reason for this is that the definition of an infinitesimal $\varepsilon$ is that $-a \leq \varepsilon \leq a$ for every positive real number $a$. You just picked some positive real number. This has to be true for every positive real number. That means $\varepsilon$ needs to be in $[-2, 2]$ and in $[-1, 1]$ and in $[-\frac{1}{2}, \frac{1}{2}]$ and in $[-\frac{1}{1000000}, \frac{1}{1000000}]$, and so on. That same $\varepsilon$ has to be in all of these at the same time to be an infinitesimal.


The only real number that satisfies that it is between $-a$ and $a$ for every real $a > 0$ is $\varepsilon = 0$.


So any number $\varepsilon$ other than $0$ that satisfies $-a \leq \varepsilon \leq a$ for every $a > 0$ real cannot itself be a real number, but there are plenty of infinitesimals that aren't real numbers. As we discussed, $0$ is the only number that's both real and infinitesimal.



calculus - Evaluating $\int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2}dx$





How can we prove $$\int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2}\mathrm{d} x=\frac{2\pi}{3\sqrt 3}?$$




Thought 1
It cannot be solved by using contour integration directly. If we replace $-1/3$ with $-2/3$ or $1/3$ or something else, we can use contour integration directly to solve it.
Thought 2
I have tried substitution $x=t^3$ and $x=1-t$. None of them worked. But I noticed that the form of $1-x+x^2$ does not change while applying $x=1-t$.
Thought 3
Recalling the integral representation of the $_2F_1$ function, I was able to convert it into a formula involving $_2F_1\left(2/3,1;4/3; e^{\pi i/3}\right)$. But I think it will only make the integral more "complex". Moreover, I prefer an elementary approach. (But I also appreciate a hypergeometric approach.)


Answer



The solution heavily exploits symmetry of the integrand.



Let $$I = \int_0^1\frac{x^{2/3}(1-x)^{-1/3}}{1-x+x^2} dx $$
Replace $x$ by $1-x$ and sum up gives

$$\tag{1} 2I = \int_0^1 \frac{x^{2/3}(1-x)^{-1/3} + (1-x)^{2/3}x^{-1/3}}{1-x+x^2} dx = \int_0^1 \frac{x^{-1/3}(1-x)^{-1/3}}{1-x+x^2} dx$$






Let $\ln_1$ be the complex logarithm with branch cut on the positive real axis, and $\ln_2$ the one whose cut is on the negative real axis. Denote
$$f(z) = \frac{2}{3}\ln_1(z) - \frac{1}{3}\ln_2 (1-z)$$
Then $f(z)$ is discontinuous along the positive real axis, but has different jumps in $\arg$ across the intervals $[0,1]$ and $[1,\infty)$.



Now integrate $g(z) = e^{f(z)}/(1-z+z^2)$ using a keyhole contour. Let $\gamma_1$ be the path slightly above $[0,1]$ and $\gamma_4$ the one below; let $\gamma_2$ be the path slightly above $[1,\infty)$ and $\gamma_3$ the one below. It is easily checked that
$$\int_{\gamma_1} g(z) dz = I \qquad \qquad \int_{\gamma_4} g(z) dz = I e^{4\pi i/3}$$

$$\int_{\gamma_2} g(z) dz = e^{\pi i/3} \underbrace{\int_1^\infty \frac{x^{2/3}(x-1)^{-1/3}}{1-x+x^2} dx}_J\qquad \int_{\gamma_3} g(z) dz = e^{\pi i} J$$



If we perform $x\mapsto 1/x$ on $J$, we get $\int_0^1 x^{-1/3}(1-x)^{-1/3}/(1-x+x^2)dx$, thus $J = 2I$ by $(1)$.



Therefore $$I(1-e^{4\pi i/3}) + 2I(e^{\pi i / 3} - e^{\pi i}) = 2\pi i\times \text{Sum of residues of } g(z) \text{ at } e^{\pm \pi i /3}$$
From which I believe you can work out the value of $I$.


Thursday, December 26, 2019

reference request - List videos of interesting courses at the doctoral level.



Many mathematics departments have provided video lessons of the courses (usually one semester long) offered in their doctoral programs in mathematics. Most often these courses comprise an average of 26 videos, each containing a single class. Below is a list of introductory courses on video.





I would appreciate it if people could add on to this list.



Observation: To reach a greater audience, classes should preferably be in the English language and be introductory courses.




Observation: I will not be specific as to the area of investigation, since I believe that the answers here will be useful for students outside my area of interest. But I have a preference for introductory courses on Ergodic Theory, Probability, Stochastic Processes, Combinatorics and Statistical Mechanics.


Answer



There is a very much related thread on MathOverflow with a lot of links.


Integer+fraction vs Top-heavy fraction



What is the name of a fraction like this:




$$\frac{22}{7}$$



as opposed to one like this:



$$3\frac{1}{7}$$



I've never actually had to describe this until today. Not only have I no idea how to describe the top-heavy fraction or the one with a whole number, I also don't even know what the process of changing from the former to the latter is called. Is it merely "simplifying"...? I have a feeling there must be a more technical name, because it's math and there always is.



Caveat: This is a very simple question, so much so that I feel sure it must be a duplicate. I did try to search but I was unable to come up with a succinct expression to describe what I am looking for (as evidenced by the fairly lame title of this question) and so the search came up empty.



Answer



The former is generally called an "improper fraction," the latter a "mixed number."


Wednesday, December 25, 2019

trigonometry - Prove that $\sin\frac{\pi}{14}$ is a root of $8x^3 - 4x^2 - 4x + 1=0$



Prove that $\sin\frac{\pi}{14}$ is a root of $8x^3 - 4x^2 - 4x + 1=0$.



I have no clue how to proceed; I tried to prove that the whole expression becomes $0$ when $\sin\frac{\pi}{14}$ is substituted for $x$, but couldn't get any further. The symbols might look different but could be the same. If the equation is correct, please help me solve it; if it is wrong, then please modify it and solve it.


Answer



First, you have, for any integer $n>0$ and any real $a, b$ with $b \not \equiv 0 \pmod{2\pi}$,



$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$




I'll prove this formula a bit later, but it's a rather classic one.



With $n=7$, $a=\frac{\pi}{14}$ and $b=\frac{2\pi}{7}$, you get



$$\sin \frac{\pi}{14} + \sin \frac{5\pi}{14} + \sin \frac{9\pi}{14} + \sin \frac{13\pi}{14} + \sin \frac{17\pi}{14} + \sin \frac{21\pi}{14} + \sin \frac{25\pi}{14}=0$$



Now, using $\sin (\theta \pm \pi)= - \sin \theta$, notice that
$$\sin \frac{9\pi}{14} = \sin \frac{5\pi}{14}$$
$$\sin \frac{13\pi}{14} = \sin \frac{\pi}{14}$$

$$\sin \frac{17\pi}{14} = - \sin \frac{3\pi}{14}$$
$$\sin \frac{21\pi}{14} = - 1$$
$$\sin \frac{25\pi}{14} = - \sin \frac{3\pi}{14}$$



So, the equality becomes



$$\sin \frac{\pi}{14} - \sin \frac{3\pi}{14} + \sin \frac{5\pi}{14} = \frac{1}{2}$$



Using $\sin p - \sin q = 2 \sin \frac{p-q}{2} \cos \frac{p+q}{2}$, you get




$$\sin \frac{\pi}{14} + 2 \sin \frac{\pi}{14} \cos \frac{4\pi}{14} = \frac{1}{2}$$



Or



$$\sin \frac{\pi}{14} \left(1 + 2 \cos \frac{4\pi}{14}\right) = \frac{1}{2}$$



But



$$\cos \frac{4\pi}{14} = \sin \left(\frac{\pi}{2} - \frac{4\pi}{14} \right) = \sin \frac{3\pi}{14}$$




And we have also $\sin 3\theta = 3\sin \theta - 4 \sin^3 \theta$, so



$$\cos \frac{4\pi}{14} = 3\sin \frac{\pi}{14} - 4\sin^3 \frac{\pi}{14}$$



Let $x=\sin \frac{\pi}{14}$, we have then the equation



$$x (1+6x-8x^3) = \frac{1}{2}$$



Or




$$16 x^4-12x^2-2x+1=0$$



But $-\frac{1}{2}$ is a trivial root of $16 x^4-12x^2-2x+1$, so this polynomial is divisible by $2x+1$. And since obviously $\sin \frac{\pi}{14} \neq -\frac{1}{2}$, we can do polynomial division, and the equation becomes



$$8x^3-4x^2-4x+1=0$$



And we are done.



As a side comment, by changing slightly the proof, starting from $\sin \frac{\pi}{14} - \sin \frac{3\pi}{14} + \sin \frac{5\pi}{14} = \frac{1}{2}$, you would discover that the other two roots are $-\sin \frac{3\pi}{14}$ and $\sin \frac{5\pi}{14}$.







Now, the proof of the sum



$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$



There is an almost trivial proof using complex numbers, but here we are asked not to use them, so we will do this by induction.



First, the formula is true for $n=1$, for it amounts to $ \sin a = \sin a$.
Let's suppose it's true for $n$, we will compute




$$A=\frac{\sin \frac{(n+1)b}{2}}{\sin \frac{b}{2}} \sin \left( a+ n\frac{b}{2}\right) - \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$



Using $2\sin \theta \sin \phi = \cos (\theta - \phi) - \cos (\theta + \phi)$, we have



$$A=\frac{1}{2\sin \frac{b}{2}} \left[\cos \left(\frac{b}{2} - a \right) - \cos \left(a+ (2n+1)\frac{b}{2} \right) - \cos \left(\frac{b}{2} - a \right) + \cos \left(a+ (2n-1)\frac{b}{2} \right) \right]$$



$$A=\frac{1}{2\sin \frac{b}{2}} \left[ \cos \left(a+ (2n-1)\frac{b}{2} \right) - \cos \left(a+ (2n+1)\frac{b}{2} \right) \right]$$




And, since $\cos q - \cos p = 2\sin \frac{p+q}{2} \sin \frac{p-q}{2}$,



$$A=\frac{1}{\sin \frac{b}{2}} \left[ \sin (a+nb) \sin \frac{b}{2} \right] = \sin (a+nb)$$



Thus,



$$\sum_{k=0}^{n} \sin (a+kb) = \sum_{k=0}^{n-1} \sin (a+kb) + \sin(a+nb)$$
$$=\frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right) + A$$
$$=\frac{\sin \frac{(n+1)b}{2}}{\sin \frac{b}{2}} \sin \left( a+ n\frac{b}{2}\right)$$




And the induction step is proved, so the formula is true for all $n>0$.






The "complex numbers" proof of the sum runs like this:



$$S = \sum_{k=0}^{n-1} e^{i(a+kb)} = e^{ia} \sum_{k=0}^{n-1} z^k = e^{ia} \frac{z^n - 1}{z - 1}$$



with $z=e^{ib}$ (and $z\neq1$ because $b \not \equiv 0 \pmod{2\pi}$)




Hence



$$S = e^{ia} \frac{e^{inb} - 1}{e^{ib} - 1}$$



There is a well-known trick to simplify, write:



$$S = e^{ia} \frac{e^{inb/2}}{e^{ib/2}} \frac{e^{inb/2} - e^{-inb/2}}{e^{ib/2} - e^{-ib/2}} = e^{ia} \frac{e^{inb/2}}{e^{ib/2}} \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}}$$
$$S = e^{i(a + (n-1)b/2)} \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}}$$



Thus, taking real and imaginary parts, you get:




$$\sum_{k=0}^{n-1} \cos (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \cos \left( a+ (n-1)\frac{b}{2}\right)$$



$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$


calculus - Find the limit $L = \lim\limits_{x \to \infty} \left( x \ln x + 2 x \ln \sin \frac{1}{\sqrt{x}} \right)$.


Problem: Find the limit $ \displaystyle L = \lim_{x \to \infty} \left( x \ln x + 2 x \ln \sin \frac{1}{\sqrt{x}} \right) $.



Please suggest how to proceed in this problem. I will be grateful to you. Thanks.
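One possible start (a sketch, since the post has no posted answer): combine the logarithms and substitute $u=1/\sqrt{x}$, so that

$$x\ln x+2x\ln\sin\frac{1}{\sqrt{x}}=x\ln\left(x\sin^2\frac{1}{\sqrt x}\right)=\frac{2}{u^2}\ln\frac{\sin u}{u};$$

since $\frac{\sin u}{u}=1-\frac{u^2}{6}+O(u^4)$, this equals $\frac{2}{u^2}\left(-\frac{u^2}{6}+O(u^4)\right)\to-\frac13$ as $u\to 0^+$, suggesting $L=-\frac13$.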

number theory - Divisibility tests for $10n \pm 1$



I'm looking for a divisibility test for numbers of the form $10n \pm1$



I need the test to be a summation across digits like the one for $11$, that being, a number $\overline{d_n \ldots d_1 }$ is divisible by 11 if

$$ \sum_{i \; even} d_{i+1} - d_i \pmod{11} = 0 $$



I'm after a similarly styled test for all $10n \pm1$. Is there a nice way to generate them?



Thanks in advance for your help!


Answer



Example for the number $19$: the order of $10$ modulo $19$ is $18$, so you can divide the given number into $18$-digit blocks from behind. The first block will usually have fewer than $18$ digits. Then you can sum up the residues modulo $19$ of those blocks (each considered as a natural number), and the given number is divisible by $19$ if and only if the sum is divisible by $19$.



A bit easier to use is the following rule: divide the number into $9$-digit blocks from behind; the first block will usually have fewer than $9$ digits. Then begin with the residue modulo $19$ of the first block, then subtract the residue of the next, then add, and so on (this works because $10^9\equiv-1\pmod{19}$). The given number is divisible by $19$ if and only if the final result is divisible by $19$.




Example : $$5645012950185238747288629$$



Dividing gives $$5645012\ 950185238747288629$$



$5645012$ has residue $17$ and $950185238747288629$ has residue $2$; the sum is $19$. Hence the given number is divisible by $19$.



$$5645012\ 950185238\ 747288629$$



If we divide into $9$-digit blocks, we get the residues $17,7,9$, and $17-7+9=19$, which is divisible by $19$; hence the given number is divisible by $19$.
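A numeric double check, sketched in R; reducing the decimal string digit by digit keeps everything within ordinary double precision:

n_str  <- "5645012950185238747288629"
digits <- as.integer(strsplit(n_str, "")[[1]])
# Horner-style reduction: remainder of the decimal number modulo 19
Reduce(function(acc, d) (acc * 10 + d) %% 19, digits, 0)  # 0, i.e. divisible by 19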


Tuesday, December 24, 2019

Find the number of terms of an arithmetic progression.




I had an exam today, within the exam, this question was the hardest.



If we have an arithmetic progression whose number of terms is $even$, the total of its $even$(-positioned) terms is $30$, and the total of its $odd$(-positioned) terms is $24$.



the difference between the last term and the first one = $10.5$



(If anything is unclear, sorry for that; I tried to translate the question into English.)


Answer



Let $a,d,2m$ be the first term, the common difference, the number of terms respectively where $m\in\mathbb N$.




This answer supposes that "total of it's even terms $=30$" means that
$$(a+d)+(a+3d)+\cdots +(a+(2m-1)d)=\sum_{i=1}^{m}(a+(2i-1)d)=30,$$
i.e.
$$am+2d\cdot\frac{m(m+1)}{2}-dm=30\tag1$$



Also, this answer supposes that "total of it's odd terms $=24$" means that
$$a+(a+2d)+\cdots +(a+(2m-2)d)=\sum_{i=1}^{m}(a+(2i-2)d)=24,$$
i.e.
$$am+2d\cdot\frac{m(m+1)}{2}-2dm=24\tag2$$




And we have
$$|a+(2m-1)d-a|=10.5\tag3$$



Now solve $(1)(2)(3)$ to get $a,d,2m$.




From $(1)-(2)$, we have $d=\frac 6m$. From $(3)$, we have $(2m-1)|\frac 6m|=10.5\Rightarrow m=4$. Finally, from $(1)$, we have $d=a=\frac 32$.
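As a check (a small supplement): with $a=d=\frac32$ and $2m=8$ terms, the progression is $1.5, 3, 4.5, 6, 7.5, 9, 10.5, 12$; the even-positioned terms give $3+6+9+12=30$, the odd-positioned ones $1.5+4.5+7.5+10.5=24$, and $12-1.5=10.5$.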



elementary number theory - Linear diophantine equation $100x - 23y = -19$


I need help with this equation: $$100x - 23y = -19.$$ When I plug this into Wolfram|Alpha, one of the integer solutions is $x = 23n + 12$ where $n$ ranges over the integers, but I can't seem to figure out how they got to that answer.


Answer



$100x -23y = -19$ if and only if $23y = 100x+19$, if and only if $100x+19$ is divisible by $23$. Using modular arithmetic, you have $$\begin{align*} 100x + 19\equiv 0\pmod{23}&\Longleftrightarrow 100x\equiv -19\pmod{23}\\ &\Longleftrightarrow 8x \equiv 4\pmod{23}\\ &\Longleftrightarrow 2x\equiv 1\pmod{23}\\ &\Longleftrightarrow x\equiv 12\pmod{23}. \end{align*}$$ so $x=12+23n$ for some integer $n$.
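As a quick verification (a supplement): $x=12$ gives $100\cdot 12+19=1219=23\cdot 53$, so $(x,y)=(12,53)$ solves $100x-23y=-19$, and the general solution shifts $x$ by multiples of $23$.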


complex analysis - Finding the Fourier Series of $\sin(x)^2\cos(x)^3$



I'm currently struggling at calculation the Fourier series of the given function



$$\sin(x)^2 \cos(x)^3$$




Given Euler's identity, I thought that using the exponential approach would be the easiest way to do it.



What I found was: $$\frac{-1}{32}((\exp(2ix)-2\exp(2ix)+\exp(-2ix))(\exp(3ix)+3\exp(ix)+3\exp(-ix)+\exp(-3ix)))$$



Transforming it back, the result is:



$$ -\frac{1}{18}(\cos(5x)+\cos(3x)+2\cos(x))$$



(I've checked my calculations multiple times, I'm pretty sure it's correct.)




Considering the point $x = 0$ however, one can see that the series I found doesn't match the original function.



Could someone help me find my mistake?


Answer



1) Trigonometric identities:
$$
\sin^2 x\cos^3x=(\sin x\cos x)^2\cos x=\left(\frac{\sin 2x}{2}\right)^2\cos x=\frac{1}{4}\sin^22x\cos x
$$
$$
=\frac{1}{4}\left(\frac{1-\cos 4x}{2}\right)\cos x=\frac{\cos x}{8}-\frac{\cos 4x\cos x}{8}
$$
$$
=\frac{\cos x}{8}-\frac{\cos 5x+\cos 3x}{16}
$$
$$
=\frac{\cos x}{8}-\frac{\cos 3x}{16}-\frac{\cos 5x}{16}
$$
2) Complex exponential:
$$
\sin^2x\cos^3x=\left(\frac{e^{ix}-e^{-ix}}{2i}\right)^2\left(\frac{e^{ix}+e^{-ix}}{2}\right)^3
$$
$$
=-\frac{1}{32}(e^{2ix}-2+e^{-2ix})(e^{3ix}+3e^{ix}+3e^{-ix}+e^{-3ix})
$$
$$
=-\frac{1}{32}(e^{5ix}+e^{3ix}-2e^{ix}-2e^{-ix}+e^{-3ix}+e^{-5ix})
$$
$$
=-\frac{1}{32}(2\cos 5x+2\cos 3x-4\cos x)
$$
$$
=\frac{1}{16}(2\cos x-\cos 3x-\cos 5x)
$$



Note: you made a mistake when you expanded $(e^{ix}-e^{-ix})^2$. I have no idea how you ended up with this $18$. You probably meant $16$.
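A numeric sanity check of the corrected expansion, sketched in R:

x <- seq(0, 2 * pi, length.out = 101)
# largest deviation between the two sides of the expansion; should be ~ 1e-16
max(abs(sin(x)^2 * cos(x)^3 - (2 * cos(x) - cos(3 * x) - cos(5 * x)) / 16))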


discrete mathematics - divisibility proof




For all integers $a$, there exists an integer $b$ such that $3$ divides $a+b$ and $3$ divides $2a+b$.



I think it is false, and the negation will be: there exists an integer $a$ such that for all integers $b$, $3$ does not divide $a+b$ or $3$ does not divide $2a+b$.
How can I choose $a$ to prove the negation?


Answer



Let $a=1$, and suppose $3$ divides $1+b$ and $3$ divides $2+b$.



Then $3$ must divide $2+b-(1+b)=1$ which is a contradiction.


calculus - Evaluating a complicated triple integral



I was working on a problem of trying to determine synchronization of neurons and wanted to compare an analytical result to some numerical results I achieved. The analytical result depends upon evaluating this triple integral:



$$\int_0^{2\pi}\int_0^{2\pi}\int_0^{2\pi} \sqrt{3+2\cos(\theta_1-\theta_2)+2\cos(\theta_2-\theta_3)+2\cos(\theta_3-\theta_1)} d\theta_1 d\theta_2d\theta_3$$



The triple integral comes from trying to evaluate $E\left[\sqrt{|e^{i\theta_1}+e^{i\theta_2}+e^{i\theta_3}|^2} \right]$ where the $\theta$'s are i.i.d uniform random variables on the unit circle.



Wolfram Alpha wasn't able to produce a value, so I question whether it's even possible to evaluate, whether exactly or as a decimal approximation.



Answer



It shouldn't be too hard to approximate this numerically. I did a quick Monte Carlo integration in R, taking $n = 10^8$ samples:



set.seed(42)
n = 10^8  # number of Monte Carlo samples
theta1 = runif(n, 0, 2*pi)
theta2 = runif(n, 0, 2*pi)
theta3 = runif(n, 0, 2*pi)
# the integrand evaluated at the uniformly sampled angles
integrand = sqrt(3+2*cos(theta1-theta2)+2*cos(theta2-theta3)+2*cos(theta3-theta1))
mu = mean(integrand)   # Monte Carlo estimate of the mean of the integrand
sigma = sd(integrand)  # sample standard deviation, used for the standard error


gives that the estimated mean of the integrand is $\mu = 1.574591$ and the standard deviation of the integrand is $\sigma = 0.7215917$. This gives a standard error for that estimate of $\sigma/\sqrt{n} = 0.0000722$.



Thus we get an estimated value for the integrand of
$$(8\pi^3) \left( \mu \pm {\sigma \over \sqrt{n}} \right)$$
since the region of integration has volume $8\pi^3$. This is $390.578 \pm 0.018$.



As an analytical note, your integrand is unchanged when $\theta_1, \theta_2, \theta_3$ are rotated by the same angle around the unit circle. So you may as well set $\theta_3 = 0$. Let $\theta_3 = 0$ and you get




$$ \int_0^{2\pi} \int_0^{2\pi} \int_0^{2\pi} \sqrt{3 + 2 \cos (\theta_1 - \theta_2) + 2 \cos \theta_2 + 2 \cos \theta_1} \: d\theta_3 d\theta_2 d\theta_1$$



and since $\theta_3$ isn't in the integrand and you're working over a rectangular region, this is



$$ 2\pi \int_0^{2\pi} \int_0^{2\pi} \sqrt{3 + 2 \cos (\theta_1 - \theta_2) + 2 \cos \theta_2 + 2 \cos \theta_1} \: d\theta_2 d\theta_1$$



This probably isn't easier to do analytically but I'd think a double integral would be easier than a triple integral numerically.


calculus - Finding the limit of an integral 4



we were asked to complete this question as an exercise in my lecture.




$$
\lim_{x\to\infty} \left( \frac{1}{x^2} \int_{t=1}^x \frac{t^2}{t + |\sin t|} \, dt \right)
$$



(original image at https://i.stack.imgur.com/fvjsE.png)



Just wondering whether you could give some hints as to where to start?



I was thinking that it had something to do with the fundamental theorem of calculus. And then maybe using l'hopitals rule to determine the limit but I'm not too sure as to when to apply them and where. We haven't done anything like this in class.




Any help would be greatly appreciated.



Thanks in advance.


Answer



L'Hôpital's Rule gives the limit easily. The answer is $\frac 1 2$.



Details: let $f(x)=\int_1^{x} \frac {t^{2}} {t+|\sin\, t|}dt$. Then $\lim \frac {f(x)} {x^{2}}=\lim (\frac {x^{2}} {x+|\sin\, x|}) /{2x}=\lim \frac x {2(x+|\sin\, x|)} =\lim \frac 1 {2(1+|\sin\, x|/x)}=\frac 1 2$ because $\frac {|\sin\, x|} x \to 0$ as $ x \to \infty$.


Monday, December 23, 2019

limits - Show that $\frac{3^{n^2}}{n^8}$ grows faster than $2^{n\log(n)\log(\log(n))}$



Basically, I want to show the limit of the second function over the first function as $n$ approaches infinity is $0$, but I'm not sure how to get past the complicated nature of the second function. I would try to show that $3^{n^2}$ grows faster than $2^{n^2}$, which is faster than the second function, but the limit of those two functions is $\frac{2}{3}$ I think.



Also, the logarithms are base $2$, if that changes anything.


Answer



Well, you have that

\begin{align}
\frac{3^{n^2}}{n^8} = \exp\left( n^2\log 3\right)\cdot n^{-8} = \exp(n^2\log 3)\cdot \exp(-8\log n) = \exp(\log(3) n^2-8\log n)
\end{align}
versus
\begin{align}
\exp(\log(2)n\log(n) \log\log(n) ).
\end{align}
Since $n^2$ grows faster than $n\log n \log\log n$, then we see that $\frac{3^{n^2}}{n^8}$ grows faster than $2^{n\log n \log\log n}$.


Uniqueness of convergence in measure



Let $(X,\mathcal{M},\mu)$ be a measure space. If $\mu(X)<\infty$, then the metric $$\rho(f,g)=\int \frac{|f-g|}{1+|f-g|}$$ forms a metric on the space of measurable functions $f,g:X\rightarrow \mathbb{C}$ when we equivalence out a.e. equal function. Then we can show that $f_n \rightarrow f$ in $\rho$ if and only if $f_n \rightarrow f$ in measure.



Let $f_n \rightarrow f$ in measure over the space of measurable functions with a.e. equal functions equivalenced. Can I show that $f$ is unique even when $\mu(X)=\infty$? (By uniqueness I mean: it cannot happen that $f_n\rightarrow f$ and $f_n\rightarrow g$ in measure while $f = g$ a.e. fails.)


Answer




Let $(X, \mathcal M, \mu)$ be a measure space, $f_n \colon X \to \mathbb C$ for $n \in \mathbb N$, $f,g \colon X \to \mathbb C$ such that $f_n \to f$ and $f_n \to g$ in measure. We will show that $f = g$ $\mu$-a. e.



$\def\abs#1{\left|#1\right|}$Let $k\in \mathbb N$ and $\epsilon > 0$, choose $n \in \mathbb N$ such that
$$ \mu\left(\abs{f_n - f} > \frac 1{2k}\right) + \mu\left(\abs{f_n - g} > \frac 1{2k}\right) < \epsilon $$
As
$$ \{\abs{f-g}> k^{-1}\} \subseteq \{\abs{f_n - f} > (2k)^{-1}\} \cup \{\abs{f_n -g} > (2k)^{-1}\} $$
by the triangle inequality, we get
\begin{align*}
\mu(\abs{f-g} > k^{-1}) &\le \mu\left(\abs{f_n - f} > \frac 1{2k}\right) + \mu\left(\abs{f_n - g} > \frac 1{2k}\right)\\
&\le \epsilon
\end{align*}
As $\epsilon$ was arbitrary, $\mu(\abs{f-g} > k^{-1}) = 0$. Therefore
$$
\mu(\abs{f-g} > 0) = \mu \left(\bigcup_k \{\abs{f-g} > k^{-1}\}\right)
\le \sum_k \mu(\abs{f-g} > k^{-1}) = 0.
$$
That is, $f = g$ $\mu$-a. e.


Nowhere continuous real valued function which has an antiderivative

My question:




Is there a function $f: \mathbb{R} \rightarrow \mathbb{R}$ that is nowhere continuous on its domain but has an antiderivative?



If there is no such function, is it correct to conclude that, to have an antiderivative, $f$ must be continuous at at least one point of its domain?



Any comments/ inputs are highly appreciated. Thanks in advance.
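One relevant classical fact (offered as a hint, since the post has no posted answer): if $F'=f$ everywhere, then $f$ is a pointwise limit of continuous functions,

$$f(x)=\lim_{n\to\infty} n\left(F\left(x+\tfrac1n\right)-F(x)\right),$$

i.e. $f$ is of Baire class 1, and by Baire's theorem such a function is continuous on a dense set of points. So no nowhere continuous function can have an antiderivative.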

Sunday, December 22, 2019

Divisibility involving root of unity

Let $p$ be a prime number and $\omega$ be a $p$-th root of unity. Suppose $a_0,a_1, \dots, a_{p-1}, b_0, b_1, \dots, b_{p-1}$ be integers such that $a_0 \omega^0+a_1 \omega^1+ \dots a_{p-1} \omega^{p-1}$ and $b_0 \omega^0 + b_1 \omega^1 + \dots b_{p-1} \omega^{p-1}$ are also integers


Prove that $(a_0 \omega^0+a_1 \omega^1+ \dots a_{p-1} \omega^{p-1})-(b_0 \omega^0 + b_1 \omega^1 + \dots b_{p-1} \omega^{p-1})$ is divisible by $p$ if and only if $p$ divides all of $a_0-b_0$, $a_1-b_1$, $\dots$, $a_{p-1}-b_{p-1}$

linear algebra - Assuming $AB=I$ prove $BA=I$











Most introductory linear algebra texts define the inverse of a square matrix $A$ as such:



Inverse of $A$, if it exists, is a matrix $B$ such that $AB=BA=I$.



That definition, in my opinion, is problematic. A few books (in my sample less than 20%) give a different definition:



Inverse of $A$, if it exists, is a matrix $B$ such that $AB=I$. Then they go and prove that $BA=I$.




Do you know of a proof other than defining inverse through determinants or through using rref?



Is there a general setting in algebra under which $ab=e$ leads to $ba=e$ where $e$ is the identity?


Answer



Multiply both sides of $AB-I=0$ on the left by $B$ to get
$$
(BA-I)B=0\tag{1}
$$
Let $\{e_j\}$ be the standard basis for $\mathbb{R}^n$. Note that $\{Be_j\}$ are linearly independent: suppose that
$$
\sum_{j=1}^n a_jBe_j=0\tag{2}
$$
then, multiplying $(2)$ on the left by $A$ gives
$$
\sum_{j=1}^n a_je_j=0\tag{3}
$$
which implies that $a_j=0$ since $\{e_j\}$ is a basis. Thus, $\{Be_j\}$ is also a basis for $\mathbb{R}^n$.



Multiplying $(1)$ on the right by $e_j$ yields
$$
(BA-I)Be_j=0\tag{4}
$$
for each basis vector $Be_j$. Therefore, $BA=I$.



Failure in an Infinite Dimension



Let $A$ and $B$ be operators on infinite sequences. $B$ shifts the sequence right by one, filling in the first element with $0$. $A$ shifts the sequence left, dropping the first element.



$AB=I$, but $BA$ sets the first element to $0$.




Arguments that assume $A^{-1}$ or $B^{-1}$ exist and make no reference to the finite dimensionality of the vector space usually fall to this counterexample.


Saturday, December 21, 2019

sequences and series - How to show that the limit $\lim_{n \to \infty}\sin \frac{n \theta}{2}\sin \frac{(n+1) \theta}{2}$ doesn't exist?




I've been trying to show that $\sum_n \sin n\theta$ diverges (for most thetas at least), and I've come up with this expression for the partial sum (up to a multiplicative constant), and now I want to show that its limit doesn't exist:



$$ \lim_{n \to \infty}\sin \frac{n \theta}{2}\sin \frac{(n+1) \theta}{2}$$



But I don't know how to proceed with this. Both terms are divergent, but that doesn't mean their product necessarily diverges (though in this case it sure seems so). Is there a straightforward way to show this limit doesn't exist?



EDIT: I want to clarify that while I did originally set out to show the divergence of a series, that's not the aim of this question, which is how to rigorously show a limit doesn't exist. I can show that the limit doesn't equal $0$, but I want to learn how to show that it can't equal any other number as well.


Answer



Note that $\sin\alpha\sin\beta=\frac12(\cos(\alpha-\beta)-\cos(\alpha+\beta))$, hence

$$\sin\frac{n\theta}2\sin\frac{(n+1)\theta}2=\frac12\left(\cos\frac\theta2-\cos\bigl((n+\tfrac12)\theta\bigr)\right) $$
so unless $\theta$ is special ...
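Completing the sketch, under the assumption $\theta\not\equiv 0\pmod{2\pi}$: suppose $c_n=\cos\left((n+\tfrac12)\theta\right)\to L$. From $c_{n+1}+c_{n-1}=2c_n\cos\theta$ we get $2L=2L\cos\theta$, so $L=0$. Then $\cos\left((2n+1)\theta\right)=2c_n^2-1\to-1$, hence $\sin\left((2n+1)\theta\right)\to 0$, and expanding $\cos\left((2n+3)\theta\right)=\cos\left((2n+1)\theta\right)\cos 2\theta-\sin\left((2n+1)\theta\right)\sin 2\theta$ forces $\cos 2\theta=1$, i.e. $\theta\in\pi\mathbb Z$. So for $\theta\notin\pi\mathbb Z$ the limit cannot exist.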


real analysis - Is there any geometric way to characterize $e$?



Let me explain it better: after this question, I've been looking for a way to place famous constants on the real line in a geometrical way -- just for fun. Placing $\sqrt2$ is really easy: constructing a $45^\circ$-$90^\circ$-$45^\circ$ triangle with unit legs gives an idea of what $\sqrt2$ is. Extending this to $\sqrt5$, $\sqrt{13}$, and other algebraic numbers is easy using trigonometry; however, it turned out to be difficult for some transcendental constants. Constructing $\pi$ is easy using circumferences; but I couldn't figure out how I should work with $e$. Looking at a figure of the area under the hyperbola $y=1/x$



made me realize that $e$ is the point $\omega$ such that $\displaystyle\int_1^{\omega}\frac{1}{x}dx = 1$. However, I don't have any other ideas. And I keep asking myself:



Is there any way to "see" $e$ geometrically? And more: is it true that one can build any real number geometrically? Any help will be appreciated. Thanks.


Answer



For a certain definition of "geometrically," the answer is that this is an open problem. You can construct $\pi$ geometrically in terms of the circumference of the unit circle. This is a certain integral of a "nice" function over a "nice" domain; formalizing this idea leads to the notion of a period in algebraic geometry. $\pi$, as well as any algebraic number, is a period.




It is an open problem whether $e$ is a period. According to Wikipedia, the answer is expected to be no.



In general, for a reasonable definition of "geometrically" you should only be able to construct computable numbers, of which there are countably many. Since the reals are uncountable, most real numbers cannot be constructed "geometrically."


Friday, December 20, 2019

summation - Why is $\sum_{j = 1}^{n} \dfrac{1}{n} = 1$


I encountered $\sum_{j = 1}^{n} \dfrac{1}{n} = 1$ in my textbook, but I do not understand why this summation equals $1$. My textbook provides no reasoning as to why this is the case.



My understanding is that, since there is nothing in $\dfrac{1}{n}$ that depends on $j$, we are just adding $\dfrac{1}{n}$ to itself $n$ times. However, I'm not sure how to interpret this and how it equals a value of $1$.


I apologise if there is already a question on this, but my searches have encountered nothing that addresses this specific summation. If I am mistaken, I would appreciate it if someone could please redirect me.


I would greatly appreciate it if people could please take the time to explain the reasoning behind this.


Answer



Note that $$ \sum_{j=1}^n \frac 1n = \overbrace{\frac 1n + \frac 1n + \cdots + \frac 1n}^{n\text{ times}} = n \cdot \frac 1n = 1 $$


calculus - Proving: $\frac{\mathrm{d}}{\mathrm{d}x}\int^{g(x)}_{h(x)}f(t)\,\mathrm{d}t = f(g(x))g'(x)-f(h(x))h'(x)$.


How to prove that :



$$ \frac{\textrm{d}}{\textrm{d}x}\int^{g(x)}_{h(x)}f(t)\textrm{d}t =f(g(x))g'(x)-f(h(x))h'(x). $$


Answer



Let us first assume that $f$ has a primitive, which we shall refer to as $F$. By the fundamental theorem of calculus, we have:


$$\int_{h(x)}^{g(x)}{f(t)\:dt}=F(g(x))-F(h(x))$$


By the chain rule, we have:


$$\frac{d}{dx}\left(f\circ g\right)=f'(g(x))g'(x)$$


As we know that $\frac{d}{dx}F(x)=f(x)$, we have:


$$\frac{d}{dx}\left(F(g(x))-F(h(x))\right)=F'(g(x))g'(x)-F'(h(x))h'(x)\\=f(g(x))g'(x)-f(h(x))h'(x)$$


Which means that:


$$\frac{d}{dx}\int_{h(x)}^{g(x)}f(t)\:dt=f(g(x))g'(x)-f(h(x))h'(x)$$



Q.E.D.


complex analysis - Solution for the Laplace Transform $\mathscr{L}\left\{\frac{t^{\alpha-1}}{t-\mu}\right\}(\beta)$




I have been looking for an explicit solution to the following Laplace transform for $\alpha,\mu,\beta>0$
\begin{equation}
\frac{\beta^\alpha}{\Gamma(\alpha)}\mathscr{L}\left\{\frac{t^{\alpha-1}}{t-\mu}\right\}(\beta)
=\frac{\beta^\alpha}{\Gamma(\alpha)}\operatorname{PV\int_0^\infty}\frac{t^{\alpha-1}}{t-\mu}e^{-\beta t}\,\mathrm dt.
\end{equation}
Notice that this is equivalent to $E((X-\mu)^{-1})$ for $X\sim\text{Gamma}(\alpha,\beta)$. I attacked this problem by substituting $x=t-\mu$ and then writing
\begin{equation}
\tag{1}
\lim_{\epsilon\to0^+}\frac{\beta^\alpha}{\Gamma(\alpha)}e^{-\beta\mu}\left(\int_{-\mu}^0+\int_0^\infty\right) x^{\epsilon-1}(x+\mu)^{\alpha-1}e^{-\beta x}\,\mathrm dx.
\end{equation}
After evaluating these integrals, I took the limit and then dropped the residue term $-f_X(0)\pi\mathrm i$ resulting from the pole when $\epsilon=0$. This took me about six pages to do and involves some nasty derivatives of hypergeometric functions w.r.t. parameters, Meijer $G$ functions, residue expansions, etc. The solution I got was
\begin{equation}
\frac{\beta^\alpha}{\Gamma(\alpha)}\mathscr{L}\left\{\frac{t^{\alpha-1}}{t-\mu}\right\}(\beta)
=\beta e^{-\beta\mu}\,\Re\left\{E_\alpha(-\beta\mu)\right\},
\end{equation}
where $E_\nu(z)$ is the generalized exponential integral. I have an explicit solution for the real part, which must be separated into cases for integer and non-integer $\alpha$. Could this solution have been easily reached with contour integrals? After ending up at such a simple solution, I feel like I solved this the hard way and that there is a much easier way to do it.


Answer



This may be somewhat simpler. Let $0 < \alpha < 1, \operatorname{Im} \mu \neq 0, 0 < \beta$ (so it may be a bit misleading to refer to the integral as the Laplace transform). We have

$$\int_0^\infty \frac d {d\beta} \left(
e^{\mu \beta} \frac {t^{\alpha - 1}} {t - \mu} e^{-\beta t} \right) dt =
-\Gamma(\alpha) \beta^{-\alpha} e^{\mu \beta}, \\
\int_0^\infty \frac {t^{\alpha-1}} {t-\mu} e^{-\beta t} dt =
\underbrace {\Gamma(\alpha) (-\mu)^{\alpha-1} e^{-\mu \beta}
\Gamma(1-\alpha, -\mu \beta)}_{=I(\alpha, \mu, \beta)} +
C(\alpha, \mu).$$
The integral and $I(\alpha, \mu, \beta)$ are continuous at $\beta=0$, and
$$\int_0^\infty \frac {t^{\alpha-1}} {t-\mu} dt =
\pi (-\mu)^{\alpha-1} \csc \pi \alpha =

I(\alpha, \mu, 0), $$
from which we can conclude that $C(\alpha, \mu) = 0$.



For positive $\mu$, the p.v. integral will be equal to $I(\alpha, \mu - i0, \beta)$ plus $i \pi$ times the residue of the integrand at $t=\mu$. $\Gamma(a, z)$ is continuous from above at negative $z$, thus $I(\alpha, \mu, \beta)$ is continuous from below: $I(\alpha, \mu - i0, \beta) = I(\alpha, \mu, \beta)$, therefore
$$\operatorname{v.\!p.} \int_0^\infty
\frac {t^{\alpha-1}} {t-\mu} e^{-\beta t} dt =
I(\alpha, \mu, \beta) + i \pi \mu^{\alpha-1} e^{-\mu \beta}.$$
The result extends to $1 \leq \alpha$ by analytic continuation.


elementary set theory - Proving cardinality of the reals and the cross product of the reals

I am trying to prove that $\Bbb {R} \times \Bbb {R} \sim \Bbb {R}$ using the Cantor-Bernstein Theorem. So then that would mean that I need to prove that $|\Bbb {R}| \leq |\Bbb {R} \times \Bbb {R}|$ and $|\Bbb {R} \times \Bbb {R}| \leq |\Bbb {R}|$.




This is my work so far:



For the first part ($|\Bbb {R}| \leq |\Bbb {R} \times \Bbb {R}|$) I let $f:\Bbb {R} \rightarrow \Bbb {R} \times \Bbb {R}$ be defined as $f(r)=(r,r)$ for all $r \in \Bbb {R}$, which is the diagonal map. Now take $r, s \in \Bbb R$ with $r \neq s$. Since $(r,r) \neq (s,s)$, we have $f(r) \neq f(s)$. Hence $f$ is injective and therefore $|\Bbb {R}| \leq |\Bbb {R} \times \Bbb {R}|$.



Is this correct so far? Am I on the right track?



I'm confused about how to go about proving the second part, that $|\Bbb {R} \times \Bbb {R}| \leq |\Bbb {R}|$. I know I need to show that there is an injection from $\Bbb {R} \times \Bbb {R} \rightarrow \Bbb {R}$ but I can't figure out how to do it. Also how would I define this function for the second part?
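One standard construction (offered as a hint, since the post has no posted answer): interleave decimal expansions. For $x,y\in(0,1)$, written with expansions that do not end in all $9$s, map

$$(0.x_1x_2x_3\ldots,\;0.y_1y_2y_3\ldots)\longmapsto 0.x_1y_1x_2y_2x_3y_3\ldots$$

This is an injection $(0,1)\times(0,1)\to(0,1)$, and since $|\Bbb R|=|(0,1)|$ (e.g. via $t\mapsto\tan(\pi t-\frac{\pi}{2})$), Cantor-Bernstein finishes the job.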

Thursday, December 19, 2019

calculus - Proof that two objects on earth form a tangent line with the earth



So I have this problem where there is a building on Earth at point $a$, and a robot with a camera on its head starts walking away from the building; I have to calculate at what distance from $a$ the robot stops seeing the top of the building. Now I have done my calculations and found the correct answer.




My problem lies in proving that the straight line of sight between the building and the robot forms a tangent to the Earth. At the start of my work, the line between the robot and the building was assumed to be tangent to our planet, which was correct as I did get the right answer. Now I have to prove that assumption, but I don't know how.



It's obvious that it would create a tangent, as the farther away the robot walks from the structure, the closer its line of sight comes to the Earth's curvature. And at a certain point, at its maximum distance, it creates a tangent; by taking one more step back, it loses sight of the building because the curvature of the planet gets in the way.



Hopefully that all made sense, haha. Please let me know if it didn't.



I was thinking about saying something with upper and lower bounds, but I feel like there's a much simpler way to explain it that I'm just not seeing. I'm not looking for a solution, just a clue or hint to point me in the right direction, if possible.


Answer



Right up until the point at which the building disappears from view, the line of sight has no intersections with the (assumed) spherical Earth. After that, the line of sight goes through the Earth, so has two points of intersection with the surface. At the point that the building vanishes, the line of sight intersects the surface at exactly one point. This by definition makes that sight line tangent to the surface.


calculus - How is the modern definition of derivatives different from that of Leibniz' "infinitesimal"-based system?

I pose this question knowing full well that a somewhat similar question was asked in the popular question Is $\frac{dy}{dx}$ not a ratio?



However, as great as the answers are, the answers stop short of explaining a particular culprit I run into every time I try to understand the difference between the modern definition, and that of Leibniz. As such, at least for my own sake and possibly that of others, I feel the need to ask this question.



(figure: the usual secant-line picture accompanying the limit definition of the derivative)




The problem I'm having is that when I look at the page on derivatives in my calculus book, I see the all-familiar drawing detailing how to think about the limit definition of derivatives, as pictured above. This is supposedly different from Leibniz' idea of the ratio of two infinitesimal quantities - but I don't understand how.



In both cases, we have a Δy being divided by a Δx. In Leibniz' vision, Δy becomes dy and Δx becomes dx, two "infinitesimally small quantities", smaller than anything imaginable but still greater than zero. In the modern limit definition, Δy becomes dy, an unimaginably small quantity, and Δx becomes dx, again an unimaginably small quantity. That, to my untrained perspective, looks identical to Leibniz' idea of derivatives, except the concept of dy and dx being "infinitely small" is now embodied in the limit.



How is this modern definition any different from Leibniz' definition that relies on the "ghosts of departed quantities"?



Edit: I should emphasize that I'm not looking for answers through the lens of non-standard analysis. This is a regular, standard calculus question.

calculus - Possibility to simplify $\sum\limits_{k = -\infty}^\infty \frac{(-1)^k}{a + k} = \frac{\pi}{\sin \pi a}$



Is there any way to show that



$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}} = \frac{1}{a} + \sum\limits_{k = 1}^\infty {{{\left( { - 1} \right)}^k}\left( {\frac{1}{{a - k}} + \frac{1}{{a + k}}} \right)}=\frac{\pi }{{\sin \pi a}}} $$




Where $0 < a = \dfrac{n+1}{m} < 1$



The infinite series is equal to



$$\int\limits_{ - \infty }^\infty {\frac{{{e^{at}}}}{{{e^t} + 1}}dt} $$



To get to the result, I split the integral at $t=0$ and use the convergent series on $(0,\infty)$ and $(-\infty,0)$ respectively:



$$\frac{1}{{1 + {e^t}}} = \sum\limits_{k = 0}^\infty {{{\left( { - 1} \right)}^k}{e^{ - \left( {k + 1} \right)t}}} $$




$$\frac{1}{{1 + {e^t}}} = \sum\limits_{k = 0}^\infty {{{\left( { - 1} \right)}^k}{e^{kt}}} $$



Since $0 < a < 1$



$$\eqalign{
& \mathop {\lim }\limits_{t \to 0} \frac{{{e^{\left( {k + a} \right)t}}}}{{k + a}} - \mathop {\lim }\limits_{t \to - \infty } \frac{{{e^{\left( {k + a} \right)t}}}}{{k + a}} = \frac{1}{{k + a}} \cr
& \mathop {\lim }\limits_{t \to \infty } \frac{{{e^{\left( {a - k - 1} \right)t}}}}{{k + a}} - \mathop {\lim }\limits_{t \to 0} \frac{{{e^{\left( {a - k - 1} \right)t}}}}{{k + a}} = - \frac{1}{{a - \left( {k + 1} \right)}} \cr} $$



A change in the indices will give the desired series.




Although I don't mind direct solutions from tables and other sources, I prefer an elaborated answer.






Here's the solution in terms of $\psi(x)$. By separating even and odd indices we can get



$$\eqalign{
& \sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \sum\limits_{k = 0}^\infty {\frac{1}{{a + 2k}}} - \sum\limits_{k = 0}^\infty {\frac{1}{{a + 2k + 1}}} \cr
& \sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a - k}}} = \sum\limits_{k = 0}^\infty {\frac{1}{{a - 2k}}} - \sum\limits_{k = 0}^\infty {\frac{1}{{a - 2k - 1}}} \cr} $$




which gives



$$\sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \frac{1}{2}\psi \left( {\frac{{a + 1}}{2}} \right) - \frac{1}{2}\psi \left( {\frac{a}{2}} \right)$$



$$\sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a - k}}} = \frac{1}{2}\psi \left( {1 - \frac{a}{2}} \right) - \frac{1}{2}\psi \left( {1 - \frac{{a + 1}}{2}} \right) + \frac{1}{a}$$



Then



$$\eqalign{
& \sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} + \sum\limits_{k = 0}^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a - k}}} - \frac{1}{a} = \cr
& = \left\{ {\frac{1}{2}\psi \left( {1 - \frac{a}{2}} \right) - \frac{1}{2}\psi \left( {\frac{a}{2}} \right)} \right\} - \left\{ {\frac{1}{2}\psi \left( {1 - \frac{{a + 1}}{2}} \right) - \frac{1}{2}\psi \left( {\frac{{a + 1}}{2}} \right)} \right\} \cr} $$



But using the reflection formula one has



$$\eqalign{
& \frac{1}{2}\psi \left( {1 - \frac{a}{2}} \right) - \frac{1}{2}\psi \left( {\frac{a}{2}} \right) = \frac{\pi }{2}\cot \frac{{\pi a}}{2} \cr
& \frac{1}{2}\psi \left( {1 - \frac{{a + 1}}{2}} \right) - \frac{1}{2}\psi \left( {\frac{{a + 1}}{2}} \right) = \frac{\pi }{2}\cot \frac{{\pi \left( {a + 1} \right)}}{2} = - \frac{\pi }{2}\tan \frac{{\pi a}}{2} \cr} $$



So the series become




$$\eqalign{
& \sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \frac{\pi }{2}\left\{ {\cot \frac{{\pi a}}{2} + \tan \frac{{\pi a}}{2}} \right\} \cr
& \sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} = \pi \csc \pi a \cr} $$



The last being an application of a trigonometric identity.


Answer



EDIT: The classical demonstration of this is obtained by expanding in Fourier series the function $\cos(zx)$ with $x\in(-\pi,\pi)$.



Let's detail Smirnov's proof (in "Course of Higher Mathematics 2 VI.1 Fourier series") :




$\cos(zx)$ is an even function of $x$ so that the $\sin(kx)$ terms disappear and the Fourier expansion is given by :
$$\cos(zx)=\frac{a_0}2+\sum_{k=1}^{\infty} a_k\cdot \cos(kx),\ \text{with}\ \ a_k=\frac2{\pi} \int_0^{\pi} \cos(zx)\cos(kx) dx$$



Integration is easy and $a_0=\frac2{\pi}\int_0^{\pi} \cos(zx) dx= \frac{2\sin(\pi z)}{\pi z}$ while
$a_k= \frac2{\pi}\int_0^{\pi} \cos(zx) \cos(kx) dx=\frac1{\pi}\left[\frac{\sin((z+k)x)}{z+k}+\frac{\sin((z-k)x)}{z-k}\right]_0^{\pi}=(-1)^k\frac{2z\sin(\pi z)}{\pi(z^2-k^2)}$
so that for $-\pi \le x \le \pi$ :



$$
\cos(zx)=\frac{2z\sin(\pi z)}{\pi}\left[\frac1{2z^2}+\frac{\cos(1x)}{1^2-z^2}-\frac{\cos(2x)}{2^2-z^2}+\frac{\cos(3x)}{3^2-z^2}-\cdots\right]
$$



Setting $x=0$ returns your equality :

$$
\frac1{\sin(\pi z)}=\frac{2z}{\pi}\left[\frac1{2z^2}-\sum_{k=1}^{\infty}\frac{(-1)^k}{k^2-z^2}\right]
$$



while $x=\pi$ returns the $\mathrm{cotg}$ formula :



$$
\cot(\pi z)=\frac1{\pi}\left[\frac1{z}-\sum_{k=1}^{\infty}\frac{2z}{k^2-z^2}\right]
$$
(Euler used this one to find closed forms of $\zeta(2n)$)




The $\cot\ $ formula is linked to $\Psi$ via the Reflection formula :
$$\Psi(1-x)-\Psi(x)=\pi\cot(\pi x)$$



The $\sin$ formula is linked to $\Gamma$ via Euler's reflection formula :
$$\Gamma(1-x)\cdot\Gamma(x)=\frac{\pi}{\sin(\pi x)}$$
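As a numeric sanity check of the identity (my addition, not part of the original answer), one can sum the series symmetrically; pairing the $+k$ and $-k$ terms makes the tail decay like $1/k^2$:

import math

def alternating_sum(a, N=100000):
    # 1/a + sum_{k=1}^{N} (-1)^k * (1/(a - k) + 1/(a + k))
    s = 1.0 / a
    for k in range(1, N + 1):
        s += (-1) ** k * (1.0 / (a - k) + 1.0 / (a + k))
    return s

a = 1.0 / 3.0
print(alternating_sum(a))               # ~3.6275987...
print(math.pi / math.sin(math.pi * a))  # pi/sin(pi/3) ~ 3.6275987...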


calculus - How does one prove $\int_0^\infty \prod_{k=1}^\infty \operatorname{sinc}\left( \frac{t}{2^{k+1}} \right) \mathrm{d} t = 2 \pi$




Looking into the distribution of a Fabius random variable:
$$
X := \sum_{k=1}^\infty 2^{-k} u_k
$$
where $u_k$ are i.i.d. uniform variables on a unit interval, I encountered the following expression for its probability density:
$$
f_X(x) = \frac{1}{\pi} \int_0^\infty \left( \prod_{k=1}^\infty \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \right) \cos \left( t \left( x- \frac{1}{2} \right) \right) \mathrm{d} t
$$
It seems, numerically, that $f_X\left(\frac{1}{2} \right) = 2$, but my several attempts to prove this were not successful.




Any ideas how to approach this are much appreciated.


Answer



From Theorem 1 (equation (19) on page 5) of Surprising Sinc Sums and Integrals, we have
$$\frac{1}{\pi} \int_0^\infty \left( \prod_{k=1}^N \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \right) \mathrm{d} t=2$$
for all $N<\infty$. I suppose you can justify letting
$N\to \infty$ to get your result.







One of the surprises in that paper concerns a similar integral
$$ \int_0^\infty \left( \prod_{k=0}^N \operatorname{sinc}\left( \frac{t}{2k+1} \right) \right) \mathrm{d} t.$$ This turns out to be equal to $\pi/2$ when $0\leq N\leq 6$, but is slightly less than $\pi/2$ when $N=7$.


Wednesday, December 18, 2019

Express $k$ as the sum of three square numbers


How do you express $k$ as the sum of three square numbers, if

$$m^2 + 3 = 2k$$
where both $m$ and $k$ are integers (both positive or negative if possible).




It is known that $m$ must be an odd integer (positive or negative) for $k$ to be an integer, and every odd integer can be expressed as $2n + 1$, where $n$ is an integer. Thus, how do you express $k$ as the sum of three square numbers?
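One way to complete the substitution suggested above (my sketch, not part of the original question): writing $m = 2n+1$ gives

$$2k = m^2 + 3 = 4n^2 + 4n + 4, \qquad \text{so} \qquad k = 2n^2 + 2n + 2 = n^2 + (n+1)^2 + 1^2,$$

which exhibits $k$ as a sum of three squares for every integer $n$.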

sequences and series - How to derive the formula for the sum of the first $n$ perfect squares?

How do you derive the formula for the sum of the series when $S_n = \sum_{j=1}^n j^2$?


The relationship between $n$ and $S_n$ can be described by a polynomial in $n$. You are supposed to use finite differences to determine its degree and then use a calculator to derive the function. But how?
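Here is the finite-difference approach in code (my illustration; the helper finite_differences is hypothetical, not from the question):

def finite_differences(values, depth):
    # Build the table of successive difference rows.
    rows = [values]
    for _ in range(depth):
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

# S_n = 1^2 + 2^2 + ... + n^2 for n = 1..8
S = [sum(j * j for j in range(1, n + 1)) for n in range(1, 9)]
for row in finite_differences(S, 3):
    print(row)
# The third differences are constant (all 2), so S_n is a cubic in n;
# fitting a cubic through four of the points gives S_n = n(n+1)(2n+1)/6.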

calculus - series comparison test for $1/{n^{p}log(n)}$




It is really not obvious to me which values of $p$ make $\sum_2^\infty \frac{1}{n^{p}\log n}$ converge using only comparison tests (direct comparison test, limit comparison test).



Am I missing something ? I knew that the series converges for $p > 1$ and diverges for $p \leq 0$ by comparing it with $1/n^{p}$ but that's it.



Edit. I know it can be solved using integral test, cauchy condensation test, etc. but I am just wondering if there is a comparison to some other series that directly results in divergence of this one for $p \leq 1$ (or just $ p < 1$ is fine, too.)


Answer



For $p<1$ compare with $\sum 1/n^q$ where $p<q<1$, using the fact that for every $\epsilon>0$
$$
\lim_{n\to\infty}\frac{\log n}{n^\epsilon}=0.
$$
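To spell the comparison out (my elaboration, not part of the original answer): fix $q$ with $p<q<1$. By the limit above, $\log n\le n^{q-p}$ for all large $n$, hence
$$\frac{1}{n^p\log n}\ge\frac{1}{n^p\cdot n^{q-p}}=\frac{1}{n^q}$$
for such $n$, and since $\sum 1/n^q$ diverges for $q<1$, the series $\sum \frac{1}{n^p\log n}$ diverges by direct comparison.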


modular arithmetic - How to compute $3^9 pmod {10}$




The solution given was:



$3^9 = (27)^3 \equiv 7^3 = 49 \cdot 7 \equiv 9 \cdot 7 = 63 \equiv 3 \pmod{10}$



I understand up to $3^9 = (27)^3$, but after that I am lost. Can someone explain how to solve this and what is going on here?


Answer



The $27$ was replaced with a $7$ because $27 \equiv 7 \pmod{10}$. The same idea applies to the $49$ later, as $49 \equiv 9 \pmod{10}$, and likewise $63 \equiv 3 \pmod{10}$. This is why the values greater than $10$ are reduced at each step.
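As a quick numeric check of the chain above (my addition), Python's built-in three-argument pow does modular exponentiation directly:

print(pow(3, 9, 10))  # 3, matching the step-by-step reduction
print(3 ** 9 % 10)    # also 3: 3^9 = 19683, which ends in 3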


Tuesday, December 17, 2019

elementary set theory - cross partition proof

I have been having trouble with how to prove the cross partition result. From what I know, a partition must satisfy two properties:





  1. The union of the subsets that form the partition should be
    equal to U, where U is the universe set.

  2. The subsets that form
    the partition should be pairwise disjoint; the intersection of any
    two distinct subsets should be the empty set.



I have seen the following exercise related to cross partitions in a book (given there as an image): if $\{A_i\}$ and $\{B_j\}$ are two partitions of a set $S$, show that the collection of nonempty intersections $A_i \cap B_j$ (the cross partition) is also a partition of $S$.



I can assume that I have two (or more) partitions of $S$ that satisfy properties 1 and 2. If I form new subdivisions inside the $A_i$ and the $B_j$, these are still partitions, because I am not adding any new elements to any of the subsets. From this point I take the pairwise products of the $A_i$ and $B_j$, keeping only those pairs whose intersection is not the empty set. These intersections also form a partition, because I am not adding any new element that could create an overlap between the pieces I have already formed.



Is this proof correct?

Monday, December 16, 2019

I am stuck on Fermat's Little Theorem. I know how to apply it, but does it apply here: $15^{48} \bmod 53$?



I can't seem to figure out this problem. I can factor to reduce the number, but this is too time consuming. Isn't FLT supposed to help here?




Can someone provide clarification please?





Answer



Since the exponent $48$ is close to $52 = 53 - 1$, we can write



\begin{align}
15^{48} &\equiv 15^{52} \cdot 15^{-4} \pmod{53}\\
&\equiv 15^{-4} \pmod{53},
\end{align}

using Fermat's little theorem.



With the extended Euclidean algorithm, one can compute $15^{-1} = -7$, and so
\begin{align}
15^{-4} &\equiv (-7)^4 &\pmod{53}\\
&\equiv 49^2 &\pmod{53}\\
&\equiv (-4)^2 &\pmod{53}\\
&\equiv 16. &\pmod{53}
\end{align}
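For an independent numeric check of this answer (my addition; modular inverses via pow with exponent -1 require Python 3.8+):

print(pow(15, 48, 53))  # 16, confirming the computation above
print(pow(15, -1, 53))  # 46, i.e. -7 mod 53, the inverse used above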



abstract algebra - Prove that if $f,gin F[x]$ are non-constant polynomials, then $f(g)$ is a non-constant polynomial




For any polynomials $f$, $g$ $\in F[x]$ (where $F$ is a field), let $f(g)$ denote the polynomial obtained by substituting $g$ for the variable $x$ in $f$.
Suppose $f,g \in F[x]$ are non-constant polynomials (by non-constant, I mean they are of degree $> 0$. So, for example, $f(x) = 5$ is constant, while $g(x) = x^4 + 2x$ is not). I need to prove that $f(g)$ is also a non-constant polynomial.



It seems like an extremely intuitive thing, and my thoughts are that it should be shown using a degree argument.



I.e., if $f,g$ are non-constant polynomials, then $\deg(f)\geq 1$ and $\deg(g) \geq 1$. So, I was thinking that I could show that $\deg(f(g))\geq 1$ as well, but I am not sure what the actual mechanics of showing this should be.



Could anybody please let me know how to go about proving this? Thank you ahead of time for your time and patience!


Answer




Show that if $f$ is of degree $n$ and if $g$ is of degree $m$, then $f(g)$ is of degree $mn$. To this end, note that if $f = \lambda X^n + \cdots$ and if $g = \mu X^m+\cdots$ then the unique term of degree $mn$ in $f(g)$ is $\lambda \mu^n X^{mn}$, and this is nonzero since $\lambda\mu^n\neq 0$.
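A quick illustration of the hint (my example, not from the original answer): take $f = X^2+1$ and $g = X^3+X$, so $n=2$, $m=3$, $\lambda=\mu=1$. Then
$$f(g) = (X^3+X)^2 + 1 = X^6 + 2X^4 + X^2 + 1,$$
which has degree $mn = 6$ and leading coefficient $\lambda\mu^n = 1$, hence $f(g)$ is non-constant.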


elementary number theory - $3$ never divides $n^2+1$



Problem: Is it true that $3$ never divides $n^2+1$ for every positive integer $n$? Explain.



Explanation: If $n$ is odd, then $n^2+1$ is even. Hence $3$ never divides $n^2+1$, when $n$ is odd.




If $n$ is even, then $n^2+1$ is odd. So $3$ could divide $n^2+1$.



And that is where I am stuck. I try to plug in numbers for $n$ but I want a more general form of showing that $3$ can't divide $n^2+1$ when $n$ is even.


Answer



Instead of considering whether $n$ is even or odd, consider the remainder when $n$ is divided by $3$; as an example, if the remainder is $1$, we have $$n = 3k + 1 \implies n^2 + 1 = 9k^2 + 6k + 2$$



which is not divisible by $3$. There are two more cases.
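For completeness, here is a sketch of the remaining two cases (my addition, following the same pattern): if $n = 3k$ then
$$n^2 + 1 = 9k^2 + 1 = 3(3k^2) + 1,$$
and if $n = 3k+2$ then
$$n^2 + 1 = 9k^2 + 12k + 5 = 3(3k^2 + 4k + 1) + 2;$$
in both cases the remainder on division by $3$ is nonzero, so $3$ never divides $n^2+1$.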


real analysis - Proving that the exponential function is not uniformly continuous



I'm getting stuck on this proof to show that $\exp(x)$ is not uniformly continuous. My approach: find a way to make $\epsilon$ dependent on $\delta$ such that if you fix that $\delta$, the property would not hold for all $\epsilon\gt0$.



To do this I wrote out some things, but kept on getting stuck on equations;



We want to show that this does NOT hold for all $\epsilon\gt0$ : $|x-a|\lt\delta\implies |\exp(x)-\exp(a)|\lt\epsilon.$
We can write $$|\exp(x)-\exp(a)|=|\exp(x)(1-\exp(a-x))|$$ If we then take the $\log$ of this:
$$|\log(\exp(x)(1-\exp(a-x))|=|x+\log(1-\exp(a-x))|$$ With the $x$ returning, it starts to look like $|x-a|$ if we set $\log(1-\exp(a-x))=-a$. However, this equation only gets me more stuck.




In the end, I thought to be aiming for something like $\epsilon\geq\exp\delta$, so that if $0<\epsilon<\exp\delta$ the property doesn't hold and thus fails to hold for all $\epsilon>0$ which means the exponential function cannot be uniformly continuous.



How can I proceed to make this point? Is it right to find $\epsilon$ dependent on $\delta$ such that you can find $\epsilon>0$ for which the property doesn't hold, or is that a wrong approach?


Answer



For the function $e^x$, to disprove uniform continuity, any choice of $\epsilon > 0$ will work.


Thus, let $\epsilon=1$, and suppose for some $\delta > 0$, we have
$$|a-b| < \delta \implies |e^a-e^b| < 1$$
Leaving $b$ free, choose $\theta\in (0,\delta)$, and let $a=b+\theta$.
\begin{align*}
\text{Then}\;\;&a=b+\theta\\[4pt]
\implies\;&|a-b| < \delta\\[4pt]
\implies\;&|e^a-e^b| < 1\\[4pt]
\implies\;&e^a-e^b < 1\\[4pt]
\implies\;&(e^b)(e^{a-b}-1) < 1\\[4pt]
\implies\;&(e^b)(e^\theta-1) < 1\\[4pt]
\implies\;&e^b < \frac{1}{e^\theta-1}\\[4pt]
\end{align*}
Thus, we have
$$e^b < \frac{1}{e^\theta-1}$$

for all $b\in\mathbb{R}$, contradiction, since the function $e^x$ is not bounded above.


It follows that $e^x$ is not uniformly continuous.


As regards your question




    "can you make $\epsilon$ dependent on $\delta$?"



the answer is "no, definitely not".


To disprove uniform continuity, it's you who must prove the existence of a fixed (constant) $\epsilon > 0$ for which no $\delta > 0$ works.


Think of it as a game with two players, you, and an opponent.


You claim $e^x$ is not uniformly continuous; your opponent disagrees.


Thus, your opponent is claiming that no matter what $\epsilon > 0$ you choose, he or she can find a $\delta > 0$ such that
$$|a-b|<\delta \implies |e^a - e^b| < \epsilon$$


But note the order of selections.


Your opponent doesn't need to choose $\delta$ until after you've chosen $\epsilon$. Thus, the value of $\delta$ can depend on the declared value of $\epsilon$, but the value of $\epsilon$ can't depend on $\delta$, since it hasn't necessarily yet been declared.


Sunday, December 15, 2019

What does "maximum order elements to mod n" mean for a number n without primitive roots modulo n?




I apologize because probably this is trivial, but I do not understand the concept:




"maximum order elements to mod n for n".




This is the context: in the Wikipedia in the primitive roots modulo n page there is a table in the section "Table of primitive roots" (link) saying:





"The following is a list about maximum order elements to mod n for $n \le 36$,




And then the table also explains:




"maximum order elements to mod n (for ns with "*", the maximum order of n does not equal to the Euler totient function of n, so there are no primitive roots mod n, for other ns, they are primitive roots mod n)




Basically, I would like to understand the definition and how to calculate the maximum order elements to mod n for those numbers n that do not have primitive roots mod n.




For instance, according to the explanation in the Wikipedia, for $n=8$ the maximum order elements mod $8$ are $\{3,5,7\}$, and $n=8$ does not have primitive roots modulo $8$, but I do not understand how they are calculated.



UPDATE
As far as I can see, it would be as follows, but if somebody could confirm this, it would be very appreciated, because I do not see it clearly:



For instance:




$3^1 = 3 \equiv 3 \pmod8$




$3^2 = 9 \equiv 1 \pmod8$



$3^3 = 27 \equiv 3 \pmod8$



$3^4 = 81 \equiv 1 \pmod8$



$3^5 = 243 \equiv 3 \pmod8$



$3^6 = 729 \equiv 1 \pmod8$




$3^7 = 2187 \equiv 3 \pmod8$




So in this case the length of the cycle of the residues is $2$ (only $3,1$, then again $3,1$, etc.), and that is smaller than the totient function value $\varphi(8)=4 \gt 2$, so $3$ is not a primitive root modulo $8$. Is this correct?



If it is correct, then the only way to calculate the "maximum order elements to mod n" for a number $n$ without primitive roots modulo $n$ is as above, computing all the possible powers and deciding according to the results. Is that right?



Thank you!




UPDATE: the explanation in the answers worked like a charm. Here is the Python code (very brute force, but it works; be aware that the variable "value" plays the role of n):



from gmpy2 import gcd  # math.gcd from the standard library would work as well

def first_max_order_elem_mod_n(value):
    # Collect every unit i modulo value together with its multiplicative order.
    lo = []
    for i in range(1, value):
        if gcd(value, i) == 1:
            myexp = 1
            while (i ** myexp) % value != 1:
                myexp = myexp + 1
            lo.append([i, myexp])
    # Return the first unit whose order is maximal.
    bigger = 0
    current_pos = 0
    for k in lo:
        if k[1] > bigger:
            current_pos = k[0]
            bigger = k[1]
    return current_pos
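As a quick check of the code: first_max_order_elem_mod_n(8) returns 3, since the units modulo $8$ are $\{1,3,5,7\}$ with orders $1,2,2,2$, so $3$ is the first element of the maximal order $2$ (matching the table entry discussed above).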


Answer



In ${\mathbb{Z}/n}^{\times}$, look at the set of $k$, $1\le k \le n$, for which $(k,n) = 1$ (multiplicative units in $\mathbb{Z}/n$). Calculate their orders (the smallest positive integer $i$ s.t. $k^i = 1$). Denote by $o$ the largest $i$ you get from your order calculation for all units. Then the right table entry is $o$, the left those $k$ with order $o$. In your example case for $n=8$, for all units $k$ different from 1, $k^2 =1$ mod $n$; and the units other than 1 are $\{3,5,7\}$.


sequences and series - Calculate $lim_{ntoinfty}x_n$ when $x_{n+1} = frac{1}{4}x_n(5-x_n)$





Sequence $x_n$ such that $x_0 \in [0, 2]$ is defined by $x_{n+1} = \frac{1}{4}x_n(5-x_n)$. For what values of $x_0 \in [0, 2]$ does $x_n$ converges and to what limit?




For $x_0 = 0$, $\lim_{n\to\infty}x_n = 0$. I guess that in other cases $\lim_{n\to\infty}x_n = 1$. The sequence is bounded by $\max(2, \frac{25}{16})$ from AM-GM. How can I prove my assertion that $x_n$ converges to $1$?



I've tried to prove that for $x_0 \in (0, 1)$ sequence increases and for $x_0 \in (1, 2)$ decreases, but when proving first part I got stuck, because I don't know how to prove inductive step that $x_n < 1$ implies $\frac{x_n(5-x_n)}{4} < 1$


Answer



The function $f(x)=\frac {x(5-x)} 4$ maps $[0,2]$ into itself, is increasing there, and satisfies $f(x)-x=\frac{x(1-x)}{4}$. For $x_0\in[0,1]$: since $f$ maps $[0,1]$ into itself (note $f(1)=1$) and $f(x)\geq x$ on $[0,1]$, it follows by induction that $x_n \in [0,1]$ for all $n$ and that $x_{n+1} \geq x_n$. Hence $a\equiv \lim x_n$ exists, and since $a=\frac {a(5-a)} 4$ we get $a=0$ or $a=1$; as the sequence is increasing, $a=0$ if $x_0=0$ and $a=1$ otherwise. For $x_0\in(1,2]$: here $f(x)>f(1)=1$ and $f(x)<x$, so the sequence is decreasing and bounded below by $1$, and the same fixed-point equation forces $a=1$. So $x_n\to 0$ if $x_0=0$ and $x_n\to 1$ for every other $x_0\in[0,2]$.
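A quick numeric sanity check of this answer (my addition; plain Python, nothing assumed beyond the recurrence itself):

# Iterate x_{n+1} = x_n (5 - x_n) / 4 from several starting points in [0, 2].
def iterate(x0, n=60):
    x = x0
    for _ in range(n):
        x = x * (5 - x) / 4
    return x

for x0 in (0.0, 0.1, 1.0, 1.7, 2.0):
    print(x0, iterate(x0))  # 0.0 stays at 0; every other start tends to 1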


Simple Modular Arithmetic How to

Most efficient way to do this question?



$25^{37}\equiv\ ?\pmod{55}$



Attempt:




$25^2 \equiv 20 \pmod{55}$



Then I don't know the best way to continue... I know how to brute force it but I'm just curious what the best way is?
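The post ends here without an answer; one standard route (my sketch, worth checking independently) splits the modulus $55 = 5 \cdot 11$ with the Chinese remainder theorem. Clearly $25^{37} \equiv 0 \pmod 5$, while modulo $11$
$$25^{37} \equiv 3^{37} = (3^5)^7 \cdot 3^2 \equiv 1^7 \cdot 9 = 9 \pmod{11},$$
since $3^5 = 243 \equiv 1 \pmod{11}$. The unique residue modulo $55$ that is $0$ mod $5$ and $9$ mod $11$ is $20$, so $25^{37} \equiv 20 \pmod{55}$. In fact, since the mod-$5$ part is always $0$ and the mod-$11$ part has period $5$, the value of $25^k \bmod 55$ depends only on $k \bmod 5$ for $k \ge 1$, which is why the answer coincides with the $25^2 \equiv 20$ already computed ($37 \equiv 2 \pmod 5$).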

sequences and series - Prove $\lim\limits_{n \to \infty} \frac{\log (n!)}{n \sqrt[n]{\log 2 \cdot \log 3 \cdots \log n}}=1$




This is close to the ratio of arithmetic to geometric means for logarithms of natural numbers (here $\log$ is the natural logarithm):



$$\frac{\log 2+\log 3+\dots + \log n}{n \sqrt[n]{\log 2 \cdot \log 3 \cdots \log n}}=\frac{\log (n!)}{n \sqrt[n]{\log 2 \cdot \log 3 \cdots \log n}}$$



Numerically, the limit seems to approach $1$ from above; however, the convergence is slow enough that I have doubts. (The original post includes a plot of the ratio here.)



I do not know how to prove this limit, since the denominator doesn't have any nice closed form. I could use the Stirling approximation for the numerator, but what good would it do?





How can we prove the value of this limit (if it's even correct)?



$$\lim_{n \to \infty} \frac{\log (n!)}{n \sqrt[n]{\log 2 \cdot \log 3 \cdots \log n}}=1$$







Additionally, it's quite interesting to me that the sequence attains a maximum at $n=35$. (The original post includes a plot of this behaviour here.)




As an additional question, I would like to know how to prove this maximal value (at least the fact that the sequence has only one maximum, which we then could find by simple evaluation for small $n$).




If we consider another sequence (with proper means this time, since there's $n-1$ logarithms for the $n$th element):



$$\frac{\log (n!)}{(n-1) \sqrt[n-1]{\log 2 \cdot \log 3 \cdots \log n}}$$




It shows the same behaviour but with a maximum for $n=12$.






One idea is to take logarithm of the expression:



$$\log \left(\frac{\log (n!)}{n \sqrt[n]{\log 2 \cdot \log 3 \cdots \log n}} \right)=\log \left(\log (n!) \right)- \log n -\frac{1}{n} \left( \log (\log 2)+ \dots +\log (\log n) \right)$$



I suppose this is easier to work with though I still have no idea how to prove the limit.


Answer





  • Notice that if $f : [a, \infty) \to \mathbb{R}$ is increasing, then



    $$ \int_{m-1}^{n} f(x) \, dx \leq \sum_{k=m}^{n} f(k) \leq \int_{m}^{n+1} f(x) \, dx. $$


  • Applying the previous observation to $f(x) = \log x$, we have



    \begin{align*}
    \sum_{k=1}^{n} \log k
    &= \int_{1}^{n} \log x \, dx + \mathcal{O}(\log n) \\
    &= n \log n - n + \mathcal{O}(\log n)
    \end{align*}



    where the last line follows from integration by parts.


  • Similarly, applying the previous observation to $f(x) = \log\log x$ we have



    \begin{align*}
    \sum_{k=2}^{n} \log\log k
    &= \int_{2}^{n} \log\log x \, dx + \mathcal{O}(\log\log n) \\
    &= \left[ x \log\log x \right]_{x=2}^{x=n} - \int_{2}^{n} \frac{dx}{\log x} \\
    &= n \log\log n + \mathcal{O}\left(\frac{n}{\log n}\right)
    \end{align*}



    where the last line follows from L'Hospital's rule:



    $$ \frac{\int_{2}^{n} dx / \log x}{n / \log n} \sim \frac{\frac{1}{\log n}}{\frac{1}{\log n} - \frac{1}{\log^2 n}} \sim 1. $$


  • Therefore we have



    $$ \frac{\frac{1}{n}\sum_{k=1}^{n}\log k}{\exp\left(\frac{1}{n}\sum_{k=2}^{n}\log\log k \right)} = \frac{\log n - 1 + o(1)}{\exp(\log\log n + o(1))} = 1 + o(1)$$



    as desired. In fact, the computation shows that




    $$ \frac{\frac{1}{n}\sum_{k=1}^{n}\log k}{\exp\left(\frac{1}{n}\sum_{k=2}^{n}\log\log k \right)} = 1 + \mathcal{O}\left(\frac{1}{\log n}\right)$$



    and I believe that this bound is sharp up to a generic constant.
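A numeric illustration of the rate claimed above (my addition, not part of the original answer; uses math.lgamma for $\log(n!)$):

import math

def ratio(n):
    log_fact = math.lgamma(n + 1)  # log(n!)
    # (1/n) * sum of log(log k), i.e. the log of the n-th root of the product
    mean_loglog = sum(math.log(math.log(k)) for k in range(2, n + 1)) / n
    return (log_fact / n) / math.exp(mean_loglog)

for n in (10, 100, 1000, 10000, 100000):
    print(n, ratio(n))  # drifts toward 1 roughly like 1 + O(1/log n)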



Saturday, December 14, 2019

elementary number theory - Divisibility criteria for $7,11,13,17,19$



A number is divisible by $2$ if it ends in $0,2,4,6,8$. It is divisible by $3$ if the sum of its digits is divisible by $3$. It is divisible by $5$ if it ends in $0$ or $5$. These are simple criteria for divisibility.



I am interested in simple criteria for divisibility by $7,11,13,17,19$.


Answer



$(1)$



The formulae for $2,3,5,9,11$ can be derived from $\sum_{0\le r\le n}{a_r10^r}$




Observe that $\sum_{0\le r\le n}{a_r10^r}\equiv a_0\pmod 2$



$\sum_{0\le r\le n}{a_r10^r}\equiv a_0\pmod 5$



$\sum_{0\le r\le n}a_r10^r\equiv \sum_{0\le r\le n}a_r\pmod 3$ as $9\mid(10^r-1)$



$\sum_{0\le r\le n}a_r10^r\equiv \sum_{0\le r\le n}(-1)^ra_r\pmod {11}$ as $10^r\equiv(-1)^r\pmod{11}$



$\sum_{0\le r\le n}a_r10^r\equiv(a_0+a_2+a_4+\cdots)-(a_1+a_3+a_5+\cdots)\pmod{11}$




$(2)$



$N=\sum_{0\le r\le n}a_r10^r\equiv \sum_{0\le r\le m-1}a_r10^r\pmod {10^m}\equiv \sum_{0\le r\le m-1}a_r10^r\pmod {2^m}$ as $2^s\mid 10^s$ where integer $s\ge0$



This explains why $2^m\mid N\iff$ the number formed by the lowest $m$ digits of $N$ is divisible by $2^m$.



For example, $2524$ will be divisible by $2^2=4$ as $24$ is, but $2514$ will not be divisible by $2^2=4$ as $14$ is not.



Similarly for $5^m$




$(3)$



For any number $y$ co-prime with $10,$ we can have a reduction formula as follows:



If a number be $10a+b,$ we can find $u,v$ in integers such that $10u+y\cdot v=1$ (using Bézout's Identity)



So, $u(10a+b)+v\cdot y\cdot a=a(10u+y\cdot v)+u\cdot b=a+u\cdot b\implies 10a+b$ will be divisible by $y\iff y\mid(a+u\cdot b)$



For example if $y=7, $ we find $3\cdot7+(-2)10=1\implies u=-2,v=3$




So, $(a+u\cdot b)$ becomes $a-2b$



If $y=19,$ we find $2\cdot10+(-1)19=1\implies u=2\implies a+u\cdot b=a+2b$



We can always use the convergents of the continued fraction expansion to find $u,v$.



Nothing here is specific to base $10$; the same approach generalizes to any positive integer base.
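As an illustration of the reduction rule from $(3)$ (my sketch, not part of the original answer): for $y = 7$ the rule replaces $10a + b$ by $a - 2b$, which preserves divisibility by $7$ because $-2(10a+b) \equiv a - 2b \pmod 7$.

def divisible_by_7(n):
    # Repeatedly apply 10a + b -> |a - 2b| (u = -2 for y = 7); this keeps
    # divisibility by 7 invariant and shrinks n until a direct check is cheap.
    while n >= 70:
        a, b = divmod(n, 10)
        n = abs(a - 2 * b)
    return n % 7 == 0

print(divisible_by_7(695835))  # True: 695835 = 7 * 99405
print(divisible_by_7(695836))  # False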


Consider the Fibonacci sequence, give a proof by induction to show that $3 \mid f_{4n}$, for all $n \geq 1$

I have to show by mathematical induction that $3 \mid f_{4n}$, for all $n \geq 1$.



Base case: $f_{4 \cdot 1} = f_4 = 3$, which is divisible by $3$.


Inductive hypothesis (IH): assume $3 \mid f_{4k}$ for some $k \geq 1$.


Inductive step (IS): show $3 \mid f_{4(k+1)}$.


In order to show that $3 \mid f_{4(k+1)}$, I said that $f_{4k}$ will always be divisible by $3$ as long as $k \geq 1$, as assumed in the IH. Hence $4(k+1)$ is again a multiple of $4$, so $3 \mid f_{4(k+1)}$ should always be true. But I don't know how to show this mathematically, though.
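One way to finish the inductive step (my sketch, not from the original post) is to unroll the Fibonacci recurrence three times:
$$f_{4(k+1)} = f_{4k+4} = f_{4k+3} + f_{4k+2} = 2f_{4k+2} + f_{4k+1} = 3f_{4k+1} + 2f_{4k}.$$
Here $3f_{4k+1}$ is trivially divisible by $3$, and $2f_{4k}$ is divisible by $3$ by the inductive hypothesis, so $3 \mid f_{4(k+1)}$.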

analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...