Sunday, March 31, 2019

complex analysis - Evaluate the following: $\int_{0}^{\infty}\frac{\sin(ax)}{\sinh(x)}dx$ and $\int_{0}^{\infty}\frac{x\cos(ax)}{\sinh(x)}dx$ using rectangular contour




A problem I'm currently working on asks to evaluate the following integrals:



$\int_{0}^{\infty}\frac{\sin(ax)}{\sinh(x)}dx\;$ and $ \ \int_{0}^{\infty}\frac{x\cos(ax)}{\sinh(x)}dx$



I've seen previous questions around here that show how to do the evaluation via contour integration
(Evaluating $\int_{0}^{\infty}\frac{\sin(ax)}{\sinh(x)}dx$ with a rectangular contour and show that $\int_{0}^{\infty}\frac{x\cos ax}{\sinh x}dx=\frac{\pi^2}{4} \operatorname{sech}^2 \left(\frac{a\pi}{2}\right) $)
but this particular problem requires that I use the following contour:



[Figure: the prescribed rectangular contour]




I tried applying the previous two methods with this contour, but my final answer keeps coming out in a completely different form, and I am unable to prove that it agrees with the answers obtained by the methods linked above.



Any help would be greatly appreciated


Answer



For any $n\in\mathbb{N}^+$ and $a>0$ we have $\int_{0}^{+\infty}\sin(ax)e^{-nx}\,dx = \frac{a}{a^2+n^2}$ by integration by parts or by writing $\sin(ax)$ as $\operatorname{Im}\exp(iax)$. By expanding $\frac{1}{\sinh x}=\frac{2e^{-x}}{1-e^{-2x}}$ as a geometric series in $e^{-x}$ we get



$$\int_{0}^{+\infty}\frac{\sin(ax)}{\sinh x}\,dx = 2a\sum_{n\geq 0}\frac{1}{a^2+(2n+1)^2}. $$
On the other hand, by applying $\frac{d}{dx}\log(\cdot)$ to both sides of
$$ \cosh\left(\frac{\pi x}{2}\right)=\prod_{n\geq 0}\left(1+\frac{x^2}{(2n+1)^2}\right) $$
(Weierstrass product for the hyperbolic cosine function) we get:

$$ \frac{\pi}{2}\,\tanh\left(\frac{\pi x}{2}\right)= 2x\sum_{n\geq 0}\frac{1}{x^2+(2n+1)^2}$$
so:
$$ \boxed{\forall a\in\mathbb{R},\qquad \int_{0}^{+\infty}\frac{\sin(ax)}{\sinh x}\,dx =\frac{\pi}{2}\,\tanh\left(\frac{\pi a}{2}\right).}\tag{1} $$
In a similar way, from
$$ \int_{0}^{+\infty} x\cos(ax)e^{-nx}\,dx = \frac{n^2-a^2}{(n^2+a^2)^2}$$
by applying $\frac{d^2}{dx^2}\log(\cdot)$ to both sides of the previous Weierstrass product we get:
$$ \boxed{\forall a\in\mathbb{R},\qquad \int_{0}^{+\infty}\frac{x\cos(ax)}{\sinh(x)}\,dx = \frac{\pi^2}{4}\,\operatorname{sech}\left(\frac{\pi a}{2}\right)^2.}\tag{2}$$
You do not really need contour integration; it is already "encoded" in the mentioned Weierstrass product. See also the Herglotz trick.
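Both boxed formulas are easy to confirm numerically; here is a minimal sketch, assuming the mpmath library and an arbitrary test value $a=1.7$:

from mpmath import mp, quad, sin, cos, sinh, tanh, sech, pi, inf

mp.dps = 30
a = 1.7
print(quad(lambda x: sin(a*x)/sinh(x), [0, inf]),
      pi/2 * tanh(pi*a/2))                      # (1): both ~ 1.5558
print(quad(lambda x: x*cos(a*x)/sinh(x), [0, inf]),
      pi**2/4 * sech(pi*a/2)**2)                # (2): both ~ 0.0468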


calculus - Show that the sequence $\lim_{n\rightarrow\infty} \log(n)$ does not converge




In order to show that the sequence $\log(n)$ does not converge, i.e. that $\lim_{n\rightarrow\infty} \log(n)$ does not exist, I have to show:



for every choice of $L$ there exists $\epsilon > 0$ s.t. for each $N \geq 0$ there is some $n \geq N$ with $$\left|a_{n} - L \right| > \epsilon.$$



Here is where my issue lies. I'm trying to find a relationship to apply. I asked for some help from my prof and he suggested $$ \log(n) > M \implies n > e^{M} $$



Now what comes to my mind is to somehow make this $M$ my $\epsilon$, but I don't see how to carry through the comparison.


Answer



It's enough to push the values of the sequence off to infinity; this is the purpose of the hint.




That is: given any positive $M$, you ought to be able to find an $N$ large enough such that for any $n>N$
$$
\log(n)>M
$$
Your professor told you how to find this $N$, namely
$$
\log(n)>M\iff n>e^M
$$
so picking $N=\lceil e^M \rceil$ will do: any $n>N$ then satisfies $\log(n)>M$. Since the choice of $M$ was arbitrary, you are done.



Official name of "sized integers", a kind of number where $00$ is not equal to $0$?

Hierarchical identifiers, labels and indexes... All can use digit strings, distinguishing $0$ from $00$ and $1$ from $001$, while preserving all other numeric interpretations, such as order ($002>001$) and freedom of representation (some other radix).


To illustrate, consider a typical labeling system: Geohashes are positive integers with a representation where the number of digits makes a difference. For example, the Geohash 01 identifies a cell a little below the Ross Sea covering ~115000 km², while 0001 is a much smaller cell, covering ~3.5 km².


Question: what is the name of this class of numeric representations, where the number of digits (including leading zeros) makes a difference?


How about sized integers; is that a good name? And are there official mathematical naming rules for when no official name exists for a mathematical structure?



NOTE



I am looking for the mathematical name... but if there is none, we can use other sources.


In computing there is the name "fixed-width integers" (FWInt), as in "fixed-width integers of 64 bits", where "x bits" names a subclass. So, as the examples below show, we need a name for the union of many subclasses of FWInt (e.g. the union of FWInt of 1 bit, FWInt of 2 bits, FWInt of 3 bits, ..., FWInt of 8 bits).


Integers and their representations (encodings) are used extensively in computing. The usual foundation for implementing long integers is arbitrary-precision integers (e.g. BigInt is a primitive JavaScript datatype). So a datatype that is an integer with controlled size could be named sized BigInt.



Intuitive definitions and examples


It is a finite set (or must it be a sequence?) of numeric representations... Limiting the examples to 8 bits:



  • Example with binary representations: $X_1 = \{0, 1\}$   $X_2 = \{0, 00, 01, 1, 10, 11\}$   ...   $X_k$   ...   $X_8 = \{0, 00, 000, 0000, \ldots, 00000000, 00000001, \ldots, 11111111\}$




  • The same set $X_8$ without some (non-compatible) items, in radix 4 representation: $Y_8 = \{0, 00, 000, 0000, 0001, 0002, 0003, 001, 0010, 0011 \ldots, 3333\}$




Ordering the illustrated elements... Well, order is arbitrary for a set; here it only serves to highlight "same prefix" grouping (the hierarchy) in the sequence, which is important in applications:


(size,value)   Binary representation   Radix-4 representation
(1,0)          0
(2,0)          00                      0
(3,0)          000
(4,0)          0000                    00
(5,0)          00000
(6,0)          000000                  000
(7,0)          0000000
(8,0)          00000000                0000
(8,1)          00000001                0001
(7,1)          0000001
(8,2)          00000010                0002
(8,3)          00000011                0003
(6,1)          000001                  001
(7,2)          0000010
(8,4)          00000100                0010
(8,5)          00000101                0011
...            ...                     ...


To sort a group: fewer digits first; when the numbers of digits are equal, use the natural-number order. The same holds for any other radix. The cell identifiers of the Geohash global grid, for example, use radix 32 as their "standard representation".


Spatial indexes based on Hilbert curves adopt radix 4, expressing the spatial hierarchy (recursive splitting into four regions) in the radix-4 digits. The cell-numbering system of S2 Geometry, among many others, uses this.


Trying a formal definition


Each example is a set $X_k$, so we can suppose that we need the name of a class of sets indexed by $k$. The table also illustrates that the elements of $X_k$ in this (to-be-named) class of sets are ordered pairs $(l,n)$ of length $l$ and numeric value $n$.


Let's check some properties:



  • Each element of this set can be mapped to a size (bits or number of digits) and a natural number (value).   PS: the preferred term for the "size of the string" is length (or bit length).





  • The maximum bit length of an element of $X_k$ is the finite bound $k$. In other words, using the range set $L_k=\{1, 2, \ldots, k\}$, we can say that every element of $X_k$ has a bit length $l \in L_k$, and there are elements for each $l$.




  • The bit length of a natural number $n>0$ is given by $bitLength(n)=\lfloor \log_2(n)\rfloor + 1$. The bit length of zero is taken to be one.



Putting it all together, for each finite $k$ the set $X_k$ is:


$$X_k = \{\, (l,n) ~|~ l,n \in \mathbb{N} ~\land~ bitLength(n) \le l \le k \,\}$$


There is also a function $toString(l,n)$ that uniquely converts the pair $(l,n)$ into the bit string of the binary representation of $n$, padded with leading zeros to the length $l$.


Two elements $x$ and $y$ are the same when $toString(x)=toString(y)$ or when $l$ and $n$ are equal: $$x=y ~ \iff ~ l_x=l_y \land n_x=n_y$$


And, similarly, one could declare the element $x$ greater than $y$ by some criterion based on $l$ and $n$... but this is left free: we need not impose an order, and no particular comparison criterion is canonical.
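To make the intended object concrete, here is a minimal Python sketch of the pair representation and the $toString$ map described above (the names are illustrative, nothing here is standard):

def bit_length(n):                       # bitLength(0) is taken to be 1
    return max(n.bit_length(), 1)

def to_string(l, n):
    """Binary representation of n, padded with leading zeros to length l."""
    assert bit_length(n) <= l            # the defining condition of X_k
    return format(n, "b").zfill(l)

# Enumerate X_3: all pairs (l, n) with bitLength(n) <= l <= 3
X3 = [(l, n) for l in range(1, 4) for n in range(2**l)]
print([to_string(l, n) for (l, n) in X3])
# ['0', '1', '00', '01', '10', '11', '000', '001', ..., '111']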




I can't name this kind of set, but now (after editing) we have a better definition of the object, $X_k$, that needs a name.

number theory - How to prove that $4^n-3n-1$ is divisible by 9?



How can I prove that $4^n-3n-1$ is divisible by $9$? I tried dividing the expression by $9$ and seeing if the terms cancelled in any predictable way but I still cannot prove it. Maybe there is a clever solution but so far I have been unable to spot it.


Answer



Call $a_n=4^n-3n-1$. Then $$a_{n+1}=4^{n+1}-3n-4=4(4^n-3n-1)+9n$$ so two consecutive terms differ by a multiple of $9$. Since $a_0=0$ is divisible by $9$, all of them are.
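The two facts used here can be checked by brute force; a minimal sketch:

for n in range(200):
    a_n = 4**n - 3*n - 1
    a_next = 4**(n + 1) - 3*(n + 1) - 1
    assert a_next - 4*a_n == 9*n     # consecutive terms differ by a multiple of 9
    assert a_n % 9 == 0              # hence 9 | a_n, by induction from a_0 = 0
print("ok")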


Saturday, March 30, 2019

Choosing two numbers $a,b,$ such that the Euclidean algorithm takes 10 steps

The question is to find $2$ integers $a$,$b$ $\in \mathbb{Z}$ for which when applying the Euclidean Algorithm for finding the $\gcd \left(a,b\right)$ precisely $10$ steps are required.
This is what I have done:
Let $\left(a,b\right)$ = $\left(427,264\right)$
The $10$ steps for the $\gcd \left(427,264 \right)$ are as follows:



$427=264 \cdot 1+163$



$264=163\cdot1+101 $




$163=101\cdot1+62 $



$101=62\cdot1+39 $



$62=39\cdot1+23 $



$39=23\cdot1+16 $



$23=16\cdot1+7 $




$16=7\cdot2+2 $



$7=2\cdot3+1 $



$2=1\cdot2+0$



I just want to know whether what I have done is right, or, if possible, to be shown where I went wrong.
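As a quick programmatic check (an editorial sketch): counting division steps confirms the ten steps, and the same count is produced by the consecutive Fibonacci numbers $144$ and $89$, which by Lamé's theorem form the smallest such pair.

def euclid_steps(a, b):
    steps = 0
    while b:
        a, b = b, a % b     # one division step of the Euclidean algorithm
        steps += 1
    return steps

print(euclid_steps(427, 264), euclid_steps(144, 89))   # 10 10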

real analysis - Limit using Poisson distribution





Show using the Poisson distribution that



$$\lim_{n \to +\infty} e^{-n} \sum_{k=1}^{n}\frac{n^k}{k!} = \frac {1}{2}$$


Answer



By the definition of the Poisson distribution, if in a given interval the expected number of occurrences of some event is $\lambda$, then the probability that exactly $k$ such events happen is

$$
\frac {\lambda^k e^{-\lambda}}{k!}.
$$
Let $\lambda = n$. Then the probability that the Poisson variable $X_n$ with parameter $\lambda$ takes a value between $0$ and $n$ is
$$
\mathbb P(X_n \le n) = e^{-n} \sum_{k=0}^n \frac{n^k}{k!}.
$$
If $Y_i \sim \mathrm{Poi}(1)$ and the random variables $Y_i$ are independent, then $\sum\limits_{i=1}^n Y_i \sim \mathrm{Poi}(n) \sim X_n$, hence the probability we are looking for is actually
$$
\mathbb P\left( \frac{Y_1 + \dots + Y_n - n}{\sqrt n} \le 0 \right) = \mathbb P( Y_1 + \dots + Y_n \le n) = \mathbb P(X_n \le n).

$$
By the central limit theorem, the variable $\frac {Y_1 + \dots + Y_n - n}{\sqrt n}$ converges in distribution to the Gaussian distribution $\mathscr N(0, 1)$. The point is that since the Gaussian has mean $0$ and we only ask when it is less than or equal to $0$, the variance doesn't matter; the probability is $\frac 12$. Therefore,
$$
\lim_{n \to \infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!} = \lim_{n \to \infty} \mathbb P(X_n \le n) = \lim_{n \to \infty} \mathbb P \left( \frac{Y_1 + \dots + Y_n - n}{\sqrt n} \le 0 \right) = \mathbb P(\mathscr N(0, 1) \le 0) = \frac 12.
$$



Hope that helps,
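A numerical illustration of this convergence (a sketch; the running product avoids the overflow of $n^k$ and $k!$ for moderate $n$):

import math

def poisson_cdf_at_n(n):
    term = math.exp(-n)              # k = 0 term: e^{-n} n^0/0!
    total = term
    for k in range(1, n + 1):
        term *= n / k                # e^{-n} n^k / k!
        total += term
    return total

for n in (10, 100, 500):
    print(n, poisson_cdf_at_n(n))    # 0.583, 0.527, 0.512 -> 1/2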


Friday, March 29, 2019

integration - What is the simplest technique to evaluate the following definite triple integral?


Consider the following definite triple integral:


$$ \int_0^\pi \int_0^\pi \int_0^\pi \frac{x\sin x \cos^4y \sin^3z}{1 + \cos^2x} ~dx~dy~dz $$



According to Wolfram Alpha, this evaluates to $\frac{\pi^3}{8}$, but I have no idea how to obtain this result. The indefinite integral $$ \int \frac{x \sin x}{1 + \cos^2 x}~dx $$ appears not to be expressible in terms of elementary functions, so I am at a loss as to what techniques might be used. For context, this is from a past year's vector calculus preliminary exam at my graduate school. While I'm sure there are advanced integration techniques that work here, I'm particularly interested in elementary techniques, since I don't think something like residue methods would be considered prerequisite knowledge for this exam.


Answer



First off, note that the integrals w.r.t. $y$ and $z$ are quite trivial to evaluate. Then, consider $x\mapsto\pi-x$, since trig functions are symmetric about $\pi/2$.


$$I=\int_0^\pi\frac{x\sin(x)}{1+\cos^2(x)}~\mathrm dx=\int_0^\pi\frac{(\pi-x)\sin(x)}{1+\cos^2(x)}~\mathrm dx$$


Add these together and apply $\cos(x)\mapsto x$.


$$\begin{align}\frac2\pi I&=\int_0^\pi\frac{\sin(x)}{1+\cos^2(x)}~\mathrm dx\\&=\int_{-1}^1\frac1{1+x^2}~\mathrm dx\\&=\arctan(1)-\arctan(-1)\\&=\frac\pi2\end{align}\\\implies I=\frac{\pi^2}4$$
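For completeness, the trivial $y$ and $z$ factors and the product can be confirmed numerically; a sketch assuming SciPy:

import numpy as np
from scipy.integrate import quad

Ix, _ = quad(lambda x: x*np.sin(x)/(1 + np.cos(x)**2), 0, np.pi)
Iy, _ = quad(lambda y: np.cos(y)**4, 0, np.pi)       # = 3*pi/8
Iz, _ = quad(lambda z: np.sin(z)**3, 0, np.pi)       # = 4/3
print(Ix, np.pi**2/4)              # ~2.4674 each
print(Ix*Iy*Iz, np.pi**3/8)        # ~3.8758 each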


elementary number theory - My formula for sum of consecutive squares series?

I stumbled upon a specific family of identities, where the sum of squares of consecutive integers equals the sum of squares of the continuation of those consecutive integers.




For example, the first identity is: $3^2 + 4^2 = 5^2$.



The second results in: $10^2 + 11^2 + 12^2 = 13^2 + 14^2$.



The third: $21^2 + 22^2 + 23^2 + 24^2 = 25^2 + 26^2 + 27^2$, etc.



The formula for the series is:



[Formula image; from the examples, the $k$-th identity reads $\sum_{i=0}^{k}\left(k(2k+1)+i\right)^2=\sum_{i=1}^{k}\left(k(2k+1)+k+i\right)^2$.]




By using $1^2 + 2^2 + \ldots + n^2 = \frac{n(n+1)(2n+1)}{6}$ it is easy to prove that both sides are the same.



Have I found something new? Useful?



I have a feeling that this is the only series with this specific property (consecutive sum of squares equal continuation of consecutive sum of squares), but I am unsure how to prove that.



Any comments would be appreciated.
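Pending a proof of uniqueness, the pattern itself is easy to check by brute force: the $k$-th identity starts at $k(2k+1)$, with $k+1$ squares on the left and $k$ on the right (a quick sketch):

for k in range(1, 10):
    s = k*(2*k + 1)                            # 3, 10, 21, ...
    lhs = sum(i*i for i in range(s, s + k + 1))
    rhs = sum(i*i for i in range(s + k + 1, s + 2*k + 1))
    assert lhs == rhs, k
print("ok")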

induction - Prove $4^{n} -1$ is divisible by $3$ for all $n\geq1$



I'm doing some homework and cant seem to get my head around this inductive proof.



So far I have got:


Atom case: $4^1 -1 = 3$, so proved for basic case.


Closure:


Assume $4^k - 1$ is divisible by $3$, Show that $4^{k+1} -1$ is divisible by $3$.


So: $4^{k+1}-1 = 4^k\cdot4 -1$, but this is where I get stuck; I'm unsure where to go from here.


Any help much appreciated!


Thanks


Answer



Hint $ $ To induct multiply the first two congruences below (by the Congruence Product Rule)


$$\begin{align} \bmod 3\!:\quad\ \ 4&\equiv 1\\[.3em] 4^{\large n}&\equiv 1\ \ \ {\rm i.e.}\ \ P(n)\\ \overset{\rm multiply}\Longrightarrow\ 4^{\large n+1} &\equiv 1\ \ \ {\rm i.e.}\ \ P(n\!+\!1) \end{align}$$



Remark $\ $ The proof is a special case of the inductive proof of the Congruence Power Rule, which is the straightforward induction extension of the Product Rule.


If congruences are unfamiliar then you can instead use the Product Rule in Divisibility form:


$$\begin{align} {\rm mod}\,\ m\!:\, A\equiv a,\, B\equiv b&\ \ \,\Longrightarrow\,\ \ AB\equiv ab\qquad\text{Congruence Product Rule}\\[3pt] m\mid A-a,\ B-b&\,\Rightarrow\, m\mid AB-ab\qquad\text{Divisibility Product Rule}\\[4pt] {\bf Proof}\quad (A-a)B+a(B&-b)\, = AB-ab\end{align}$$


Then the original proof becomes $\,\ 3\,\mid\, \underbrace{4-1}_{\large A\,-\,a},\, \underbrace{4^n-1}_{\large B\,-\,b}\,\Rightarrow\, 3\,\mid\, \underbrace{4\cdot 4^{\large n}-1\cdot 1}_{\large AB\,-\,ab} = 4^{\large n+1}-1$


algebra precalculus - Why is $\frac{1}{\frac{1}{0}}$ undefined?




Is the fraction



$$\frac{1}{\frac{1}{0}}$$



undefined?



I know that division by zero is usually prohibited, but since dividing a number by a fraction yields the same result as multiplying the number by the fraction's reciprocal, you could argue that



$$\frac{1}{\frac{1}{0}} = (1)\left(\frac{0}{1}\right) = 0$$




Is that manipulation permissible in this case? Why or why not?


Answer



Another way to think about this is order of operations:



$$
\frac{1}{\frac{1}{0}}=1/(1/0)
$$



I always compute what's inside the parentheses first, which gives me undefined, and I have to stop there.


sequences and series - How to find the area. Linked with another question.








In this question we discussed why the fake proof is wrong.



But, what about the area?



Does the process converge to the same area as the circle ($\frac{\pi}{4}$)?



What's the area when the process repeats infinitely?



How can we prove the result using calculus and limits?




Pi is equal to 4

calculus - How do I derive $1 + 4 + 9 + \cdots + n^2 = \frac{n (n + 1) (2n + 1)}{6}$











I am introducing my daughter to calculus/integration by approximating the area under $y = x^2$ by summing small rectangles below the curve.



This is very intuitive and I think she understands the concept however what I need now is an intuitive way to arrive at $\frac{n (n + 1) (2n + 1)} 6$ when I start from $1 + 4 + 9 + \cdots + n^2$.



In other words, just how did the first ancient mathematician come up with this formula? What were the first steps leading to this equation? That is what I am interested in, not the actual proof (that would be the second step).


Answer



Same as you can prove $1+2+\dots+n = \frac{n(n+1)}{2}$ by




*oooo
**ooo
***oo
****o


you can prove $\frac{n (n + 1) (2n + 1)} 6$ by building a box out of 6 pyramids:



[Figures: six step-pyramids of $1^2+2^2+\cdots+n^2$ unit cubes assembled into a box]



Sorry the diagram is not great (someone can edit if they know how to make a nicer one). If you just build 6 pyramids you can easily make the $n \times (n+1) \times (2n+1)$ box out of them.




  • make 6 pyramids (1 pyramid = $1 + 2^2 + 3^2 + 4^2 + ...$ blocks)

  • try to build a box out of them

  • measure the lengths and count how many pyramids you used; that gives you the formula




[Figure: the pyramids, built from glued unit cubes]
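The counting identity that the box encodes can also be spot-checked numerically (a minimal sketch):

for n in range(1, 200):
    assert 6*sum(k*k for k in range(1, n + 1)) == n*(n + 1)*(2*n + 1)
print("ok")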


Thursday, March 28, 2019

number theory - Substitutions in Modular Arithmetic




I've just learned modular arithmetic today, and am really struggling to understand a certain theorem.



The theorem given to us states the following:



Let m $\in \mathbb{N}$. For any integers a, b, c, and d, if $a \equiv b \pmod m$ and $c \equiv d \pmod m$ then,




  1. $a+c \equiv b+d \pmod m$

  2. $a-c \equiv b-d \pmod m$

  3. $ac \equiv bd \pmod m$




In the next section, the notes state the following: "We can use properties of congruence to prove the (familiar) rule that an
integer is divisible by 3 if and only if the sum of its decimal digits is divisible
by 3. The key is to observe that $10 \equiv 1 \pmod 3$ and so by Theorem 5.10.3 [theorem stated above]
you can change 10 to 1 wherever it occurs. Remember that $3|n$ if and only
if $n \equiv 0 \pmod 3$."



Next, it goes through the proof it was talking about at the beginning of the first quote:




Suppose $n=d_k \cdot b^k + d_{k-1}\cdot b^{k-1} + \dots + d_1\cdot b + d_0$, with base $b=10$, where $d_k, d_{k-1},\dots, d_0$ are the digits of $n$. Also assume that $3|n$. We now have the following:



\begin{align}
n \equiv 0 \pmod 3 &\iff d_k\cdot 10^k + d_{k-1}\cdot 10^{k-1} + \dots + d_1\cdot 10 + d_0 \equiv 0 \pmod 3\\
&\iff d_k \cdot 1^k + d_{k-1}\cdot 1^{k-1} + \dots + d_1 \cdot 1 + d_0 \equiv 0 \pmod 3
\end{align}

since $10 \equiv 1 \pmod 3$.



I don't quite understand how any parts of the theorem stated above allows for substitution.




Thanks for any help.


Answer



We are not substituting anything. In an intuitive manner (though not completely rigorous), you can think of numbers congruent in a certain modulo as being equal. Hence since $10\equiv 1$ modulo $3$, we have
$$ d_k10^k+d_{k-1}10^{k-1}+\cdots+d_010^0= d_k1^k+\cdots+d_01^0=d_k+\cdots+d_0 \pmod 3.$$
Note that the only difference that occurred is that we changed all the $10$s which occurred in the expression to $1$s [to emphasize the thinking that $10$ and $1$ are really the same element, I've used $=$ in place of $\equiv$]. The reason we do this is because they are congruent mod $3$.



If you want a more formal explanation, note that regardless of $x$ and $n$, if $x=a$, then $ x^2=x\cdot x=a\cdot a=a^2$ modulo $n$ (this is part 3 of your theorem). By induction it is also true that $x^k=a^k$. Hence we can take linear combinations (this is parts 1 and 2 of your theorem) of $(1,x,x^2,\ldots,x^k)$ and get that the result is congruent to the same linear combination of $a$s. In other words, if $p$ is a polynomial and $x\equiv a$ modulo $n$, then $p(x)\equiv p(a)$ modulo $n$ as well. Now, apply this to your problem with $x=10$, $a=1$ and $n=3$. Do you see how everything works out?
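To see the conclusion in action, a small numerical check that a number and its digit sum always agree modulo $3$:

for n in (12345, 98765, 31415926):
    digit_sum = sum(int(d) for d in str(n))
    print(n % 3, digit_sum % 3)      # the two columns agree in every case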


limits - Why does $\lim\limits_{n\to\infty}n\sin\frac{\theta}{n}=\theta$


Questions:





  1. Why do we have$$\lim\limits_{n\to\infty}n\sin\dfrac {\theta}n=\theta\tag1$$

  2. How do we prove $(1)$?



I started off with the well known limit:$$\lim\limits_{\theta\to 0}\dfrac {\sin\theta}{\theta}=1\tag2$$

And substituted $\theta:=\dfrac \theta n$ into $(2)$ to get$$\lim\limits_{\frac \theta n\to 0}\dfrac {\sin\dfrac \theta n}{\dfrac \theta n}=\lim\limits_{\frac \theta n\to0}\dfrac {n\sin\frac \theta n}{\theta}\tag3$$
However, after that I'm not sure what to do. Apparently the limit in $(3)$ equals $1$, so that $\lim\limits_{n\to\infty} n\sin\frac \theta n=\theta$.

calculus - How to evaluate limits using Stirling's formula $\lim\limits_{n\to \infty}\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$



Using Stirling's formula, $\lim\limits_{n\to \infty}\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$, evaluate the following limits.




a) $\lim\limits_{n\to \infty}\frac{n!e^n}{n^{n+1/2}}$.



b) $\lim\limits_{n\to \infty}\frac{(2n)!}{e^{-2n}(2n)^{2n}\sqrt{n}}$.



c) $\lim\limits_{n\to \infty}\frac{(2n)!\sqrt{n}}{n!^24^{n}}$.



d) $\lim\limits_{n\to \infty}\frac{\sqrt[n]{n!}}{n}$.



For a), $\frac{n!e^n}{n^{n+1/2}}=\sqrt{2\pi}\cdot\frac{n!}{n^{n} e^{-n} \sqrt{2\pi n}}$. Therefore the limit is $\sqrt{2\pi}$.




For b) and c), I don't know how we can modify $(2n)!$



For d), $\frac{\sqrt[n]{n!}}{n}=\frac{\sqrt[n]{n}\sqrt[n]{(n-1)!}}{n}$. But then I don't know how to proceed?



Could someone help?


Answer



First of all, you need to state Stirling's theorem in its proper form: $\lim\limits_{n\to \infty}\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$.

By multiplying by $\sqrt{2\pi}$, you can restate this as $\lim\limits_{n\to \infty}\frac{n!}{n^ne^{-n}\sqrt{n}}=\sqrt{2\pi}$.

To answer (b), replace $n$ by $2n$ above to get $\sqrt{2\pi}=\lim\limits_{n\to \infty}\frac{(2n)!}{(2n)^{2n}e^{-2n}\sqrt{2n}}$, or, multiplying by $\sqrt{2}$, $2\sqrt{\pi}=\lim\limits_{n\to \infty}\frac{(2n)!}{(2n)^{2n}e^{-2n}\sqrt{n}}$.

Combining the results for $n!$ and $(2n)!$, which is a standard technique to estimate $\binom{2n}{n}$, and using $f(n) \approx g(n)$ to mean $\lim_{n \to \infty} \frac{f(n)}{g(n)}=1$, we have $n!\approx n^ne^{-n}\sqrt{2\pi n}$ and $(2n)!\approx (2n)^{2n}e^{-2n}\,2\sqrt{\pi n}$, so that

$$\begin{align}
\frac{(2n)!}{n!^2}
&\approx \frac{(2n)^{2n}e^{-2n}2\sqrt{\pi n}}{(n^ne^{-n}\sqrt{2\pi n})^2}\\
&= \frac{2^{2n}n^{2n}e^{-2n}2\sqrt{\pi n}}{n^{2n}e^{-2n}2\pi n}\\
&= \frac{4^{n}}{\sqrt{\pi n}},
\end{align}$$

that is, $\frac{(2n)!\sqrt{\pi n}}{n!^24^{n}}\approx 1$, so that (c) follows:

$$\lim\limits_{n\to \infty}\frac{(2n)!\sqrt{n}}{n!^24^{n}}=\frac1{\sqrt{\pi}}.$$

For the last one, since $\lim\limits_{n\to \infty}\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$, taking the $n$-th root this becomes

$$\begin{align}
1&=\lim\limits_{n\to \infty}\left(\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}\right)^{1/n}\\
&=\lim\limits_{n\to \infty}\left(\frac{n!}{n^ne^{-n}}\right)^{1/n}\frac1{\sqrt{2\pi n}^{1/n}}\\
&=\lim\limits_{n\to \infty}\frac{n!^{1/n}\,e}{n}
\qquad\text{since } n^{1/n}\to 1 \text{ and } c^{1/n}\to 1,
\end{align}$$

so

$$\lim\limits_{n\to \infty}\frac{n!^{1/n}}{n} = \frac1e.$$
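All four limits can be checked numerically in arbitrary precision; a sketch assuming mpmath (note the convergence in (d) is slow):

from mpmath import mp, factorial, sqrt, pi, e

mp.dps = 30
n = 500
print(factorial(n)*e**n / n**(n + mp.mpf(1)/2))               # (a) -> sqrt(2 pi) ~ 2.5066
print(factorial(2*n) / (e**(-2*n) * (2*n)**(2*n) * sqrt(n)))  # (b) -> 2 sqrt(pi) ~ 3.5449
print(factorial(2*n)*sqrt(n) / (factorial(n)**2 * 4**n))      # (c) -> 1/sqrt(pi) ~ 0.5642
print(factorial(n)**(mp.mpf(1)/n) / n)                        # (d) -> 1/e ~ 0.3679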


calculus - Closed form for $\int_0^1 e^{\frac{1}{\ln(x)}}dx$?



I want to evaluate and find a closed form for this definite integral:$$\int_0^1 e^{\frac{1}{\ln(x)}}dx.$$



I don't know where to start. I've tried taking the natural logarithm of the integral, substitution and expressing the integrand in another way but they haven't led anywhere. The approximation should be between $.2$ and $.3,$ and is probably a transcendental number.



Thanks.



Answer



Let's verify Robert Israel's find. Observe that



$$I:=\int_0^1\exp\frac{1}{\ln x}dx=\int_0^\infty\frac{1}{y^2}\exp -(y+y^{-1})dy=\frac12\int_0^\infty(1+1/y^2)\exp -(y+y^{-1})dy,$$where we have substituted $y=-1/\ln x$ and then averaged with $y\mapsto 1/y$.



In view of the integral $K_\alpha(x)=\int_0^\infty\exp[-x\cosh t]\cosh(\alpha t)dt$ and the substitution $y=\exp -t$, $$2K_1(2)=\int_{-\infty}^\infty\exp[-2\cosh t]\cosh tdt=\frac12\int_0^\infty\exp [-y-1/y](1+1/y^2)dy.$$
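So the integral equals $2K_1(2)$. The agreement can be confirmed numerically; a sketch assuming mpmath:

from mpmath import mp, quad, exp, log, besselk

mp.dps = 25
I = quad(lambda x: exp(1/log(x)), [0, 1])
print(I)                  # 0.2797317...
print(2*besselk(1, 2))    # 0.2797317...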


algebra precalculus - Finite summation











What is the proof without induction for :



$(1)$ $\sum_{i=1}^n\ i^2= \frac{n(n+1)(2n+1)}{6}$




$(2)$ $\sum_{i=1}^n\ i^3=\frac{n^2(n+1)^2}{4}$


Answer



One can give a proof that has a combinatorial flavour. We want to choose $3$ numbers from the numbers $0, 1,2,3,\dots, n$. This can be done in $\binom{n+1}{3}$ ways. We do the counting in another way.



Perhaps the smallest chosen number is $0$. Then the other two can be chosen in $\binom{n}{2}$ ways. Perhaps the smallest chosen number is $1$. Then the other two can be chosen in $\binom{n-1}{2}$ ways. Continue. Finally, the smallest of the chosen numbers could be $n-2$. Then the other two can be chosen in $\binom{2}{2}$ ways.



Reversing the order of summation we get
$$\binom{2}{2}+\binom{3}{2}+\cdots+\binom{n}{2}=\binom{n+1}{3}.$$
Now use the fact that $\binom{k}{2}=\frac{k(k-1)}{2}$. We find that
$$\frac{1}{2}\sum_{k=1}^n k^2-\frac{1}{2}\sum_{k=1}^n k =\frac{(n+1)(n)(n-1)}{3!}.$$

We know a closed form formula for $\sum_{k=1}^n k$, so from the above we can get a closed form for $\sum_{k=1}^n k^2$.



A similar idea works for $\sum k^3$. We use the combinatorial fact that
$$\sum_{k=3}^n \binom{k}{3}=\binom{n+1}{4},$$
which is proved by a counting argument similar to the one we used for the closed form of $\sum_{k=2}^n \binom{k}{2}$.



Remarks: $1$. If I put my logic hat on, the answer to your question becomes entirely different. The result cannot be proved without induction. Indeed almost nothing about the natural numbers can be proved without using induction. In the Peano axioms, omit the induction axiom scheme. We end up with a system that is very very weak. Almost any argument that has a $\dots$, or an "and so on" in it involves induction. Indeed the very definition of $\sum_{k=1}^n k^2$ requires induction.



$2$. The proof of $\sum_{k=2}^n \binom{k}{2}=\binom{n+1}{3}$, and its generalizations, is very natural, and the resulting formula has a nice structure. The closed-form expression for $\sum_{k=1}^n k^2$ is definitely less attractive. It turns out that one can give a slightly messy but purely combinatorial proof of the formula
$$\sum_{k=1}^n 4k^2=\binom{2n+2}{3},$$
so the somewhat "unnatural" $\frac{n(n+1)(2n+1)}{6}$ turns out to "come from" the more structured $\frac{1}{4}\cdot \frac{2n(2n+1)(2n+2)}{6}$.


Arithmetical progression and quadratic equation

Determine the real number $k$ such that the roots of the equation $x^ {4}-(3k+2) x^ {2} +k^ {2} =0$ form an arithmetic progression.



I don't know how to start.

calculus - I need compute a rational limit that involves roots



I need to compute this limit without l'Hôpital's rule. I have tried different techniques but could not obtain the limit, which is $1/32$. I would appreciate it if somebody could help me. Thanks.



$$\lim_{y\to32}\frac{\sqrt[5]{y^2} - 3\sqrt[5]{y} + 2}{y - 4\sqrt[5]{y^3}}$$


Answer



$$\lim_{y\to32}\frac{\sqrt[5]{y^2} - 3\sqrt[5]{y} + 2}{y - 4\sqrt[5]{y^3}}$$



We set $y^{\frac{1}{5}}=x$




When $y \to 32, x \to 2$



So, we have:



$$\lim_{x \to 2} \frac{x^2-3x+2}{x^5-4x^3}=\lim_{x \to 2} \frac{(x-1) (x-2)}{x^3(x^2-4)}=\lim_{x \to 2} \frac{(x-1)(x-2)}{x^3(x-2)(x+2)}=\lim_{x \to 2} \frac{x-1}{x^3(x+2)}=\frac{1}{8 \cdot 4}=\frac{1}{32}$$
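A quick numerical sanity check of this value ($1/32 = 0.03125$), as a sketch:

f = lambda y: (y**0.4 - 3*y**0.2 + 2) / (y - 4*y**0.6)   # fifth roots as powers
for h in (1e-2, 1e-4, 1e-6):
    print(f(32 + h))   # -> 0.03125...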


Wednesday, March 27, 2019

elementary number theory - inversing using Euclid's algorithm

The question is:



Find the inverse of 14 mod 37. I don't know how to do it, so could someone please explain? Thanks in advance.
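Since no answer is recorded here, a sketch of the standard method, the extended Euclidean algorithm, run on the numbers from the question:

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b)*y

g, x, _ = ext_gcd(14, 37)
assert g == 1                  # an inverse exists iff gcd(14, 37) = 1
print(x % 37)                  # 8, since 14*8 = 112 = 3*37 + 1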

Tuesday, March 26, 2019

The value of the following limit without using L'Hopital




Find the limit
$$
\lim_{x\to 3}\left(\frac{x}{x-3}\int_{3}^{x}\frac{\sin t}{t} dt\right)
$$
without using L'Hopital's rule.


Answer



By the Mean Value Theorem,
$$\int_3^x\frac{\sin t}{t}\,dt=(x-3) \frac{\sin c_x}{c_x}$$ for some $c_x$ between $3$ and $x$. So our product is equal to
$$x\cdot\frac{\sin c_x}{c_x}.$$
As $x\to 3$, $\sin{c_x}\to\sin 3$ and $c_x\to 3$, so our limit is $\sin 3$.



Monday, March 25, 2019

Limit of a sequence defined by recursive relation: $a_n = \sqrt{a_{n-1}a_{n-2}}$



We're given a sequence defined by the recursive relation: $$a_n = \sqrt{a_{n-1}a_{n-2}}$$




$a_1$ and $a_2$ are positive constants.
We have to show the following:




  1. The sequences $\{ b_n \} = \{ a_{2n-1} \}$ and $\{ c_n \} = \{ a_{2n} \} $ are monotonic, and if one is increasing, the other is decreasing.


  2. The limit of the sequence $\{ a_n \} $ is $ \left(a_1a_2^2\right)^{\frac13} $








Now, I have proved the first part. Besides that, I have also proved a few other things:



If $a_1 > a_2$, then:



a. $ \{ b_n \} $ decreases, and $ \{ c_n \} $ increases.



b. $ c_n < b_n $



If $a_1 < a_2$, we just flip $\{ b_n \}$ and $\{c_n\}$




Besides, I have also shown that both the sequences : $\{b_n\}$ and $\{ c_n\}$ have the same limit. What I don't know, is how to evaluate the limit.


Answer



Using the relation $a_{n} = \sqrt{a_{n-1}a_{n-2}}$ for $n \geqslant 2$, we find that



$$a_{n+1}^2a_n = (\sqrt{a_na_{n-1}})^2a_n = a_na_{n-1}a_n = a_n^2a_{n-1}$$



is independent of $n$, so $a_{n+1}^2a_n = a_2^2a_1$ for all $n$, and hence $\lambda^3 = a_2^2a_1$ for $\lambda = \lim\limits_{n\to\infty} a_n$.
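Both the invariant and the limit can be watched numerically; a small sketch with arbitrary starting values $a_1=5$, $a_2=2$:

a1, a2 = 5.0, 2.0
a, b = a1, a2                             # (a_{n-1}, a_n)
for _ in range(60):
    a, b = b, (a*b)**0.5                  # a_{n+1} = sqrt(a_n a_{n-1})
    assert abs(b*b*a - a2*a2*a1) < 1e-9   # the invariant a_{n+1}^2 a_n
print(b, (a1*a2**2)**(1/3))               # both ~ 2.71441...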


Sunday, March 24, 2019

calculus - Are there other cases similar to Herglotz's integral $\int_0^1\frac{\ln\left(1+t^{4+\sqrt{15}}\right)}{1+t}\,\mathrm dt$?



This post of Boris Bukh mentions amazing Gustav Herglotz's integral
$$\int_0^1\frac{\ln\left(1+t^{\,4\,+\,\sqrt{\vphantom{\large A}\,15\,}\,}\right)}{1+t}\ \mathrm dt=-\frac{\pi^2}{12}\left(\sqrt{15}-2\right)+\ln2\cdot\ln\left(\sqrt3+\sqrt5\right)+\ln\frac{1+\sqrt5}{2}\cdot\ln\left(2+\sqrt3\right).
$$

I wonder if there are other irrational real algebraic exponents $\alpha$ such that the integral
$$
\int_{0}^{1}
\frac{\ln\left(1 + t^{\,{\large\alpha}}\right)}{1 + t}\,{\rm d}t
$$
has a closed-form representation? Is there a general formula giving results for such cases?



Are there such algebraic $\alpha$ of degree $> 2$ ?


Answer



Here is a list of some of these integrals:

$$
\begin{align}
\int_0^1\frac{\log\left(1+t^{2+\sqrt{3}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{12}\left(1-\sqrt{3}\right)+ \log 2 \log\left(1+\sqrt{3}\right)\\
\\
\int_0^1\frac{\log\left(1+t^{3+\sqrt{8}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{24}\left(3-\sqrt{32}\right)+ \frac{1}{2}\log 2 \log\left(2\left(3+\sqrt{8}\right)^{3/2}\right)\\
\\
\int_0^1\frac{\log\left(1+t^{4+\sqrt{15}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{12}\left(2-\sqrt{15}\right)+ \log\left(\frac{1+\sqrt{5}}{2}\right)\log\left(2+\sqrt{3}\right)+\\

&+\log 2 \log\left(\sqrt{3}+\sqrt{5}\right)\\
\\
\int_0^1\frac{\log\left(1+t^{5+\sqrt{24}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{24}\left(5-\sqrt{96}\right)+ \frac{1}{2} \log\left(1+\sqrt{2}\right)\log\left(2+\sqrt{3}\right)+\\
&+\frac{1}{2}\log 2 \log\left(2\left(5+\sqrt{24}\right)^{3/2}\right)\\
\\
\int_0^1\frac{\log\left(1+t^{6+\sqrt{35}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{12}\left(3-\sqrt{35}\right)+ \log\left(\frac{1+\sqrt{5}}{2}\right)\log\left(8+3\sqrt{7}\right)+\\
&+\log 2 \log\left(\sqrt{5}+\sqrt{7}\right)\\
\\

\int_0^1\frac{\log\left(1+t^{8+\sqrt{63}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{12}\left(4-\sqrt{63}\right)+ \log\left(\frac{5+\sqrt{21}}{2}\right)\log\left(2+\sqrt{3}\right)+\\
&+\log 2 \log\left(3+\sqrt{7}\right)\\
\\
\int_0^1\frac{\log\left(1+t^{11+\sqrt{120}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{24}\left(11-\sqrt{480}\right)+ \frac{1}{2} \log\left(1+\sqrt{2}\right)\log\left(4+\sqrt{15}\right)+\\
&+\frac{1}{2}\log\left(2+\sqrt{3}\right)\log\left(3+\sqrt{10}\right)+\\
&+ \frac{1}{2}\log\left(\frac{1+\sqrt{5}}{2}\right)\log\left(5+\sqrt{24}\right)+\\
&+ \frac{1}{2}\log 2 \log\left(2\left(11+\sqrt{120}\right)^{3/2}\right)\\
\\

\int_0^1\frac{\log\left(1+t^{12+\sqrt{143}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{12}\left(6-\sqrt{143}\right)+ \log\left(\frac{3+\sqrt{13}}{2}\right)\log\left(10+3\sqrt{11}\right)+\\
&+\log 2 \log\left(\sqrt{11}+\sqrt{13}\right)\\
\\
\int_0^1\frac{\log\left(1+t^{13+\sqrt{168}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{24}\left(13-\sqrt{672}\right)+ \frac{1}{2} \log\left(1+\sqrt{2}\right)\log\left(\frac{5+\sqrt{21}}{2}\right)+\\
&+\frac{1}{4}\log\left(2+\sqrt{3}\right)\log\left(15+\sqrt{224}\right)+\\
&+\frac{1}{4}\log\left(5+\sqrt{24}\right)\log\left(8+\sqrt{63}\right)+\\
&+ \frac{1}{2}\log 2 \log\left(2\left(13+\sqrt{168}\right)^{3/2}\right)\\
\\

\int_0^1\frac{\log\left(1+t^{14+\sqrt{195}}\right)}{1+t}\operatorname{d}t
=& \frac{\pi^2}{12}\left(7-\sqrt{195}\right)+ \log\left(\frac{1+\sqrt{5}}{2}\right)\log\left(25+4\sqrt{39}\right)+\\
&+\log \left(\frac{3+\sqrt{13}}{2}\right) \log\left(4+\sqrt{15}\right)+\\
&+\log 2 \log\left(\sqrt{15}+\sqrt{13}\right)\\
\end{align}
$$


real analysis - Is there a monotonic function discontinuous over some dense set?




Can we construct a monotonic function $f : \mathbb{R} \to \mathbb{R}$ such that there is a dense set in some interval $(a,b)$ for which $f$ is discontinuous at all points in the dense set? What about a strictly monotonic function?




My intuition tells me that such a function is impossible.



Here is a rough sketch of an attempt at proving that such a function does not exist: we could suppose a function satisfies these conditions. Take an $\epsilon > 0$ and two points $x,y$ in this dense set such that $x < y$.


However, after this point, I am stuck. Could we somehow partition $(x,y)$ into $n$ subintervals and conclude that there must be some point of the dense set at which the function is continuous?


Answer



Such a function is possible.



Let $\Bbb Q=\{q_n:n\in\Bbb N\}$ be an enumeration of the rational numbers, and define



$$f:\Bbb R\to\Bbb R:x\mapsto\sum_{q_n\le x}\frac1{2^n}\;.\tag{1}$$



The series $\sum_{n\ge 0}\frac1{2^n}$ is absolutely convergent, so $(1)$ makes sense. If $x<y$, then $f(y)-f(x)=\sum_{x<q_n\le y}\frac1{2^n}>0$, because some rational lies in $(x,y]$; hence $f$ is strictly increasing. Moreover, for each $n$,


$$\lim_{x\to {q_n}^-}f(x)=\sum_{q_k<q_n}\frac1{2^k}=f(q_n)-\frac1{2^n}<f(q_n),$$ so $f$ has a jump of size $\frac1{2^n}$ at each rational $q_n$.

Thus, $f$ is discontinuous on a set that is dense in $\Bbb R$ (and in every open interval of $\Bbb R$).
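A finite truncation of the construction can be inspected numerically; a toy sketch (the enumeration below, the Farey fractions of order $7$, is ad hoc and chosen only to make a jump visible):

from fractions import Fraction

# enumerate finitely many rationals as q_0, q_1, ... (a stand-in for
# an enumeration of all of Q)
qs = sorted(set(Fraction(p, q) for q in range(1, 8) for p in range(0, q + 1)))

def f(x):
    return sum(2.0**-n for n, r in enumerate(qs) if r <= x)

# the jump at 1/2 equals the weight of 1/2 in the enumeration
print(f(0.5) - f(0.5 - 1e-9), 2.0**-qs.index(Fraction(1, 2)))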


trigonometry - Complex number + trigo: $-1 + \tan(3)i$, find modulus and argument




I have $-1 + \tan(3)i$ and must find its modulus and its argument. I tried to solve it by myself for hours, and then I looked at the answer, but I am still confused with a part of the solution.



Here is the provided solution:
$$\begin{align}
z &= -1 + \tan(3)i \\
&= -1 + \frac{\sin(3)}{\cos(3)}i \\
&= \frac1{\left|\cos(3)\right|} ( \cos(3) + i(-1)\sin(3)) \\
&= \frac1{\left|\cos(3)\right|} e^{-3i} \\
&= \frac1{\left|\cos(3)\right|} e^{(2\pi-3)i}

\end{align}$$



I don't understand how we get to $$ \frac1{\left|\cos(3)\right|}(\cos(3) + i(-1)\sin(3)) $$
How did they get this modulus $1/|\cos(3)|$, and the $-1$ in the imaginary part? How did they reorder the previous expression to obtain this?



I also don't see why they developed the last equality. They put $2\pi-3$ instead of $-3$; OK, it is the same, but what was the aim of such a development?



Thanks!


Answer



Let $z = -1 + \tan(3) \ i$. In the complex plane, this would be the point $(-1, \tan(3))$, which has length




$$|z| = \sqrt{(-1)^2 + \tan^2(3)} = \sqrt{1 + \frac{\sin^2(3)}{\cos^2(3)}} = \sqrt{\frac{1}{\cos^2(3)}} \sqrt{\cos^2(3) + \sin^2(3)} = \frac{1}{|\cos(3)|}$$



For the last equality, we used $\sin^2(x) + \cos^2(x) = 1$. Now we want to write



$$z = |z| \ e^{\omega i} = |z| \ (\cos(\omega) + i \sin(\omega))$$



for some $\omega$. It turns out this can be done easily by writing



$$z = \frac{1}{\cos(3)}(-\cos(3) + i \sin(3)) = \frac{1}{|\cos(3)|}(\cos(-3) + i \sin(-3)) = \frac{1}{|\cos(3)|} e^{-3i}$$




Since $-3 \notin [0, 2\pi)$ they decided to add $2 \pi$ to the angle, so that it is inside this interval.


Saturday, March 23, 2019

abstract algebra - How to find the roots of $x^3-2$?



I'm trying to find the roots of $x^3 -2$. I know that two of the roots are $\sqrt[3]{2}$ and $\sqrt[3]{2}e^{\frac{2\pi}{3}i}$, but I don't know why. The first one is easy to find, but what about the other two roots?




I need help



Thank you


Answer



If $\omega^3 = 1$ and $x^3 = 2$ then $(\omega x)^3 = \omega^3 x^3 = 2$.



Possible values of $\omega$ are $e^{\frac{1}{3}2 i \pi}$, $e^{\frac{2}{3}2 i \pi}$ and $e^{\frac{3}{3}2 i \pi}$. This is because $1 = e^{2 i \pi} = (e^{\frac{1}{k} 2 i \pi})^k$.



So the solutions of $x^3 - 2 = 0$ are $e^{\frac{1}{3}2 i \pi} \sqrt[3]{2}$, $e^{\frac{2}{3}2 i \pi} \sqrt[3]{2}$ and $\sqrt[3]{2}$.


summation - Prove by induction: $\sum\limits_{i=1}^{n}(4i+1) = 2n^2 + 3n$


Prove by induction:
$$\sum\limits_{i=1}^{n}(4i+1) = 2n^2 + 3n$$





It's just the numbers that confuse me; I know how to do a simple induction proof (first for $p(k)$, then for $k+1$, etc.) but I don't know how to do it with this form.

Friday, March 22, 2019

elementary number theory - Name of 'mod' or 'remainder' operation in mathematics




I am trying to write the Euclidean algorithm in the following way:



$A = \lfloor A \div B \rfloor \times B + (\text{remainder of}) \: A \div B $



Now is there any symbol I can use to say "remainder of A $\div$ B"? I know that in the C programming language there is the operator % for modulus; is that a valid symbol in maths? Can I write A % B? Or is there some other way?


Answer



Per request, I post my comment(s) as an answer, and add some further remarks.



The notation $\, a\bmod b\, $ denotes the remainder when dividing $\,a\,$ by $\,b\,$ using the division algorithm. The same notation is used in other rings that have an analogous (Euclidean) Division Algorithm, e.g. polynomials with coefficients over a field, e.g. we can write the Polynomial Remainder Theorem as $\,f(a) = f(x)\bmod x\!-\!a,\,$ or higher-degree forms like $\,f(i) = (f(x)\bmod x^2\!+\!1)\bmod x\!-\!i$.




Also $\!\bmod\!$ is used as a ternary relation (vs. above binary operation) when dealing with congruences, i.e. $\ a\equiv b\pmod{\! n}\iff n\mid a-b\,$ (an equivalence relation for a fixed modulus $\,n).$



These two denotations of $\!\bmod\!$ are related as follows



$$ a\equiv b\!\!\!\pmod{\!n}\iff a\bmod n\, =\, b\bmod n$$



so $\,a\bmod n\,$ serves as a normal form or canonical representative for the entire equivalence class $\,[a]_n = a + n\Bbb Z\,$ of all integers $\,\equiv a\pmod{\!n}$. So the above displayed equivalence means that testing congruence of integers reduces to testing equality of their normal forms (= remainders $\!\bmod n),\,$ just as we can test equivalence of fractions by testing equality of their least-terms normal forms. Similarly we can view the remainder as a "least terms" rep: it is the least nonnegative integer in the class $[a]_n.\,$



The operational use of mod is often more convenient in computational contexts, whereas the relational use often yields more flexibility in theoretical contexts. The difference amounts to whether it is more convenient to work with general equivalence classes vs. canonical / normal representatives ("reps") thereof. For example, it would be quite cumbersome to state the laws of fraction arithmetic if we required that all fractions to be in normal (reduced) form, i.e. in lowest terms. Instead, it proves more convenient to have the flexibility to work with arbitrary equivalent fractions. For example, this allows us to state the fraction addition rule in very simple form by first choosing convenient reps having a common denominator.




Analogously, in modular arithmetic the canoniocal remainder $\,a\ {\rm mod}\ m\,$ may not be the most convenient choice of representative of the equivalence class $\,[a]_n =\, a + n\,\Bbb Z.\,$ For example, casting out elevens exploits that $\ {\rm mod}\ 11\!:\ 10\equiv -1\,\Rightarrow\,10^{\large k}\equiv (-1)^{\large k}\equiv \pm1,\,$ which involves choosing a rep of least magnitude $\,\color{#c00}{\bf -1}\,$ vs. $\,\color{#0a0}{10}\in [10]_{11}\! = \{\ldots,\, -23,-12,\color{#c00}{\bf -1},\color{#0a0}{10},21,\,\ldots\}.\,$ Or, as here we can choose reps that conveniently make a quotient be exact when computing modular fractions, e.g. $\!\bmod 11\!:\,\ 9/13\equiv -2/2\equiv -1.\,$
Hence, analogous to fraction addition, we chose reps which simplified arithmetic. Using least magnitude reps often simplifies other computations too, e.g. it can halve the number of steps in the Euclidean algorithm. Thus the use of congruence classes (vs. canonical reps) provides much greater flexibility, which may yield great simplifications - not only computationally, but also theoretically, which becomes clearer when one studies quotient rings, which yield (algebraic) structure reifications of the the congruence rules = compatibility of congruences with addition and multiplication).



Beware that some authors omit the parentheses in $\, a\equiv b\pmod{\!n}$ instead writing them as follows $\,a\equiv b\mod n\ $ or $\ a = b\mod n,\ $ using \mod vs. \pmod in $\TeX$. These might easily be confused with $\,a = b\bmod n\,$ i.e. $\,a = (b\bmod n),\,$ so one should keep in mind such possible ambiguities in contexts where both forms of $\!\bmod\!$ are in use.



The name '%' for the normal form $\!\bmod\!$ operation (as in the C programming language) has not percolated to the mathematical community as far as I can tell. I recall many questions on sci.math regarding the meaning of $\rm\, a\,\%\, b.\, $ As such, if you use this notation in a mathematical forum then I recommend that you specify its meaning. This would not be necessary for $\!\bmod\!$ since that notation is ubiquitous in mathematics (currently more so for congruence than operator form). Be aware, however, that some mathematicians look down on the operational use of mod in the case when it would be more natural to use the congruence form. Apparently the mathematical Gods do too, since doing so can make some proofs quite more difficult (much more so than the above simple case of fraction addition).
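As a footnote to that remark: the '%' operator's treatment of negative operands differs between languages, which is worth checking before relying on it. A small Python illustration using the class $[10]_{11}$ from above:

# every member of [10]_11 = {..., -23, -12, -1, 10, 21, ...} reduces
# to the same canonical rep under Python's %
print([a % 11 for a in (-23, -12, -1, 10, 21)])   # [10, 10, 10, 10, 10]
# caution: C, C++ and Java truncate toward zero, so -1 % 11 == -1 there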


complex analysis - Various evaluations of $\int_0^\infty \sin x \sin \sqrt{x}\,dx$

I'm looking for various ways to evaluate the integral:
$$\int_0^\infty \sin x\sin \sqrt{x}\,dx$$



I'm mainly interested in complex analysis. I can think of a wedge-shaped contour of angle $\pi/4$, but I'm having trouble constructing the integrand properly. Perhaps we have to take into account that the root may cause some trouble, and define a function with a branch cut, which complicates things.



I know two solutions using real analysis. One uses Laplace transformations and the other uses only elementary tools plus the known results for the Fresnel integrals.



Can someone help me with the contour integration?
Thank you!

calculus - Prove inequality using the Mean Value Theorem



I'm trying to hone my problem-solving skills using the Mean Value Theorem and in one exercise, where $x \in (0, +\infty)$, I have to prove that:





  • $(1+x)^a>ax+1$,   if $a > 1$.

  • $(1+x)^a<ax+1$,   if $0 < a < 1$.



What I've tried:



I've tried to solve this problem using the function $f(t)=(1+t)^a$ on the closed interval $[0,x]$ as follows:





  • First, I calculated the derivative of $f$, which is $f'(t)=a(1+t)^{a-1}$.

  • Then, I used the Mean Value Theorem:
    $$
    f'(k)={{f(x)-f(0)}\over{x-0}}={{(1+x)^a-1}\over{x}}
    \Leftrightarrow
    a(1+k)^{a-1}={{(1+x)^a-1}\over{x}}\\\Leftrightarrow
    (1+x)^a=ax(1+k)^{a-1}+1
    $$




The equation I found seems to be on the right track, so I decided based on instinct to examine the following cases:




  • $a=1 \Rightarrow (1+x)^a=ax+1$

  • $a>1 \Rightarrow (1+x)^a>ax+1$

  • $a \in (0, 1) \Rightarrow (1+x)^a<ax+1$



Question:



My solution, and more specifically the part where my instinct kicks in, feels rather incomplete and rushed. Is there a better way to solve this problem using the Mean Value Theorem?


Answer



Your “instinct” is correct, and it requires only small additions to
make it a full proof.



The mean value theorem implies that for $x > 0$
$$
(1+x)^\alpha = 1 + \alpha x (1+k)^{\alpha-1}

$$
for some $k \in (0, x)$. It is relevant that $k$ is strictly positive,
so that one can continue to argue
$$
\alpha > 1 \Longrightarrow (1+k)^{\alpha-1} > 1
\Longrightarrow (1+x)^\alpha > 1 + \alpha x \, , \\
0 < \alpha < 1 \Longrightarrow (1+k)^{\alpha-1} < 1
\Longrightarrow (1+x)^\alpha < 1 + \alpha x \, .
$$


Thursday, March 21, 2019

calculus - How to show that $\sqrt{x}$ grows faster than $\ln{x}$.



So I have the limit $$\lim_{x\rightarrow \infty}\left(\frac{1}{2-\frac{3\ln{x}}{\sqrt{x}}}\right)=\frac{1}2,$$ and I now want to motivate why $\frac{3\ln{x}}{\sqrt{x}}\rightarrow0$ as $x\rightarrow\infty.$ I came up with two possibilities:




  1. Algebraically it follows that $$\frac{3\ln{x}}{\sqrt{x}}=\frac{3\ln{x}}{\frac{x}{\sqrt{x}}}=\frac{3\sqrt{x}\ln{x}}{x}=3\sqrt{x}\cdot\frac{\ln{x}}{x},$$
    Now since the last factor is a standard limit equal to zero as $x$ approaches infinity, the limit of the entire thing should be $0$. However, isn't it a problem because $\sqrt{x}\rightarrow\infty$ as $x\rightarrow \infty$ gives us the indeterminate value $\infty\cdot 0$?


  2. One can, without the arithmetic above, directly argue that the function $f_1:x\rightarrow \sqrt{x}$ increases faster than the function $f_2:x\rightarrow\ln{x}.$ Is this motivation sufficient? And is the proof below correct?





We have that $D(f_1)=\frac{1}{2\sqrt{x}}$ and $D(f_2)=\frac{1}{x}$. To compare these two derivatives, we look at the interval $(0,\infty).$ Since $D(f_1)\geq D(f_2)$ for $x\geq4$, and $f_1(4)=2>\ln 4=f_2(4)$, it follows that $f_1>f_2$ for $x>4.$


Answer




  1. This is a standard result from high school

  2. If you nevertheless want to deduce it from the limit of $\dfrac{\ln x}x$, use the properties of logarithm:
    $$\frac{\ln x}{\sqrt x}=\frac{2\ln(\sqrt x)}{\sqrt x}\xrightarrow[\sqrt x\to\infty]{}2\cdot 0=0$$


elementary number theory - Prove without induction that $3^{4n}-2^{4n}$ is divisible by $65$





My brother asked me this (for some reason).



My solution is:



$(3^{4n}-2^{4n})\bmod{65}=$



$(81^{n}-16^{n})\bmod{65}=$



$((81\bmod{65})^{n}-16^{n})\bmod{65}=$




$(16^{n}-16^{n})\bmod{65}=$



$0\bmod{65}$






I think that this solution is mathematically flawless (please let me know if you think otherwise).



But I'm wondering if there's another way, perhaps with the binomial expansion of $(81-16)^{n}$.




In other words, something like:



$3^{4n}-2^{4n}=$



$81^{n}-16^{n}=$



$(81-16)^{n}+65k=$



$65^{n}+65k=$




$65(65^{n-1}+k)$



How would I go from "$81^{n}-16^{n}$" to "$(81-16)^{n}+65k$"?


Answer



You can use the formula $$a^n-b^n = (a-b)\left(a^{n-1}+a^{n-2}b+a^{n-3}b^2+\ldots+ab^{n-2}+b^{n-1}\right)$$


real analysis - Limit of the sequence given by $x_{n+1}=x_n-x_n^{n+1}$



Let $x_1 \in (0,1)$ be a real number. For $n\geq1$ define $x_{n+1}=x_n-x_n^{n+1}$. Then prove that $\displaystyle \lim_{n \to \infty} x_n$ exists.



We have to prove that the given sequence $\{x_n\}$ is convergent. So we have to show that $\{x_n\}$ is monotone and bounded.


I proved that the sequence is monotone decreasing. But I'm unable to show that it is bounded below. How can I show it ?



Any other way to prove that the limit exists ?


Answer



We show by induction that $x_n \in (0,1)$ for all $n$:


The case $n=1$ is clear.


Now let $n \in \mathbb N$ and $x_n \in (0,1)$


Then: $x_{n+1}=x_n(1-x_n^n)$. From $x_n \in (0,1)$ we get $x_n^n \in (0,1)$ and therefore $1-x_n^n \in (0,1)$.


Consequence: $x_{n+1} \in (0,1)$.


real analysis - Show that the sequence $\left(\frac{2^n}{n!}\right)$ has a limit.




Show that the sequence $\left(\frac{2^n}{n!}\right)$ has a limit.



I initially inferred that the question required me to use the definition of the limit of a sequence because a sequence is convergent if it has a limit $$\left|\frac{2^n}{n!} - L \right|{}{}<{}{}\epsilon$$




I've come across approaches that use the squeeze theorem but I'm not sure whether its applicable to my question. While I have found answers on this site to similar questions containing the sequence, they all assume the limit is $0$.



I think I need to show $a_n \geq a_{n+1},\forall n \geq 1$, so



$$a_{n+1} = \frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\cdot\frac{2^{n}}{n!}=\frac{2}{n+1}\,a_n \le a_n \quad (n \ge 1).$$

A monotonic decreasing sequence that is bounded below is convergent, and this particular sequence is bounded below by zero since its terms are positive. I'm not sure whether I need to do more to answer the question.


Answer



It is easy to prove that

$$\sum_{k=1}^{\infty}\frac{2^n}{n!} \lt \infty$$
e.g. with the ratio test you have
$$\frac{a_{n+1}}{a_n}=\frac{n!}{2^n}\cdot\frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\longrightarrow 0$$
Then $\lim\limits_{n\rightarrow\infty}\frac{2^n}{n!}$ has to be $0$






If you do not know anything about series, you might instead assert that $n!>3^n$ if $n\gt N_0$ for some fixed $N_0\in\mathbb{N}$. Therefore you have, for those $n$,
$$0<\frac{2^n}{n!} \lt\frac{2^n}{3^n}=\left(\frac{2}{3}\right)^n\longrightarrow 0$$
Thus $$\lim\limits_{n\longrightarrow\infty} \frac{2^n}{n!} =0 $$



Tuesday, March 19, 2019

Find sum of the infinite series

$$\frac{4}{20}+\frac{4 \cdot 7}{20 \cdot 30}+\frac{4 \cdot 7 \cdot 10}{20 \cdot 30 \cdot 40}+\dots$$



How to find the $n$th term of this series? In particular, how can the series be written as a summation in terms of some variable (say $n$) where $n$ ranges from $0$ to infinity? I think finding the sum of the series would then be easier (or probably that is the task to be done). Please elaborate how to proceed.

discrete mathematics - Using Direct Proof. $1+2+3+\ldots+n = \frac{n(n + 1)}{2}$


I need help proving this statement. Any help would be great!


Answer




Here is an approach.


$$ s_n =1+2+3+\dots+(n-1)+n \\ s_n =n+(n-1)+(n-2)+\dots+1 . $$


Adding the above gives


$$2s_n = (1+n)+(2+(n-1))+(3+(n-2))+\dots+(1+n) $$


$$ =(1+n)+(1+n)+\dots+(1+n) $$


The above is nothing but adding $(1+n)$ n times and the result follows


$$ \implies s_n = \frac{n(n+1)}{2}. $$


sequences and series - How to calculate $S_{n}=\sum_{i=1}^{n}\frac{1}{i^{2}}$

In math lesson, my teacher told me that Euler once used a very delicate method to calculate $\displaystyle S_{n}=\sum_{i=1}^{n}\frac{1}{i^{2}}$ and wrote a paper about it.




I wonder how to calculate this. I need a precise answer, not merely the bound that it is less than some number such as 2.

Monday, March 18, 2019

Right inverse is also left inverse for nonsquare matrices?

If $m≠n$ and we have the matrices $A$ ($m\times n$), $B$ ($n\times m$) and $C$ ($n\times m$) such that $AB=I_{m\times m}$ and $CA=I_{n\times n}$, does $B=C$?



I know the proof that it is true if we are talking about square matrices, but it doesn't help in this case.

linear algebra - Proof clarification for inverse of matrix



This is from Artin Algebra, proposition 1.2.20:





Proposition: Let $A$ be a square matrix that has either a left inverse or a right inverse, a matrix $B$ such that either $BA=I$ or $AB=I$. Then $A$ is invertible and $B$ is its inverse.



Proof: Suppose $AB=I$. We perform row reduction on $A$. Say $A'=PA$, where $P=E_k\cdots E_1$ is the product of corresponding elementary matrices, and $A'$ is a row echelon matrix. Then $A'B=PAB=P$. Because $P$ is invertible, its bottom row is not zero. Then bottom row of $A'$ can't be zero either. $\dots$




I can't understand how bottom row of $A'$ can't be zero.


Answer



Suppose the bottom row of $A'$ is zero. Then, carrying out the matrix multiplication, you will see that the bottom row of $A'B$ is zero too.




If you're comfortable with block matrix partitioning, suppose



$$
A' = \begin{bmatrix} A'_1 \\ 0\end{bmatrix}, B = \begin{bmatrix} B_1 & B_2 \end{bmatrix}
$$



Then



$$

A'B = \begin{bmatrix} A'_1 \\ 0\end{bmatrix}\begin{bmatrix} B_1 & B_2 \end{bmatrix} = \begin{bmatrix} A_1' B_1 & A_1' B_2 \\ 0 & 0 \end{bmatrix}
$$



Which has a row of zeros.


soft question - Examples of mathematical results discovered "late"

What are examples of mathematical results that were discovered surprisingly late in history? Maybe the result is a straightforward corollary of an established theorem, or maybe it's just so simple that it's surprising no one thought of it sooner.



The example that makes me ask is the 2011 paper John Baez mentioned called "Two semicircles fill half a circle", which proves a fairly simple geometrical fact similar to those that have been pondered for thousands of years.

Sunday, March 17, 2019

limits - $\lim\limits_{n\to \infty}\frac{n^k}{2^n}$ without l'Hopital





Let $\varepsilon >0$. Let $N>?$. For any integer $n>N$ we have
$$\frac{n^k}{2^n}<\varepsilon.$$
I don't know how to proceed sensibly from here.



I'd say we have to start with




$$\frac{n^k}{2^n}<\frac{n^k}{2^N}$$



But what do we do here about the $n^k$?



Remember, I don't want to use l'Hopital or limit laws that are (for me) unproven, like "exponentials grow faster than powers".



Also, I don't want to use hidden l'Hopital, i.e. arguing with derivatives, since we don't even have the derivative laws proven.


Answer



Let's take the logarithm. One gets




$$\log{n^k\over 2^n}=k\log{n}-n\log{2}=-n\left(\log{2}-k{\log{n}\over n}\right)$$



Now when $n\to\infty$ one has $\log{n}/n\to 0$ and so



$$\lim_{n\to\infty}\log{n^k\over 2^n}=-\infty$$



And so



$$\lim_{n\to\infty}{n^k\over 2^n}=0$$


convergence divergence - Closed form of $S_{n,m}= \sum_{k=1}^n (-1)^k\binom{n}{k}k^{-m!}$



I have not succeeded in getting a closed form of the sum below using the standard binomial law, in order to know whether this sum converges or not as $n\to +\infty$. Is there any simple way, or any algorithm, to evaluate the sum

$$S_{n,m}=\sum_{k=1}^n (-1)^k\binom{n}{k}k^{-m!}\ ?$$


Answer



The convergence of $S_{n,m}$ can easily be determined by applying the so-called Dilcher's formula



$$
\sum_{1\le n_1\le\cdots\le n_M\le n}\;\prod_{j=1}^{M}\frac{1}{n_j}
=\sum_{k=1}^{n}\binom{n}{k}\cdot\frac{(-1)^{k-1}}{k^M},
$$



where $M,n\in\mathbb N$ (for more detail, see http://mathworld.wolfram.com/DilchersFormula.html).




Finally, set $M=m!$, $m\in\mathbb N$, to obtain



$$
-S_{n,m}
=\sum_{1\le n_1\le\cdots\le n_{m!}\le n}\;\prod_{j=1}^{m!}\frac{1}{n_j}
\ge\sum_{k=1}^{n}\frac{1}{k}.
$$



Consequently, one deduces that $S_{n,m}\to -\infty$ as $n\to\infty$.



discrete mathematics - Factorising polynomials in to their irreducible factors over GF(2)



A skill I'm trying to learn/understand is how to do this manually. I've noticed as a predecessor to a lot of my discrete maths example questions we are asked to obtain the irreducible factorisation of things like $$x^{15}-1, x^{18}-1, x^{20}-1 \in\mathbb{F_2[x]}.$$




What approach should one take generally when trying to do something like this? I've tried practising with smaller ones which are easier to do but I can't really see a general technique.



For $x^{18}-1=(x^9-1)^2=(x^3-1)^2(x^6+x^3+1)^2=(x-1)^2(x^2+x+1)^2(x^6+x^3+1)^2$. This one I mostly understand, aside from the $x^6+x^3+1$ factor; how could you know this is an irreducible polynomial over this field without memorising them all? Or is that the technique?



Then, for $x^{15}-1$ I am told to assume that the primitive quartic polynomials in $\mathbb{F}_2[x]$ are $x^4+x+1$ and $x^4+x^3+1$.



I can check on wolfram alpha that the factorisation looks like:
$x^{15}-1=(x+1)(x^2+x+1)(x^4+x+1)(x^4+x^3+1)(x^4+x^3+x^2+x+1)$




Now my hint suggested there are only $2$ primitive quartic polynomials, but this makes use of $3$... unless the $3$rd one is not primitive?



And also, how could one find this factorisation easily by hand? The one above I could attempt trial and error to find the irreducible polynomial of degree $6$ perhaps but for this one that seems a bit strange.



Any advice is greatly appreciated.


Answer



Note that from the theory of finite fields,



Theorem





The finite field $\mathbb{F}_{p^n}$ is a splitting field of the polynomial $x^{p^n}-x$.




From this theorem, every element of $\mathbb{F}_{p^n}$ is a root of the polynomial $x^{p^n}-x$. We can apply this to $\mathbb{F}_{2^4}$ and $x^{2^4}-x$.



$$
\begin{align}
x^{2^4}-x&=x(x^{15}-1)\\
&=x(x-1)g(x)

\end{align}
$$
where $g(x)$ contains factors $(x-\alpha)$ for each $\alpha\in\mathbb{F}_{2^4} \backslash \mathbb{F}_2$.



Since $\mathbb{F}_{2^4}$ contains $\mathbb{F}_{2^2}$, we consider an irreducible polynomial of degree $2$ over $\mathbb{F}_2$. Then
$$
g(x)=(x^2+x+1)h(x)$$
where $h(x)$ contains factors $(x-\alpha)$ for each $\alpha\in \mathbb{F}_{2^4}\backslash \mathbb{F}_{2^2}$. Each such $\alpha$ must be of degree $4$ over $\mathbb{F}_2$. Thus, $h(x)$ must contain an irreducible polynomial of degree $4$ over $\mathbb{F}_2$. Also, there are exactly three of them, $x^4+x^3+1$, $x^4+x+1$, and $x^4+x^3+x^2+x+1$. This gives the factorization
$$
x^{15}-1=(x+1)(x^2+x+1)(x^4+x^3+1)(x^4+x+1)(x^4+x^3+x^2+x+1).

$$
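
If you want to check such factorisations by machine, here is a small sketch using SymPy (assuming SymPy is available; `factor` with the `modulus` keyword factors over $\mathbb{F}_p$):

```python
from sympy import symbols, factor

x = symbols('x')

# Factor x^n - 1 over GF(2); note that -1 = 1 (mod 2), so this is x^n + 1.
for n in (15, 18, 20):
    print(n, factor(x**n - 1, modulus=2))
```

This reproduces the factorisation above for $n=15$ and gives the $x^{18}-1$ and $x^{20}-1$ cases from the question for comparison.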


elementary number theory - Proving that $\gcd(2^m - 1, 2^n - 1) = 2^{\gcd(m,n)} - 1$







$$\gcd(2^m-1,2^n-1)=2^{\gcd(m,n)}-1.$$




I had never seen this before, so I started trying to prove it. Without success...



Can anyone explain me (so actually prove) why this equation is true?



And can we say the same when replacing the '$2$' by any integer number '$a$'?


Answer




In general, if $p=\gcd(m,n)$ then $p=mx+ny$ for some integers $x,y$ (Bézout's identity; note that one of $x,y$ may be negative).



Now, if $d = \gcd(2^m-1,2^n-1)$ then $2^m \equiv 1 \pmod d$ and $2^n \equiv 1\pmod d$, so $$2^p = 2^{mx+ny} = (2^m)^x(2^n)^y \equiv 1 \pmod d.$$ (Since $d$ is odd, $2$ is invertible modulo $d$, so a negative exponent causes no trouble here.)



So $d\mid 2^p-1$.



On the other hand, since $p\mid m$ we have $2^p-1\mid 2^m-1$, and likewise $2^p-1\mid 2^n-1$, so $2^p-1$ is a common factor; hence $2^p-1\mid d$, and together with $d\mid 2^p-1$ this gives $d=2^p-1$.



And yes, you can replace $2$ with any $a$.
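
For the skeptical reader, here is a brute-force spot check of both the statement and the generalisation to an arbitrary base $a$, sketched in Python:

```python
from math import gcd

# Check gcd(a^m - 1, a^n - 1) == a^gcd(m, n) - 1 over a small range.
for a in (2, 3, 10):
    for m in range(1, 25):
        for n in range(1, 25):
            assert gcd(a**m - 1, a**n - 1) == a**gcd(m, n) - 1
print("all checks passed")
```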


Saturday, March 16, 2019

geometry - Quadrilateral in a Parallelogram - Interesting Proofs!




Here's an interesting problem, and result, that I wish to share with the math community here at Math SE. I think I've found a proof without words...



enter image description here



I came across this problem sometime back, and here's a possible proof without words that I'll post as a picture -



enter image description here



I claim that one can, for any inscribed quadrilateral EFGH, make the necessary constructions (several parallelograms) as in the figure, and hence show that at least one of the diagonals of EFGH is parallel to a side of the parallelogram ABCD.




Is there anything I'm missing, or is the proof complete? Let me know!



Also, please post other proofs in the answers section! (So we can all solve the problem together and discuss several methods for the same - it'll be of use to everyone to know all possible methods of approaching this problem)



I was thinking about a possible solution using complex numbers, assuming for simplicity that one pair of sides is parallel to the real axis in the Argand plane. A similar solution, not involving too much calculation, can be done using vectors!
I'm not sure about coordinate geometry, as it tends to get quite cumbersome in such situations; however, if anyone does manage to prove it neatly, please post the solution.



Share your ideas!


Answer



I think the figure proof is an application of proof by contradiction.




Suppose that 2[quad EFGH] = [//gm ABCD] and none of the diagonals is parallel to the sides of //gm ABCD. Then we can draw parallel lines EPS, PQH … etc. to form several parallelograms as shown.



In that case, $[\triangle AFE] = [\triangle SFE] = \dfrac 12$ [//gm AFSE] and ….



Clearly, there is no region left to pair up with parallelogram PQRS to make the supposition valid. If the given condition is to be maintained, PQRS must degenerate to a region of zero area. Then we have



Case-1 PQRS is made up of EP(Q)S(R)G, which is in fact EG and is parallel to the side AB.



Case-2 PQRS is made up of FS(P)Q(R)H, …..




Case-3 PQRS is just the point T with ETG // AB and FTH // BC.


algebra precalculus - How to determine linear function from "at a price (p) of 220 the demand is 180"


In my math book I can look up the answers to the exercises, but here I have no idea how I would arrive at the solution. Probably my mind is stuck, as I can't find a new way to think about the issue.


"The supply function of a good is a linear function. At a price (p) of 220 the demand is 180 units. At a price of 160 the demand is 240 units."


  1. Determine the demand function.

"Also the supply function is linear. At a price of 150 the supply is 100 units and at a price of 250 the supply is 300 units".


  2. Determine the supply function.

Could someone explain to me how I would approach to solve these two questions as the book doesn't provide the explanation but only the answers? Thank you.



Answer



You know that the demand function is a linear function of price $p$, say $D(p)=\alpha\cdot p+\beta$ for suitable parameters $\alpha,\beta\in\mathbb R$. From the conditions given in your problem, you know that


$$ D(220)=\boldsymbol{220\alpha+\beta=180}\qquad\text{and}\qquad D(160)=\boldsymbol{160\alpha+\beta=240}. $$


From the bold equations (a system of two linear equations in two variables), one obtains the coefficients $\alpha=-1$, $\beta=400$, which enable you to write down the demand function as $D(p)=400-p$.


In fact, the same can be done for the supply function: the conditions $S(150)=100$ and $S(250)=300$ give $S(p)=2p-200$.
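
For completeness, here is a minimal sketch of the same computation in Python with NumPy (the variable names are mine): each linear function $\alpha p+\beta$ is pinned down by two points, i.e. by a $2\times 2$ linear system in $(\alpha,\beta)$.

```python
import numpy as np

# Rows encode alpha*p + beta = value at the two given prices.
demand = np.linalg.solve([[220, 1], [160, 1]], [180, 240])
supply = np.linalg.solve([[150, 1], [250, 1]], [100, 300])
print(demand)  # [-1. 400.]  ->  D(p) = 400 - p
print(supply)  # [ 2. -200.] ->  S(p) = 2p - 200
```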


modular arithmetic - How to find the remainder when the denominator is greater than the numerator?



I am learning modular arithmetic and trying to figure out how to find the remainder when the denominator is greater than the numerator.



For example:



i) $2 \bmod 5 =$ ?




I tried to solve this but I got $0$ as the remainder, whereas calculators give $2$. I was doing ordinary long division, appending a decimal point and digits after it.



ii) $-2 \bmod 5$ = ?



Also, I wanted to know how to handle negative numbers in modular arithmetic.


Answer



Remember: by definition, the remainder when dividing $m$ by $n$ is a number $r$ such that





  1. $0\leq r < n$
  2. There exists some $k$ such that $k\cdot n + r = m$



by that definition, the remainder when dividing $2$ by $5$ is $2$, because $$0\cdot 5 + 2 = 2$$






As far as modular arithmetic is concerned, remember that $$x\equiv y\mod n$$
if and only if $n\mid x-y$. As a consequence, it is always true that $$x+k\cdot n\equiv x\pmod n$$ for any integer $k$. In your case, that means that $$-2\equiv -2+1\cdot 5 = 3\pmod 5$$
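
As a two-line illustration, Python's `%` operator follows exactly this convention, returning a remainder in $[0, n)$ even for negative inputs:

```python
print(2 % 5)   # 2, since 2 = 0*5 + 2
print(-2 % 5)  # 3, since -2 = (-1)*5 + 3
```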



algebra precalculus - What is wrong with this proof? -3 = 3



What is wrong with this proof?



$-3 = \sqrt[3]{-27} = {(-27)}^{\frac 13} = {(-27)}^{\frac 26} = \sqrt[6]{{(-27)}^2} = \sqrt[6]{{27}^2} = {(27)}^{\frac 26} = {(27)}^{\frac 13} = \sqrt[3]{27} = 3$



This is obviously false since $-3 \neq 3$.




But still I can't figure out which equation is the wrong one and why that is.



Thanks in advance for anyone who will help.


Answer



As suggested in the comments, $(-27)^{2/6} \neq ((-27)^2)^{1/6}$, because $a^{bc}=(a^b)^c$ does not hold in general for $a<0$.



You can find more related info in @mrf's answer here: Is $(-1)^{ab} = (-1)^{ba}$ true? => $(-1)^{ab} = ((-1)^a )^b$ is true?.


sequences and series - What must be the simplest proof of the sum of first $n$ natural numbers?




I was studying sequences and series and have used the formula $$1+2+3+\cdots +n=\frac{n(n+1)}{2}$$ many times. I want its proof.



Thanks for any help.


Answer



Let the sum be $$S_n=1+2+3+\cdots +n\tag1$$ On reversing the same equation we get $$S_n=n+(n-1)+(n-2)+\cdots +1\tag2$$ On adding $(1)$ and $(2)$, each term equals $n+1$, and it occurs $n$ times, i.e. $$2S_n=\underbrace{(n+1)+(n+1)+\cdots+(n+1)}_{n\text{ times}}$$ $$2S_n=n(n+1)$$ $\therefore$ $$S_n=\frac{n(n+1)}{2}.$$ Hope it helps!!!
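
Not a proof, of course, but here is a one-line empirical check of the closed form in Python:

```python
assert all(sum(range(1, n + 1)) == n * (n + 1) // 2 for n in range(1, 1000))
```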


Friday, March 15, 2019

probability - On average, how many friends would I need to have to have at least one friend's birthday every day?

I know that because of the birthday problem, even after 365 friends, you're going to have a lot of doubles and that there's also an infinitesimal chance that even with infinite friends that there's one day left out. But I was curious how many friends you'd need on average to have every day represented (this is ignoring leap day and assuming birthdays are equally distributed). Or to generalize it further, given n unique boxes and you're placing balls in them with an equal 1/n chance for the ball to go into any box, how many balls would you have to place on average before every box had at least one ball?
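
This is the coupon-collector problem, whose expected answer is $n H_n = 365\left(1+\tfrac12+\cdots+\tfrac1{365}\right)\approx 2364.6$. A simulation sketch in Python (the function name is mine) whose average should land near that value:

```python
import random

def days_until_all_covered(n=365, trials=1000):
    # Draw uniform "birthdays" until every day has appeared at least once,
    # and average the number of draws over many trials.
    total = 0
    for _ in range(trials):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            draws += 1
        total += draws
    return total / trials

print(days_until_all_covered())  # typically around 2360-2370
```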

elementary number theory - What is the largest power of 2 that divides $200!/100!$.





What is the largest power of 2 that divides $200!/100!$.



No use of calculator is allowed.
I had proceeded by a brute-force method, which I now regret.
I would like to know your methods.


Answer




Find highest power of $2$ in $200!$ and $100!$, using Legendre's formula



In $200!$, highest power of $2$



$$=\lfloor 200/2 \rfloor +\lfloor 200/4 \rfloor +\lfloor 200/8 \rfloor +\lfloor 200/16 \rfloor +\lfloor 200/32 \rfloor +\lfloor 200/64 \rfloor +\lfloor 200/128 \rfloor $$



$$=100+50+25+12+6+3+1=197$$



In $100!$, highest power of $2$




$$=\lfloor 100/2 \rfloor +\lfloor 100/4 \rfloor +\lfloor 100/8 \rfloor +\lfloor 100/16 \rfloor +\lfloor 100/32 \rfloor +\lfloor 100/64 \rfloor$$



$$= 50 + 25+12+6+3+1 =97$$



Now, just subtract the two, and we get $100$ as the answer.
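
Legendre's formula is also easy to turn into a few lines of Python, if you want to verify the arithmetic (a sketch; the function name is mine):

```python
def legendre(n, p):
    # Exponent of the prime p in n!: sum of floor(n / p^i) over i >= 1.
    e = 0
    while n:
        n //= p
        e += n
    return e

print(legendre(200, 2))                     # 197
print(legendre(100, 2))                     # 97
print(legendre(200, 2) - legendre(100, 2))  # 100
```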


combinatorics - Where does the following formula come from?




Where does the following formula come from?



For a Laurent polynomial $f(z)=\sum a_j z^j$ and a positive integer $n$ we have $$\sum_{k\equiv \alpha\pmod n} a_k=\frac1n\sum_{\omega:\omega^n=1} \omega^{-\alpha}f(\omega).$$



I hope someone can answer this question or give some references about it! Thanks a lot!


Answer



first we shall see that:



$\frac1n\sum_{\omega:\omega^n=1} \omega^{-\alpha}f(\omega) = \frac1n\sum_{\omega:\omega^n=1}\omega^{-\alpha }\sum_{j}a_j\omega^j = \frac1n\sum_{\omega:\omega^n=1}\sum_{j}\omega^{j-\alpha }a_j =$




$ = \frac1n\sum_{j}\sum_{\omega:\omega^n=1}\omega^{j-\alpha }a_j$



I leave it to you to think about why we can perform the last move (changing the order of summation).
Also note that $\omega^{j-\alpha} = 1 \iff j \equiv \alpha \pmod n$, and since there are $n$ roots of unity in $\mathbb{C}$ we get:



$ \frac1n\sum_{j}\sum_{\omega:\omega^n=1}\omega^{j-\alpha }a_j = \frac1n\sum_{j \equiv \alpha \pmod n}\sum_{\omega:\omega^n=1}a_j + \frac1n\sum_{j \not\equiv \alpha \pmod n}\sum_{\omega:\omega^n=1}\omega^{j-\alpha }a_j = \sum_{j \equiv \alpha \pmod n} \frac{n}{n} a_j + \frac1n\sum_{j \not\equiv \alpha \pmod n}a_j\sum_{\omega:\omega^n=1}\omega^{j-\alpha }$



But we know that for every integer $t$ with $t\not\equiv 0\pmod n$ we have $\sum_{\omega:\omega^n=1}\omega^{t}=0$ (the $n$-th roots of unity sum to zero along any nontrivial power), so the second sum vanishes and we are left with $\sum_{k\equiv \alpha\pmod n}a_k$, as claimed.
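
A numerical check of the filter, sketched in Python on $f(z)=(1+z)^8$ (whose coefficients are $\binom{8}{j}$), with $n=3$ and $\alpha=1$:

```python
import cmath
from math import comb

n, alpha = 3, 1
f = lambda z: (1 + z) ** 8

roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
filtered = sum(w ** (-alpha) * f(w) for w in roots) / n
direct = sum(comb(8, j) for j in range(9) if j % n == alpha)  # 8+70+8 = 86
print(abs(filtered - direct) < 1e-9)  # True
```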

functional equations - On sort-of-linear functions



Background



A function $ f: \mathbb{R}^n \rightarrow \mathbb{R} \ $ is linear if it satisfies
$$ (1)\;\; f(x+y) = f(x) + f(y) \ , \ and $$
$$ (2)\;\; f(\alpha x) = \alpha f(x) $$
for all $ x,y \in \mathbb{R}^n $ and all $ \alpha \in \mathbb{R} $.




A function satisfying only (2) is not necessarily linear. For example* $ f: \mathbb{R}^2 \rightarrow \mathbb{R} \ $ defined by $ f(x) = |x| \ $ (where $ |x| \ $ is the $ L^2 $ norm) satisfies (2) but is not linear. However, a function satisfying (1) does satisfy a weaker version of (2), namely $$ (2b)\;\; f(ax)=af(x) $$ for all $ a \in \mathbb{Q} $.



*Edit: As pointed out in the comments this example doesn't quite work since |ax|=|a||x|.



When $f$ is continuous it's relatively straightforward to show that (2b) implies (2). I want to say that continuity is a necessary condition for (1) to imply (2), or at least (at worst) that some extra hypothesis (possibly weaker than continuity) is required, but I'm not sure how to show it.



My question is therefore two-fold:




-Is continuity a necessary condition for (1) to imply (2) and how could I go about proving it.

-What are some examples (if there are any) of a function satisfying (1) but not (2)




This can be stated in a slightly more general context as follows:
Suppose $ V\ $ is a vector space over $ \mathbb{R}\ $ and $ f: V \rightarrow \mathbb{R}\ $ satisfies
$$ (1') \;\; f(x+y) = f(x)+f(y) $$ for all $ x,y \in V $.




Under what conditions is $ f\ $ a vector space homomorphism?








The reason I believe continuity is necessary is the similarity to the fact that $ x^{\alpha} x^{\beta} = x^{\alpha + \beta} $ for all $ \alpha,\beta \in \mathbb{R} $. Irrational powers can be defined either via continuity (i.e. if $ \alpha \ $ is irrational, then $ x^{\alpha}:= \lim_{q\rightarrow \alpha} x^q \ $ where $q$ takes on only rational values) or by using the exponential and natural log functions, and either way proving the desired identity boils down to continuity.



I have come up with one example that satisfies (something similar to) (1) and not (2), but it doesn't quite fit the bill:
$ \ $ Define $ \phi : \mathbb{Q}(\sqrt{2}) \rightarrow \mathbb{Q} \ $ defined by $ \phi(a+b\sqrt{2}) = a+b $. Then $ \phi(x+y) = \phi(x)+\phi(y) \ $ but if $ \alpha=c+d\sqrt{2} \ $ then $ \phi(\alpha(a+b\sqrt{2})) = ac+2bd + ad+bc \neq \alpha \ \phi(a+b\sqrt{2}) $.
$ \ $ The problem is that even though $ \mathbb{Q}(\sqrt{2}) \ $ is a vector space over $ \mathbb{Q} $, the $ \alpha \ $ is coming from $ \mathbb{Q}(\sqrt{2}) \ $ instead of the base field $ \mathbb{Q} $.


Answer



It is not true that $|ax|=a|x|$; the correct identity is $|ax|=|a||x|$.




Whether or not adding the hypothesis of continuity is necessary for additive functions to be linear depends on the axiom of choice. Using a Hamel basis $B$ for $\mathbb{R}^n$ over $\mathbb{Q}$ together with one of its countable subsets $A=\{x_1,x_2,\ldots\}$, you can construct a discontinuous $\mathbb{Q}$ linear map from $\mathbb{R}^n$ to $\mathbb{R}$ by taking the unique $\mathbb{Q}$ linear extension of the function $f:B\to\mathbb{R}$ such that $f(x_k)=k|x_k|$ and $f(B\setminus A)=\{0\}$. Since $\mathbb{R}$ linear maps between finite dimensional real vector spaces are continuous, such a map cannot be linear. However, it is consistent with ZF that all additive functions from $\mathbb{R}^n$ to $\mathbb{R}$ are continuous (I am however not knowledgeable in the set theoretic background needed to show this).


Thursday, March 14, 2019

elementary number theory - Is there a way to figure out the $n$ such that $n!$ ends with exactly $k$ zeroes?

Given $n$, I can find the number of zeroes at the end of the decimal representation of $n!$ by $$ \sum_{i=1}^\infty\left\lfloor\frac{n}{5^i}\right\rfloor. $$


Is there a way to reverse this? That is, given $k$, is there a way to find out how many $n$ exist such that $n!$ has exactly $k$ zeroes at the end of its decimal representation besides making educated guesses and checking them?
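
A brute-force sketch in Python (the function names are mine): since the number of trailing zeros of $n!$ grows by at least one every five steps, any $n$ with exactly $k$ trailing zeros satisfies $n < 5k+5$, so a direct scan answers the question for moderate $k$. It also exposes the known wrinkle that the count is always $0$ or $5$:

```python
def trailing_zeros(n):
    # Trailing zeros of n! = sum of floor(n / 5^i).
    z, p = 0, 5
    while p <= n:
        z += n // p
        p *= 5
    return z

def count_n_with_k_zeros(k):
    return sum(1 for n in range(5 * k + 6) if trailing_zeros(n) == k)

print(count_n_with_k_zeros(4))  # 5  (n = 20, ..., 24)
print(count_n_with_k_zeros(5))  # 0  (no n! ends in exactly five zeros)
```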

abstract algebra - The square roots of different primes are linearly independent over the field of rationals


I need to find a way of proving that the square roots of a finite set
of different primes are linearly independent over the field of
rationals.




I've tried to solve the problem using elementary algebra
and also using the theory of field extensions, without success. To
prove linear independence of two primes is easy but then my problems
arise. I would be very thankful for an answer to this question.

improper integrals - Evaluating $\int_{-\infty}^{\infty}e^{-x^2}dx$ using polar coordinates.




I need to express the following improper integral as a double integral of $x$ and $y$ and then, using polar coordinates, evaluate it.



$$I=\int_{-\infty}^{\infty}e^{-x^2}dx$$



Plotting it, we find a Gaussian centered at $x=0$, with the integration extending to infinity on both sides. We can easily express it as a double integral:



$$I=\int_{0}^{1}\int_{-\infty}^{\infty}e^{-x^2}dxdy$$



Evaluating both using Wolfram Alpha gives $\sqrt{\pi}$, so it converges.




I know that $x=r\cos(\theta)$ and that $dx\,dy=r\,dr\,d\theta$, but substituting this in the above integral and evaluating $\theta$ from $0$ to $2\pi$ and $r$ from $0$ to $\infty$ doesn't yield the correct answer. What's wrong here?



Thanks a lot !


Answer



You could try:
$$I^2=\left ( \int_{-\infty}^{\infty}e^{-x^2}\mathrm{d}x \right ) \left( \int_{-\infty}^{\infty}e^{-y^2}\mathrm{d}y \right) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+y^2)}\mathrm{d}x\ \mathrm{d}y$$
then use the polar coordinates to compute the double integral.
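
For completeness, here is the polar computation the answer points to (standard, so only sketched):

$$I^2=\int_{0}^{2\pi}\int_{0}^{\infty}e^{-r^2}\,r\,\mathrm{d}r\,\mathrm{d}\theta=2\pi\left[-\frac{1}{2}e^{-r^2}\right]_{0}^{\infty}=\pi,\qquad\text{so } I=\sqrt{\pi}.$$

The point is that the polar change of variables applies to the two-dimensional integral $I^2$, not to the one-dimensional integral $I$ itself, which is why the substitution attempted in the question fails.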


Basic Probability Die Roll Game


I am not sure if I got the correct answers to these basic probability questions.



You and I play a die rolling game, with a fair die. The die is equally likely to land on any of its $6$ faces. We take turns rolling the die, as follows.


At each round, the player rolling the die wins (and the game stops) if (s)he rolls either "$3$", "$4$","$5$", or "$6$". Otherwise, the next player has a chance to roll.


  1. Suppose I roll 1st. What is the probability that I win in the 1st round?


  2. Suppose I roll in the 1st round, and we play the game for at most three rounds. What is the probability that
    a. I win?
    b. You win?
    c. No one wins? (i.e., I roll "$1$" or "$2$" in the 3rd round?)


My answers:


  1. $4/6$

  2. a. I think it is $(4/6)^2$. Can someone explain why it isn't $4/6 + 4/6$?
    b. I think this is $4/6$.
    c. I think it is $(2/6)^3$

Answer




What is the probability that the player who rolls first wins in the first round?



Your answer of $4/6 = 2/3$ is correct.




What is the probability that the player who rolls first wins the game if the game lasts at most three rounds?



The player who rolls first wins the game if he or she wins during the first round, second round, or third round. We know that the player who rolls first has probability $2/3$ of winning in the first round.


For the player who rolls first to win in the second round, both players must roll a 1 or 2 in the first round, then the first player must roll a 3, 4, 5, or 6 in the second round. The probability that this event occurs is $$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{27}$$


For the first player to win in the third round, both players must roll a 1 or 2 for the first two rounds, then the first player must roll a 3, 4, 5, or 6 in the third round. The probability that this occurs is $$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{243}$$


Since these three events are mutually exclusive, the probability that the player who rolls first wins the game is $$\frac{2}{3} + \frac{2}{27} + \frac{2}{243} = \frac{162}{243} + \frac{18}{243} + \frac{2}{243} = \frac{182}{243}$$


The probability that the first player wins cannot be $$\frac{4}{6} + \frac{4}{6} = \frac{8}{6} = \frac{4}{3} > 1$$ since it is impossible for a probability to exceed $1$.



What is the probability that the player who rolls second wins the game if the game lasts at most three rounds.




The player who rolls second can win in the first round if the first player rolls a 1 or 2, then the second player rolls a 3, 4, 5, or 6. The probability that this event occurs is $$\frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{9}$$


For the second player to win in the second round, the first and second players must both roll a 1 or 2 in the first round, the first player must roll a 1 or 2 in the second round, then the second player must roll a 3, 4, 5, or 6 in the second round. The probability that this event occurs is $$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{81}$$


For the second player to win in the third round, both players must roll a 1 or 2 in the first two rounds, the first player must roll a 1 or 2 in the third round, then the second player must roll a 3, 4, 5, or 6 in the third round. The probability that this event occurs is $$\frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \cdot \frac{4}{6} = \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{2}{3} = \frac{2}{729}$$


Since these three events are mutually exclusive, the probability that the second player wins is $$\frac{2}{9} + \frac{2}{81} + \frac{2}{729} = \frac{162}{729} + \frac{18}{729} + \frac{2}{729} = \frac{182}{729}$$



What is the probability that neither player wins a game that lasts at most three rounds?



For this event to occur, both players must roll a 1 or 2 in all three rounds. The probability that this event occurs is $$\left(\frac{2}{6}\right)^6 = \left(\frac{1}{3}\right)^6 = \frac{1}{729}$$


Check: There are three possibilities: The first player wins, the second player wins, or neither player wins. Therefore, the probabilities of these three events should add up to $1$. Observe that $$\frac{182}{243} + \frac{182}{729} + \frac{1}{729} = \frac{546}{729} + \frac{182}{729} + \frac{1}{729} = 1$$
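
A Monte Carlo sketch in Python (the function name is mine) that should reproduce these three probabilities:

```python
import random

def simulate(trials=200_000):
    # Empirical (first wins, second wins, nobody wins) for three rounds.
    wins = [0, 0, 0]
    for _ in range(trials):
        for roll in range(6):              # 3 rounds = at most 6 rolls
            if random.randint(1, 6) >= 3:  # 3, 4, 5, or 6 wins
                wins[roll % 2] += 1        # even-indexed rolls are player 1
                break
        else:
            wins[2] += 1
    return [w / trials for w in wins]

# Exact values: 182/243 ~ 0.749, 182/729 ~ 0.250, 1/729 ~ 0.00137.
print(simulate())
```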


Wednesday, March 13, 2019

elementary number theory - How to find the inverse modulo m?


For example: $$7x \equiv 1 \pmod{31} $$ In this example, the modular inverse of $7$ with respect to $31$ is $9$. How can we find out that $9$? What are the steps that I need to do?


Update
If I have a general modulo equation:
$$5x + 1 \equiv 2 \pmod{6}$$


What is the fastest way to solve it? My initial thought was: $$5x + 1 \equiv 2 \pmod{6}$$ $$\Leftrightarrow 5x + 1 - 1\equiv 2 - 1 \pmod{6}$$ $$\Leftrightarrow 5x \equiv 1 \pmod{6}$$


Then solve for the inverse of $5$ modulo $6$. Is that the right approach?


Thanks,


Answer




  1. One method is simply the Euclidean algorithm: \begin{align*} 31 &= 4(7) + 3\\ 7 &= 2(3) + 1. \end{align*} So $ 1 = 7 - 2(3) = 7 - 2(31 - 4(7)) = 9(7) - 2(31)$. Viewing the equation $1 = 9(7) -2(31)$ modulo $31$ gives $ 1 \equiv 9(7)\pmod{31}$, so the multiplicative inverse of $7$ modulo $31$ is $9$. This works in any situation where you want to find the multiplicative inverse of $a$ modulo $m$, provided of course that such a thing exists (i.e., $\gcd(a,m) = 1$). The Euclidean Algorithm gives you a constructive way of finding $r$ and $s$ such that $ar+ms = \gcd(a,m)$, but if you manage to find $r$ and $s$ some other way, that will do it too. As soon as you have $ar+ms=1$, that means that $r$ is the modular inverse of $a$ modulo $m$, since the equation immediately yields $ar\equiv 1 \pmod{m}$.





  2. Another method is to play with fractions (Gauss's method): $$\frac{1}{7} = \frac{1\times 5}{7\times 5} = \frac{5}{35} = \frac{5}{4} = \frac{5\times 8}{4\times 8} = \frac{40}{32} = \frac{9}{1}.$$ Here, you reduce modulo $31$ where appropriate, and the only thing to be careful of is that you should only multiply and divide by things relatively prime to the modulus. Here, since $31$ is prime, this is easy. At each step, I just multiplied by the smallest number that would yield a reduction; so first I multiplied by $5$ because that's the smallest multiple of $7$ that is larger than $32$, and later I multiplied by $8$ because it was the smallest multiple of $4$ that is larger than $32$. Added: As Bill notes, the method may fail for composite moduli.



Both of the above methods work for general modulus, not just for a prime modulus (though Method 2 may fail in that situation); of course, you can only find multiplicative inverses if the number is relatively prime to the modulus.


Update. Yes, your method for general linear congruences is the standard one. We have a very straightforward method for solving congruences of the form $$ax \equiv b\pmod{m},$$ namely, it has solutions if and only if $\gcd(a,m)|b$, in which case it has exactly $\gcd(a,m)$ solutions modulo $m$.


To solve such equations, you first consider the case with $\gcd(a,m)=1$, in which case $ax\equiv b\pmod{m}$ is solved either by finding the multiplicative inverse of $a$ modulo $m$, or as I did in method $2$ above looking at $\frac{b}{a}$.


Once you know how to solve them in the case where $\gcd(a,m)=1$, you can take the general case of $\gcd(a,m) = d$, and from $$ax\equiv b\pmod{m}$$ go to $$\frac{a}{d}x \equiv \frac{b}{d}\pmod{\frac{m}{d}},$$ to get the unique solution $\mathbf{x}_0$. Once you have that unique solution, you get all solutions to the original congruence by considering $$\mathbf{x}_0,\quad \mathbf{x}_0 + \frac{m}{d},\quad \mathbf{x}_0 + \frac{2m}{d},\quad\ldots, \mathbf{x}_0 + \frac{(d-1)m}{d}.$$
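
As a practical footnote, here is a sketch of both computations in Python (since version 3.8, the built-in `pow` accepts a negative exponent and returns the modular inverse when it exists):

```python
print(pow(7, -1, 31))     # 9, the inverse of 7 modulo 31

# Solving 5x + 1 = 2 (mod 6): reduce to 5x = 1 (mod 6), then invert 5.
x = pow(5, -1, 6) * 1 % 6
print(x)                  # 5, and indeed 5*5 + 1 = 26 = 2 (mod 6)
```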


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...