Sunday, September 30, 2018

functional equations - show that $f(x)=c\log x$ for some $c$

Let $f:\mathbb{R}^+\rightarrow\mathbb{R}$ be a continuous function satisfying $f(xy)=f(x)+f(y)$ for all $x,y>0$. Prove that $f(x)=c\log x$ for some constant $c$.
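(Editorial sketch, as no answer was recorded for this post.) The standard route is to substitute $g(t)=f(e^t)$: then $g$ is continuous on $\mathbb{R}$ and $$g(s+t)=f(e^s e^t)=f(e^s)+f(e^t)=g(s)+g(t),$$ and a continuous solution of Cauchy's functional equation is linear, so $g(t)=ct$ for some constant $c$, giving $f(x)=g(\ln x)=c\ln x$.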

Calculate limit with summation index in formula










I want to calculate the following:




$$ \lim_{n \rightarrow \infty} \left( e^{-n} \sum_{i = 0}^{n} \frac{n^i}{i!} \right) $$



Numerical calculations show it has a value close to 0.5, but I am not able to derive this analytically. My problem is that I lack a method for handling $n$ both as a summation limit and as a variable inside the sum.
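(Editorial aside.) Before deriving it analytically, the claim is easy to test numerically. A minimal Python sketch, with the function name my own; the running term $\frac{n^i}{i!}e^{-n}$ is updated multiplicatively to avoid overflow:

    import math

    def poisson_cdf_at_n(n):
        # e^{-n} * sum_{i=0}^{n} n^i/i!, accumulated term by term
        term = math.exp(-n)          # the i = 0 term, e^{-n}
        total = term
        for i in range(1, n + 1):
            term *= n / i            # turn the (i-1)-th term into the i-th
            total += term
        return total

    for n in (10, 100, 500):
        print(n, poisson_cdf_at_n(n))    # approaches 0.5 from above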


Answer



I don't want to put this down as my own solution, since I have already seen it solved on MSE.



One way is to use a sum of Poisson random variables with parameter $1$: if $S_n=\sum_{k=1}^{n}X_k$ with the $X_k$ i.i.d. $\mathrm{Poisson}(1)$, then $S_n \sim \mathrm{Poisson}(n)$ and $e^{-n}\sum_{i=0}^{n}\frac{n^i}{i!}=\Pr(S_n\le n)$; applying the Central Limit Theorem then gives the limit $\Phi(0)=\frac{1}{2}$.



The other solution is purely analytic and is detailed in the paper by Laszlo and Voros (1999) called 'On the Limit of a Sequence'.


Convergence in measure and metric notion



Any help with this problem is appreciated.



Given a measurable set $D$ of finite measure, define $L_0(D)$ to be the vector space of real-valued measurable functions on $D$, and define $d(f,g) := \int_D \min\{|f-g|,1\}$. It can be proven that $d(\cdot,\cdot)$ is a metric. I wanted to know how to prove the following statements: $$(f_n) \text{ is Cauchy in measure } \Leftrightarrow (f_n) \text{ is Cauchy in } (L_0(D),d), \text{ and}$$
$$ d(f_n,f) \rightarrow 0 \Leftrightarrow f_n \rightarrow f \text{ in measure.}$$



I was able to show one side of $ d(f_n,f) \rightarrow 0 \Leftrightarrow f_n \rightarrow f$ in measure. The proof for that is as follows: For $\epsilon \in (0,1)$




$$(\Leftarrow) \quad d(f_n,f) \leq m(D \cap \{\vert f_n - f \vert \geq \epsilon \}) + \epsilon\, m(D) \rightarrow \epsilon (1+m(D)) \text{ as } n \to \infty $$ (Is this enough for $(\Leftarrow)$? And how should I proceed with the rest?)


Answer



It's enough for $\Leftarrow$ since we get that for each $\varepsilon>0$, $\limsup_{n\to +\infty}d(f_n,f)\leqslant \varepsilon(1+m(D))$.



For the other direction, assume that $d(f_n,f)\to 0$ and fix $1>\varepsilon>0$. Then
$$\int_D\min\{1,|f_n(x)-f(x)|\}\,dm\geqslant \int_{\{|f_n(x)-f(x)|>\varepsilon\}}\min\{1,|f_n(x)-f(x)|\}\,dm\geqslant \varepsilon\, m\{|f_n(x)-f(x)|>\varepsilon\},$$
so $m\{|f_n(x)-f(x)|>\varepsilon\}\leqslant \varepsilon^{-1}d(f_n,f)\to 0$, which is exactly $f_n\to f$ in measure.


Saturday, September 29, 2018

probability - Rolling a die, expected number of coins won



So this is a problem I'm stuck on,



You roll a fair 4-sided die; then, with probability 1/3 you get to roll once more, and with probability 2/3 you have to stop.
Assume that you get as many coins as the sum of the rolls.
What is the probability you will win an even number of coins?



Can somebody help?




PS: You can get as many extra rolls as possible!


Answer



With a coin, with probability 1/4 you flip it again and with probability 2/4 you stop. With a four-sided die, with probability 1/3 you get one more roll and with probability 2/3 you stop. If you multiply those together, you get $\frac24\cdot\frac23=\frac4{12}=\frac13$, which is the probability that you roll the four-sided die one more time. Also, $\frac24$ equals $\frac12$, which is the probability of flipping a coin.
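(Editorial aside.) The process described in the question is straightforward to simulate; here is a minimal Monte Carlo sketch (the function name and trial count are my own choices) estimating the probability of an even total:

    import random

    def total_coins():
        total = random.randint(1, 4)      # first roll of the fair 4-sided die
        while random.random() < 1/3:      # with probability 1/3, roll once more
            total += random.randint(1, 4)
        return total

    trials = 100_000
    even = sum(total_coins() % 2 == 0 for _ in range(trials))
    print(even / trials)                  # empirically close to 0.5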


find the square root of the complex number 1 + 2i?


Find the square root of the complex number $1 + 2i$




I was trying this question many times, but I could not get it and I don't know where to start.




Actually after squaring the $1+ 2i$ I got $-1 + 2i$, and I also tried multiplying by $1+i$. However I don't know the exact answer.



Thanks in advance
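(Editorial sketch, as no answer was recorded for this post.) The usual starting point is to write $\sqrt{1+2i}=a+bi$ with $a,b$ real and square both sides: $$(a+bi)^2=a^2-b^2+2abi=1+2i \implies a^2-b^2=1,\quad 2ab=2.$$ Substituting $b=1/a$ into the first equation gives $a^4-a^2-1=0$, so $a^2=\frac{1+\sqrt5}{2}$, and the two square roots are $$\pm\left(\sqrt{\frac{1+\sqrt5}{2}}+i\sqrt{\frac{\sqrt5-1}{2}}\right).$$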

Friday, September 28, 2018

integration - How is it that treating Leibniz notation as a fraction is fundamentally incorrect but at the same time useful?

I have long struggled with the idea of Leibniz notation and the way it is used, especially in integration.



These threads discuss why treating Leibniz notation as a fraction and cancelling differentials is incorrect, but also go on to say that the notation is suggestive and we use it because it simplifies things:



What is the practical difference between a differential and a derivative?



If dy/dt * dt doesn't cancel, then what do you call it?




In them they say to treat the differential at the end of an integration expression as a "right parenthesis". This throws me off a bit because we can so easily do something like:



$\int cos(3x) \, dx$



$u=3x$



$du=3dx$



$\frac{1}{3}du=dx$




and then proceed to integrate:



$\frac{1}{3}\int cos(u) \, du$



and arrive at the correct answer with "incorrect" notation. I am supposed to treat the differential as a parenthesis but using this notation the differential seems to have a value.



How does this incorrect notation do such a good job ensuring that we do not disobey the "reverse chain rule" and ensures that our integrand is in the form $f'(g(x))\,g'(x)$ ?



People often say that it is very suggestive and I am wondering how. Excuse the LaTeX if it looks weird. This is my first time using it.
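(Editorial aside.) One way to see why the manipulation is safe: $u$-substitution is the chain rule read backwards, so the differentials only ever appear in combinations the chain rule licenses. If $F'=f$, then $$\frac{d}{dx}F(g(x))=f(g(x))\,g'(x)\quad\Longrightarrow\quad\int f(g(x))\,g'(x)\,dx=F(g(x))+C.$$ In the example above, $f=\cos$, $g(x)=3x$ and $g'(x)=3$, and writing $\frac13\,du=dx$ is shorthand for inserting the factor $g'(x)=3$ and compensating with $\frac13$: $$\int\cos(3x)\,dx=\frac13\int\cos(3x)\cdot3\,dx=\frac13\sin(3x)+C.$$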

elementary number theory - Prove that $(2\cdot 4 \cdot 6 \cdots 4000)-(1\cdot 3 \cdot 5 \cdots 3999)$ is a multiple of $2001$



Prove that the difference between the product of the first 2000 even numbers and the first $2000$ odd numbers is a multiple of $2001$. Please show the method.



I have started with the following process:



$$(2\cdot 4 \cdot 6 \cdots 4000)-(1\cdot 3 \cdot 5 \cdots 3999)$$




How we can proceed it to find that it is a multiple of $2001$?


Answer



Try proving that it is equal to $2001k_1 - 2001k_2$.



The product of the odd numbers has $2001$ as a factor, since $2001$ itself is one of the odd numbers from $1$ to $3999$; hence it can be written as $2001k_2$. The product of the even numbers contains the factors $6$ and $1334 = 2\cdot 667$, so it is divisible by $3$ and by $667$; since $2001 = 3\cdot 667$ and $\gcd(3,667)=1$, it has $2001$ as a factor as well, making it $2001k_1$.



So $$2\cdot4\cdot6\cdot...\cdot4000 - 1\cdot3\cdot5\cdot...\cdot3999 = 2001k_1 - 2001k_2 = 2001(k_1-k_2)$$


linear algebra - If $AB = I$ then $BA = I$



If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$.



I do not understand anything more than the following.


  1. Elementary row operations.

  2. Linear dependence.

  3. Row reduced forms and their relations with the original matrix.

If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem?



P.S.: Please avoid using the transpose and/or inverse of a matrix.


Answer



Dilawar says in 2. that he knows linear dependence! So I will give a proof, similar to that of TheMachineCharmer, which uses linear independence.


Suppose each matrix is $n$ by $n$. We consider our matrices to all be acting on some $n$-dimensional vector space with a chosen basis (hence isomorphism between linear transformations and $n$ by $n$ matrices).


Then $AB$ has range equal to the full space, since $AB=I$. Thus the range of $B$ must also have dimension $n$. For if it did not, then a set of $n-1$ vectors would span the range of $B$, so the range of $AB$, which is the image under $A$ of the range of $B$, would also be spanned by a set of $n-1$ vectors, hence would have dimension less than $n$.


Now note that $B=BI=B(AB)=(BA)B$. By the distributive law, $(I-BA)B=0$. Thus, since $B$ has full range, the matrix $I-BA$ gives $0$ on all vectors. But this means that it must be the $0$ matrix, so $I=BA$.


Thursday, September 27, 2018

real analysis - The Second Derivative Test and the Mean Value Theorem



I have been studying for the upcoming Introductory Real Analysis exam and got stuck upon the proof of the second derivative test. Here is a verbatim text of the theorem and the proof from the book:



THEOREM:
Let $I$ be an open interval containing the point $x_0$ and suppose that the function $f:I\rightarrow R$ has a second derivative. Suppose that $ f'(x_0)=0$. If $f''(x_0)>0$, then $x_0$ is a local minimizer of $f:I \rightarrow R$.



PROOF:
Since

$f''(x_0)=\lim_{x\to{x_0}} \frac{f'(x)-f'(x_0)}{x-x_0}>0$, it follows (see Exercise 16) that there is a $\delta>0$ such that the open interval $(x_0-\delta, x_0+\delta)$ is contained in $I$ and $\frac{f'(x)-f'(x_0)}{x-x_0}>0$ if $0<|x-x_0|<\delta$. But $f'(x_0)=0$, so (4.13) [the preceding inequality] amounts to the assertion that if $0<|x-x_0|<\delta$, then $f'(x)>0$ if $x>x_0$ and $f'(x)<0$ if $x<x_0$. Using these two inequalities and the [Lagrange] Mean Value Theorem, it follows that $f(x)>f(x_0)$ if $0<|x-x_0|<\delta$.



The textbook states the Lagrange Mean Value Theorem as follows:
Suppose the function $f:[a,b] \rightarrow R$ is continuous and $f: (a,b) \rightarrow R$ is differentiable. Then there is a point $x_0$ in the open interval $(a,b)$ at which $f'(x_0)=\frac{f(b)-f(a)}{b-a}$.



Luckily I was able to solve 'Exercise 16,' but I just don't see how I can apply the Lagrange Mean Value Theorem in the latter parts of the proof. Any small hints would be appreciated. Thanks in advanced.


Answer



First, apply your MVT to the interval $I=(x_0,x_0+\delta)$. Take an $x\in I$. The MVT says that there is a $c$ with $x_0<c<x$ such that:
$f'(c)=\frac{f(x)-f(x_0)}{x-x_0}$.




Since $f'(c) >0$ (it's one of your inequalities) and $x-x_0>0$, we get $f(x)-f(x_0)=f'(c)(x-x_0)>0$, i.e. $f(x)>f(x_0)$. A similar argument holds for the interval $I'=(x_0-\delta,x_0)$.


limits - Summation of exponential series





Evaluate the limit:
$$
\lim_{n \to \infty}e^{-n}\sum_{k = 0}^n \frac{n^k}{k!}
$$
It is not as easy as it seems and the answer is definitely not 1.
Please help in solving it.



Answer



Given an event whose frequencies follow the Poisson distribution and which occurs an average of $n$ times per trial, the probability that it occurs $k$ times in a given trial is



$e^{-n} \frac{n^k}{k!}$.



So, the sum in the limit is the probability that the event (which now must have an integer average) occurs no more than the mean number of times. For large $n$, the Poisson distribution is well-approximated by the normal distribution (this can be made into a precise limiting statement). The normal distribution is symmetric about its mean, so the limit of the sum is the probability that a normally distributed random variable is less than the mean of the variable, namely $\frac{1}{2}$.


Additive functional equation



Find all function $f: \mathbb{R} \rightarrow \mathbb{R}$ satisfying
$$ f(x+y) = f(x) + f(y)$$ and $$ f(f(x)) = x$$
for all $x, y \in \mathbb{R}$




This is one problem involving additive functional equation, but I don't know how to deal with the case $x$ is an irrational number. I appreciate all help and ideas. Thank you.



P.S: Or at least from the given solution it would be nice if you can infer one of the following statements:




  1. $f(x)$ is continuous on $\mathbb{R}$


  2. $f(x)$ is continuous at one point


  3. $f(x)$ is monotonic on $\mathbb{R}$


  4. $f(x)$ is bounded (on any interval)




Answer



It is straightforward to show that Cauchy's functional equation implies $f(qx)=q f(x)$ for all $q\in\mathbb{Q}, x\in\mathbb{R}$. Thus we can see $f$ as a $\mathbb{Q}$-linear map of the $\mathbb{Q}$-vector space $\mathbb{R}$. Like every linear map, it is determined by its values on a basis.



Let us choose a $\mathbb{Q}$-basis $B\subset\mathbb{R}$ of $\mathbb{R}$. Note that this requires the axiom of choice. That is, for every $x\in\mathbb{R}$ we can choose a coefficient function $x^*:B\rightarrow \mathbb{Q}$ such that $x^*(b)\not=0$ only for finitely many $b\in B$ and



$$x=\sum_{b\in B} x^*(b) b$$



Since $f$ is a linear map, it can be represented by an (infinite) $B\times B$ matrix of rational coefficients $(F_{b,b^\prime})_{b,b^\prime\in B}$ (with only finitely many non-zero terms in every column) such that



$$f(x)= F\cdot x$$




where $\cdot$ denotes multiplication of the matrix $F$ with the $\mathbb{Q}$-vector $x$, i.e.



$$f(x)^*(b) = \sum_{b^\prime\in B} F_{b,b^\prime} x^*(b^\prime)$$



$F_{b,b^\prime}$ is simply the coefficient of $b^\prime$ in the expansion of $f(b)$.



These are all solutions to Cauchy's functional equation by itself.
The condition $f(f(x))=x$ now reads




$$F^2=I$$



with $I$ being the identity matrix. That is,



$$\sum_{b^{\prime\prime}\in B} F_{b,b^{\prime\prime}} F_{b^{\prime\prime},b^\prime}=\left\{\begin{array}{ll}1 & \text{if}\;b=b^\prime,\\
0 & \text{if}\;b\not=b^\prime.\end{array}\right.$$



This characterizes all the solutions to the simultaneous functional equations. The two solutions corresponding to the continuous solutions are just the cases $F=\pm I$. None of the other solutions satisfy any of your conditions $1.$ through $4.$ (since they all imply $f(x)=\pm x$).
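(Editorial example, to make that closing remark concrete.) Assuming two distinct basis elements $b_1,b_2\in B$, the $\mathbb{Q}$-linear map that swaps them and fixes the rest of the basis, $$f(b_1)=b_2,\qquad f(b_2)=b_1,\qquad f(b)=b \text{ for } b\in B\setminus\{b_1,b_2\},$$ is additive and satisfies $f(f(x))=x$, yet $f\neq\pm\,\mathrm{id}$; by the last sentence of the answer it is therefore discontinuous, non-monotonic, and unbounded on every interval.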


combinatorics - Combinatorial argument for the identity $k\binom{n}{k} = n\binom{n-1}{k-1}$


I am looking for the combinatorial argument for the identity:


\begin{equation} k\binom{n}{k} = n\binom{n-1}{k-1} \end{equation}


This is easy to show algebraically as:



\begin{equation} \binom{n}{k} = \dfrac{n(n-1)(n-2)\cdots(n-k+1)}{k(k-1)!} \end{equation}


  1. What is the combinatorial argument?

  2. What are some general ideas to get started?

Here is a clarification of 2. From what I have seen so far, proving (combinatorially) an identity with an addition sign usually implies that we need to partition a set (this makes sense because of the addition rule and provides a nice visual). On the contrary, the previous observation leads me to believe that multiplication in identities can be resolved with the multiplication principle, but what is the "visual/interpretation" for this? Could someone provide such an interpretation for the example identity given above?


Answer



The number of ways of choosing a committee of $k$ members from a group of $n$ people, then selecting one of the $k$ members to be chair, is equal to the number of ways of choosing the chair from out of the whole group of $n$ first, then selecting the $k-1$ other members from among the $n-1$ other people.


Wednesday, September 26, 2018

exponential function - Why is $0^0$ undefined?






I'm wondering why $0^0$ is considered undefined. Why isn't 1 considered a valid solution?


Considering $0^0 = 1$ seems reasonable to me for two reasons:



  1. $\lim_{x \rightarrow 0} x^x = 1$




  2. $a^x$ would be a continuous function




Could you please explain why 1 can't be a solution and maybe provide some examples that show why having $0^0$ undefined is useful?


Answer



Because for the function $f(x,y) = x^y$, $f:\mathbb{R}^2 \rightarrow \mathbb{R}$, we have two different values moving toward $f(0,0) = 0^0$ along different paths. In other words, $f(0^+,0) = 1$ and $f(0,0^+) = 0$.


But beware that there are some places in mathematics which by convention accept one of these values. For example in some parts of combinatorics we have $0^0 = 1$ to ease the definition of some functions.


complex numbers - Euler's formula and $i^x = \cos(x \cdot \frac{\pi}{2})$



While playing around with plotting software, I just found out that



$$f(x) = i^x = \cos\left(x\cdot\frac{\pi}{2}\right)$$





  1. How does this connect to Euler's formula?

  2. Obviously, here, the alternating sign change is responsible for periodicity and form of the cosine. Is this also true for Euler's formula?



Please don't beat me, I'm an engineering student.


Answer



The quantity $i^x$ by itself is not well-defined. The way one would like to define it is $i^x = e^{x\log i}$, and then use the Taylor series for the exponential to compute $e^{x\log i}$. The problem with this is that $\log i$ is not well-defined: there are infinitely many possible values of $\log i$, namely $$\log i = \frac{\pi i}{2} + 2\pi in$$ for any $n\in \mathbb{Z}$. Thus to define $i^x$, you have to make a choice as to which one of these logarithms you are using. The standard choice would be $\log i = \pi i/2$. In this case, $$i^x = e^{x\log i} = e^{i\pi x/2} = \cos(\pi x/2) + i\sin(\pi x/2).$$ However, if you had chosen $\log i = \pi i/2 + 2\pi in$ for some $n\neq 0$, then $$i^x = e^{x(\pi i/2 + 2\pi in)} = \cos(\pi x/2 + 2\pi nx) + i\sin(\pi x/2 + 2\pi nx).$$


abstract algebra - Constructing finite fields of order $8$ and $27$ or any non-prime

I want to construct a field with $8$ elements and a field with $27$ elements for an ungraded exercise.






For $\bf 8$ elements: So we can't just have $\Bbb Z/8\Bbb Z$ since this is not even an integral domain. But rather we can construct $\Bbb F_2 \oplus \Bbb F_2 \oplus \Bbb F_2 = \{0,1,\alpha,\alpha+1,\beta,\beta+1,\gamma,\gamma+1\}$.




This line of thinking seems to break from what I tried. Is there a better way to construct these things?



I saw this answer: Construct a finite field of order 27



We pick an irreducible polynomial and take the quotient of $\Bbb Z_3[x]$ by it, but this wasn't helpful in my understanding of the general idea/method.
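(Editorial sketch of the general method.) For $GF(8)=\Bbb F_2[x]/(g)$ one can take $g=x^3+x+1$, which is irreducible over $\Bbb F_2$: addition is coefficient-wise XOR, and multiplication is polynomial multiplication followed by reduction mod $g$. A minimal Python sketch, assuming this choice of $g$ and encoding an element $a$ so that bit $i$ of $a$ is the coefficient of $x^i$:

    def gf8_mul(a, b):
        # multiply in F_2[x] by shift-and-XOR (XOR adds coefficients mod 2)
        r = 0
        for i in range(3):
            if (b >> i) & 1:
                r ^= a << i
        # reduce modulo g = x^3 + x + 1 (bit pattern 0b1011)
        for i in (4, 3):
            if (r >> i) & 1:
                r ^= 0b1011 << (i - 3)
        return r

    add_table = [[a ^ b for b in range(8)] for a in range(8)]
    mul_table = [[gf8_mul(a, b) for b in range(8)] for a in range(8)]

For example, with $\alpha$ the class of $x$ (encoded as $2$), gf8_mul(2, 4) returns $3$, reflecting $x\cdot x^2=x^3\equiv x+1 \pmod g$.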

calculus - calculate $\lim_{n\to\infty}\int_{[0,\infty)} \exp(-x)\sin(nx)\,\mathrm{d}\mathcal{L}^1(x)$



We've had the following Lebesgue-integral given:



$$\int_{[0,\infty)} \exp(-x)\sin(nx)\,\mathrm{d}\mathcal{L}^1(x)$$



How can you show the convergence for $n\rightarrow\infty$?




We've tried to use dominated convergence but $\lim_{n\rightarrow\infty} \sin(nx)$ doesn't exist.
Then we've considered the Riemann-integral and tried to show that
$$\int_0^\infty |\exp(-x)\sin(nx)| \,\mathrm dx
$$
exists but had no clue how to calculate it. So how can you show the existence of the Lebesgue-integral and calculate it?


Answer



$ |\exp(-x)\sin(nx)| \leq \exp(-x) $



Moreover, you can easily compute the integral for arbitrary $n$ by integrating by parts twice:




$$ \int_{[0,\infty)} \exp(-x)\sin(nx) = -\exp(-x)\sin(nx) |_{0}^{\infty} +n\int_{[0,\infty)} \exp(-x)\cos(nx) $$



$$ \int_{[0,\infty)} \exp(-x)\sin(nx) = n\int_{[0,\infty)} \exp(-x)\cos(nx) $$



$$ \int_{[0,\infty)} \exp(-x)\sin(nx) = -n\exp(-x)\cos(nx) \Big|_{0}^{\infty}-n^2\int_{[0,\infty)} \exp(-x)\sin(nx) $$



$$ (n^2 + 1) \int_{[0,\infty)} \exp(-x)\sin(nx) = n $$



So the integral equals $ \frac{n}{n^2 +1} $, which tends to $0$ as $n\rightarrow\infty$.



sequences and series - The sides of a triangle are in Arithmetic progression




If the sides of a triangle are in Arithmetic progression and the greatest and smallest angles are $X$ and $Y$, then show that



$$4(1- \cos X)(1-\cos Y) = \cos X + \cos Y$$



I tried using sine rule but can't solve it.


Answer



Let the sides $a-d,a,a+d$ (with $a>d$) be the three sides of the triangle, so $X$ corresponds to the side with length $a-d$ and $Y$ to that with length $a+d$. Using the cosine formula,
\begin{align*}
\cos X & = \frac{(a+d)^2+a^2-(a-d)^2}{2a(a+d)}=\frac{a+4d}{2(a+d)}\\
\cos Y & = \frac{(a-d)^2+a^2-(a+d)^2}{2a(a-d)}=\frac{a-4d}{2(a-d)}
\end{align*}
Then
$$\cos X +\cos Y=\frac{a^2-4d^2}{a^2-d^2}=4 \frac{(a-2d)}{2(a+d)}\frac{(a+2d)}{2(a-d)}=4(1-\cos X)(1-\cos Y).$$


Tuesday, September 25, 2018

functions - Proof of $f^{-1}(B_{1}\setminus B_{2}) = f^{-1}(B_{1})\setminus f^{-1}(B_{2})$



I want to prove the following equation:




$$
f^{-1}(B_{1}\setminus B_{2}) = f^{-1}(B_{1})\setminus f^{-1}(B_{2})
$$



Is this a valid proof? I am not sure, because at one point I am looking at $f(x) \in B_1$, but $f^{-1}(B_1)$ could actually contain points other than $x$.



$$\begin{align*}
x \in f^{-1}(B_{1}\setminus B_{2}) &\iff f(x) \in B_{1}\setminus B_{2} \\
&\iff f(x) \in B_{1} \land f(x) \notin B_{2} \\
&\iff x \in f^{-1}(B_{1}) \land x \notin f^{-1}(B_{2}) \\

&\iff x \in f^{-1}(B_{1})\setminus f^{-1}(B_{2})
\end{align*}$$


Answer



Throughout your proof, $x$ is a given element of the domain of $f$; it doesn't matter that $f^{-1}(B_1)$, or $f^{-1}(B_2)$, or even $f^{-1}(f(x))$, could have more elements.



To help you see why your argument is correct, I'll rewrite your proof but with the domain of the function $f$, and with the "initialization" of the "variable" $x$, more explicit:




Theorem: Let $A$ and $B$ be sets, and consider a function $f:A\to B$. Let $B_1\subseteq B$ and $B_2\subseteq B$ be any subsets. Then we have
$$f^{-1}(B_1\setminus B_2)=f^{-1}(B_1)\setminus f^{-1}(B_2).$$

Proof: For any given $x\in A$, we have
$$\begin{align*}
x\in f^{-1}(B_1\setminus B_2) & \iff f(x)\in B_1\setminus B_2\\
&\iff f(x)\in B_1\land f(x)\notin B_2\\
&\iff x\in f^{-1}(B_1)\land x\notin f^{-1}(B_2)\\
&\iff x\in f^{-1}(B_1)\setminus f^{-1}(B_2)
\end{align*}$$
and therefore, since they comprise the same elements of $A$,
$$f^{-1}(B_1\setminus B_2)=f^{-1}(B_1)\setminus f^{-1}(B_2).$$




real analysis - Showing that $\frac{\sqrt[n]{n!}}{n} \rightarrow \frac{1}{e}$




Show:$$\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}= \frac{1}{e}$$




So I can handle the numerator via the geometric mean, letting $C_{n}=\left(\ln(a_{1})+\cdots+\ln(a_{n})\right)/n$. Call the numerator $a_{n}$ and the denominator $b_{n}$. Is there a way to use this so that I can force the original sequence into the form $1/\left(1+\frac{1}{n}\right)^n$?


Answer



I would like to use the following lemma:




If $\lim_{n\to\infty}a_n=a$ and $a_n>0$ for all $n$, then we have
$$
\lim_{n\to\infty}\sqrt[n]{a_1a_2\cdots a_n}=a \tag{1}
$$





Let $a_n=(1+\frac{1}{n})^n$; then $a_n>0$ for all $n$ and $\lim_{n\to\infty}a_n=e$. Applying (1) we have
$$
\begin{align}
e&=\lim_{n\to\infty}\sqrt[n]{a_1a_2\cdots a_n}\\
&=\lim_{n\to\infty}\sqrt[n]{\left(\frac{2}{1}\right)^1\left(\frac{3}{2}\right)^2\cdots\left(\frac{n+1}{n}\right)^n}\\
&=\lim_{n\to\infty}\sqrt[n]{\frac{(n+1)^n}{n!}}\\&=
\lim_{n\to\infty}\frac{n+1}{\sqrt[n]{n!}}=\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}}+\lim_{n\to\infty}\frac{1}{\sqrt[n]{n!}}=\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}}
\end{align}\tag{2}
$$

where we use (1) in the last equality to show that
$
\lim_{n\to\infty}\frac{1}{\sqrt[n]{n!}}=0.
$



It follows from (2) that
$$
\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=\frac{1}{e}.
$$


sequences and series - Evaluate the sum $S=\frac{1}{4}+\frac{1\cdot3}{4\cdot6}+\frac{1\cdot3\cdot5}{4\cdot6\cdot8}+\cdots$

Evaluate the Sum




$$S=\frac{1}{4}+\frac{1\cdot3}{4\cdot6}+\frac{1\cdot3\cdot5}{4\cdot6\cdot8}+\cdots$$




My try: We have the $n$ th term as




$$T_n=\frac{1\cdot3\cdot5 \cdots (2n-1)}{4\cdot6\cdot8 \cdots (2n+2)}$$ $\implies$

$$T_n=\frac{1\cdot3\cdot5 \cdots (2n-1)}{2^n \times (n+1)!}$$



$$T_n=\frac{(2n)!}{4^n \times n! \times (n+1)!}$$



$$T_n=\frac{\binom{2n}{n-1}}{n \times 4^n}$$



Any clue here?
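(Editorial suggestion, not part of the original post.) The last form says $T_n=\frac{1}{4^n}\cdot\frac{1}{n+1}\binom{2n}{n}=\frac{C_n}{4^n}$, where $C_n$ is the $n$-th Catalan number; the generating function $\sum_{n\ge0}C_nx^n=\frac{1-\sqrt{1-4x}}{2x}$ equals $2$ at $x=\frac14$, which suggests $S=2-C_0=1$. A quick numerical sanity check in Python, using the term ratio $T_{n+1}/T_n=\frac{2n+1}{2(n+2)}$ that follows from the closed form:

    t, s = 0.25, 0.0                  # t = T_1
    for n in range(1, 200_001):
        s += t
        t *= (2*n + 1) / (2*(n + 2))  # T_{n+1} / T_n
    print(s)                          # ~0.9975, slowly creeping up to 1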

Monday, September 24, 2018

math history - A proof for this series?



The summation: $$\sum_{i=1}^{n}i^2=\frac{n(n+1)(2n+1)}{6}.$$ However, how could you prove this? All of the proofs I've seen already assume knowledge of the formula, but how do you prove it without first knowing the formula? What about formulas for higher powers, such as cubics and quartics? If possible, please keep this at a level I can understand (I'm in Calculus AB). Thanks!


Answer




Polya gives a beautiful discussion of this problem in Chapter 3 (called "Recursion") in volume 1 of his book "Mathematical Discovery." One way to "discover" the formula is to exploit the identity $$(n+1)^3=n^3+3n^2+3n+1$$ which implies $$(n+1)^3-n^3=3n^2+3n+1$$
If you sum all such equations from $n=1$ to $n=k$ you find the left side telescopes, leaving $$(k+1)^3-1=3S+3(1+2+\cdots+k)+k$$
where $S$ is the sum you wanted. Of course, the sum in parentheses is just $\frac{k(k+1)}{2}$, and solving the equation for $S$ after making that substitution gives the result.


How to find closed form by induction


How can I find the closed form of


a) $1+3+5+\cdots+(2n+1)$

b) $1^2 + 2^2 + \cdots + n^2$

using induction?



I'm new to this site, and I've thought about using the series $1 + 2 + 3 +\cdots+ n = \frac{n(n+1)}{2}$ to help me out. But isn't that technically using prior knowledge and hence invalid? Am I on the right track? Thanks.


Answer



$a):$ Clearly it is an Arithmetic Series


So, the sum up to $n$th term is $=\frac n2\{2\cdot1+(n-1)2\}=n^2$


Let $P(n):\sum_{1\le r\le n}(2r-1)=n^2$


$P(1):\sum_{1\le r\le 1}(2r-1)=1=1^2$, so $P(n)$ is true for $n=1$


Assume $P(n)$ is true for $n=m$


So, $\sum_{1\le r\le m}(2r-1)=m^2$


$P(m+1): \sum_{1\le r\le m+1}(2r-1)=\sum_{1\le r\le m}(2r-1)+2m+1=m^2+2m+1=(m+1)^2$


So, $P(m+1)$ will be true if $P(m)$ is true.



But we have already shown $P(n)$ is true for $n=1$


So, by induction we can prove that $P(n)$ is true for all positive integer $n$


$b):$


HINT:


We know, $(r+1)^3-r^3=3r^2+3r+1$


Putting $r=1,2,\cdots,n-1,n$ and adding we get $(n+1)^3-1=3\sum_{1\le r\le n}r^2+3\sum_{1\le r\le n}r+\sum_{1\le r\le n}1$


Now, you know $\sum_{1\le r\le n}r=\frac{n(n+1)}2$ and $\sum_{1\le r\le n}1=n$


On simplification $\sum_{1\le r\le n}r^2=\frac{n(n+1)(2n+1)}6$


Can you use similar induction approach here?


sequences and series - Is $\sum_{n=1}^{\infty} 1 = -\frac{3}{12}$ true?




Here is how I derived this...




$$1+2+3+4+...=-\frac{1}{12}$$
$$2+4+6+8+...=2(1+2+3+4+...)=2(-\frac{1}{12})=-\frac{1}{6}$$
Thus $1+3+5+7+...=-\frac{1}{6}-(1+1+1+...) $ because $S_{odd}=S_{even}-(1+1+1+...)$



Also $S_{odd}=(1+2+3+4+...)-(2+4+6+8+...)=-\frac{1}{12}+\frac{1}{6}=\frac{1}{12}$



Thus $\frac{1}{12}=-\frac{1}{6}-(1+1+1+...)$
$$1+1+1+...=-\frac{3}{12}$$




What I think my mistake was (if I have one) is where I assume the sum of all numbers is half the sum of all even numbers; although it seems like that should work, since there is nothing wrong with two times infinity.



Please leave a simple solution (I'm only 15 years old) explaining why this is false: Wikipedia says that $\sum_{n=1}^{\infty} 1 = \infty$.



Edit: I know now that the above notion is false. I recently watched a Numberphile video proving $\sum_{n=1}^{\infty} n = -\frac{1}{12}$. I followed the same line of reasoning to derive the untrue $\sum_{n=1}^{\infty} 1 = -\frac{3}{12}$. I'll admit I was ignorant in believing so, and I apologize for wasting your time.



Thanks for the answer kindly explaining why I was wrong. I guess I should watch this video to un-brainwash me. Sorry again.


Answer



Take two infinite sets: $A=\{1, 2, 3, \cdots\}$ and $B=\{4, 5, 6, \cdots\}.$ Then $A-B = \{1,2,3\}$ and so is of cardinality 3. So infinity minus infinity must be 3. Now by choosing a different $B$ you can make infinity minus infinity any number you like.




In fact, choose $B=\{1, 3, 5, \cdots\}$ and then $A-B =\{2, 4, 6, \cdots\},$ and so $A-B$ has cardinality equal to infinity.



The point here is that when you start juggling infinities, you can make just about anything equal to just about anything. Which means almost everything is not well-defined. Once you had a divergent series in your hand, everything else was nonsense.


Sunday, September 23, 2018

calculus - Formula for $r+2r^2+3r^3+...+nr^n$




Is there a formula to get $r+2r^2+3r^3+\dots+nr^n$ provided that $|r|<1$? This seems like the geometric "sum" $r+r^2+\dots+r^n$ so I guess that we have to use some kind of trick to get it, but I cannot think of a single one. Can you please help me with this problem?


Answer



We have
$$1+r+r^2 + \cdots + r^n = \dfrac{r^{n+1}-1}{r-1}$$
Differentiating this once, we obtain
$$1+2r + 3r^2 + \cdots + nr^{n-1}= \dfrac{nr^{n+1}-(n+1)r^n + 1}{(r-1)^2}$$
Multiply the above by $r$ to obtain what you want (see the explicit form below).
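(Editorial completion of that last step.) Multiplying by $r$ gives $$r+2r^2+3r^3+\cdots+nr^n=\frac{nr^{n+2}-(n+1)r^{n+1}+r}{(r-1)^2},$$ and since $|r|<1$ the powers $r^{n+1}$, $r^{n+2}$ vanish as $n\to\infty$, leaving $\sum_{k=1}^{\infty}kr^k=\frac{r}{(1-r)^2}$.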


sequences and series - Easy summation question: $S= 1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\frac{1}{16}-\cdots$

While doing physics I encountered a sum I couldn't evaluate:
$$S= 1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\frac{1}{16}\cdots$$

Is there a particular formula for this sum, and does it converge?
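(Editorial note, as no answer was recorded for this post.) This is a geometric series with first term $1$ and ratio $r=-\frac12$; since $|r|<1$ it converges, and $$S=\sum_{n=0}^{\infty}\left(-\frac12\right)^{n}=\frac{1}{1-\left(-\frac12\right)}=\frac{2}{3}.$$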

Trigonometry limit proof: $\lim_{x\to0}\frac{\sin(x)+\sin(2x)+\cdots+\sin(kx)}{x}=\frac{k(k+1)}{2}$



How to prove that $$\lim_{x\to0}\frac{\sin(x)+\sin(2x)+\cdots+\sin(kx)}{x}=\frac{k(k+1)}{2}$$

I tried to split up the fraction and multiply-and-divide every new fraction by its $x$ factor, but it didn't work out. For example: $$\lim_{x\to 0}\frac{\sin(2x)}{x} = \lim_{x\to 0}\frac{\sin(2x)\cdot 2}{2\cdot x}=2$$


Answer



You are so close. Note that
\begin{align*}
\frac{\sin(x)+\sin(2x)+\cdots+\sin(kx)}{x}
&= \frac{\sin(x)}{x}+\frac{\sin(2x)}{x}+\cdots+\frac{\sin(kx)}{x} \\
&= \frac{\sin(x)}{x}+2\frac{\sin(2x)}{2x}+\cdots+k\frac{\sin(kx)}{kx} \\
&\to 1 + 2 + \cdots + k \\
&= \frac{k(k+1)}{2}

\end{align*}
as $x\to0$.



I suspect you may not have been able to finish because you didn't recognize the identity
$$
1 + 2 + \cdots + k = \frac{k(k+1)}{2}.
$$
This identity has a very cute proof.
Set $S:=1+2+\cdots+k$. Adding
\begin{align*}
1 + 2 + \cdots + k &= S \\
k + (k-1) + \cdots + 1 &= S
\end{align*}
gives
\begin{align*}
\underbrace{(k+1)+(k+1)+\cdots+(k+1)}_{k\ \text{times}} = 2S.
\end{align*}
Therefore $k(k+1)=2S$ and consequently
$$1+2+\cdots+k = S = \frac{k(k+1)}{2}.$$


calculus - Functional equation $f(xy)=f(x)+f(y)$ and continuity




Prove that if $f:(0,\infty)\to\mathbb{R}$ satisfies $f(xy)=f(x)+f(y)$, and if $f$ is continuous at $x=1$, then $f$ is continuous for $x>0$.



I let $y=1$ and find that $f(x)=f(x)+f(1)$, which implies that $f(1)=0$. So $\lim_{x\to1}f(x)=0$, but how can I use this to prove continuity of $f$ for every $x>0$?


Any help would be appreciated. Thanks.


Answer



Given $x_0>0$, $$f(x)-f(x_0)=f\left(x_0\cdot\frac{x}{x_0}\right)-f(x_0)=f\left(\frac{x}{x_0}\right),$$ and since $f$ is continuous at $x=1$: when $x\to x_0$, $\frac{x}{x_0}\to1$, so $$\lim\limits_{x\to x_0}f(x)=f(x_0).$$


Saturday, September 22, 2018

combinatorics - Which of these is larger by combinatorial reasoning



Determine which of the following expressions is larger using combinatorial reasoning: $$\binom{n}{k}-\binom{n-3}{k}$$ and $$\binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1}.$$ I can solve this algebraically; however, I need to determine which is larger using combinatorial reasoning, which is something I'm not very good at. If someone could provide some advice on how to approach these kinds of problems, it would be greatly appreciated. Thanks in advance :)


Answer



As usual let $[n]=\{1,2,\ldots,n\}$.



First, $\binom{n}k-\binom{n-3}k$ is the number of $k$-element subsets of $[n]$ that are not subsets of $[n-3]$, so it's the number of $k$-element subsets of $[n]$ that contain at least one of the numbers $n,n-1$, and $n-2$. Note that these are precisely the $k$-element subsets of $[n]$ that have one of $n,n-1$, and $n-2$ as largest element.



$\binom{n-1}{k-1}$ is the number of $(k-1)$-element subsets of $[n-1]$, so it's the number of $k$-element subsets of $[n]$ that have $n$ as largest element. $\binom{n-2}{k-1}$ is the number of $(k-1)$-element subsets of $[n-2]$, so it's the number of $k$-element subsets of $[n]$ that have $n-1$ as largest element. And $\binom{n-3}{k-1}$ is the number of $(k-1)$-element subsets of $[n-3]$, so it's the number of $k$-element subsets of $[n]$ that have $n-2$ as largest element. The sum of these three binomial coefficients is therefore the number of $k$-element subsets of $[n]$ that have one of $n,n-1$, and $n-2$ as largest element, and the two expressions are equal.



analysis - Prove that $f'$ exists for all $x$ in $R$ if $f(x+y)=f(x)f(y)$ and $f'(0)$ exists

A function $f$ is defined on $\mathbb R$, and $f'(0)$ exists.
Given that $f(x+y)=f(x)f(y)$, prove that $f'$ exists for all $x$ in $\mathbb R$.


I think I have to use two facts:
$f'(0)$ exists;
$f(x+y)=f(x)f(y)$.
How can I combine these two things to prove that statement?
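(Editorial hint.) Assuming $f$ is not identically zero, $f(0)=f(0)^2$ forces $f(0)=1$ (if $f(0)=0$ then $f(x)=f(x)f(0)=0$ for all $x$). The functional equation then turns the difference quotient at any $x$ into the one at $0$: $$f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}{h}=f(x)\lim_{h\to0}\frac{f(h)-f(0)}{h}=f(x)f'(0).$$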

complex numbers - Euler's identity: why is the $e$ in $e^{ix}$? What if it were some other constant like $2^{ix}$?


$e^{ix}$ describes a unit circle in polar coordinates on the complex plane, where x is the angle (in radians) counterclockwise of the positive real axis.


My intuition behind this is that $\frac{d}{dx}e^{ix}=i\cdot e^{ix}$. Since multiplication by i is a 90-degree rotation, we could think of $e^{ix}$ as the position vector of a particle and $\frac{d}{dx}e^{ix} = i\cdot e^{ix} $ as its velocity (x could be time). The velocity is always perpendicular to the position vector, so we have circular motion.


Hopefully I've described this well, see also http://betterexplained.com/articles/intuitive-understanding-of-eulers-formula/ if you don't understand where I'm coming from.


What I don't understand is why you need the "e" in Euler's identity. What if it were some other constant: for example, 2? You wouldn't get a circle, but how can I visualize what it is that you would get?


For example, what would $2^{ix}$ look like on the complex plane? I note that $2^{ix} = e^{ix\cdot \ln(2)}$, and we could substitute that into Euler's identity and get $e^{ix\cdot \ln(2)}=\cos(x\ln 2) + i\sin(x\ln 2)$.


So my question really has two related parts:


1) Why do we take e (and not some other number) to the power of ix to get a circle?


2) What would it look like if we took some other number to the power of ix? $e^{ix}$ really gives us a constant-radius spiral in three dimensions (e.g. http://www.songho.ca/math/euler/euler.html), what would $2^{ix}$ look like in complex 3d space? How could I have figured that out?



Thank you for your help.


Answer



We in general define (ignoring $a=0$)


$$a^{ix}\equiv e^{i x\ln a}=e^{ixR -xM}$$


where we take the principal value of the logarithm and let $\ln a= R+iM$ be split into real and imaginary parts.


If your only aim is to have the locus being a circle then any $a$ such that $M=0$ will do - equivalently, you need $a>0$. The only difference from the $e$ case is that the speed at which you go round the circle is rescaled by $R$. (Your nice idea of differentiation and noticing orthogonality still works here.)


If you want the circle to be traversed at a speed such that $x$ is $2\pi$ periodic then you need $R=1\iff a=e$.


If you consider negative $a$ or general imaginary $a$ then you can see from the above formula that you get a circle multiplied by a new term $e^{-xM}$ which stretches it as it's drawn. This makes a (logarithmic) spiral.


In 3D space, the rescaled circles become helices which are more or less stretched out (like springs are compressed or elongated). The imaginary ones give various spirals stretched out across space. Graph them if you're interested by defining parametric equations $x=t,y=e^{-Mt}\cos (Rt),z=e^{-Mt}\sin (Rt)$.


Thursday, September 20, 2018

elementary set theory - Circular reasoning in the hint of proving that if $n$ is a natural number greater than $1$, then $n-1$ is natural number



Problem Show that if $n$ is a natural number greater than 1, then $n-1$ is a natural number.



In Advanced Calculus 2nd edition by Fitzpatrick the author defines the set of natural numbers as the intersection of all inductive sets.



For this problem, he provides the following hint: Show that the set $\{n \, : \, n = 1 \text{ or } n \text{ is a natural number and } n-1 \text{ is a natural number}\}$ is inductive.




Question: Using the hint, it was easy to construct a proof. However, I'm wondering if the hint is circular. If we want to show that $n-1$ is a natural number, why is it okay to define a set with the property that $n-1$ is a natural number? It seems that we are assuming our conclusion to be true in order to demonstrate that the conclusion is true.



Thanks.



Here is the definition of an inductive set: $S$ is an inductive set if




  1. $1 \in S$ and

  2. If $x \in S$, then $x+1 \in S$.



Answer



Well, it is a rather clear and pretty strong hint: if you prove that the set given there is inductive, then, since the natural numbers have been defined as the intersection of all inductive sets, the set of natural numbers is contained in that set. But... (end the argument).



Nothing circular here.


linear algebra - Norm equivalence and the sequence limit of normalized vectors



Consider finite-dimensional vector spaces, on which two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ are always equivalent. Then a sequence of vectors ${\bf v}_k \to {\bf v}$ w.r.t. norm 1 iff ${\bf v}_k \to {\bf v}$ w.r.t. norm 2.



The question is about sequence $\frac{{{{\mathbf{v}}_k}}}{{{{\left\| {{{\mathbf{v}}_k}} \right\|}_1}}} \to \frac{{\mathbf{v}}}{{{{\left\| {\mathbf{v}} \right\|}_1}}}$ w.r.t. norm 1, will this limit imply $\frac{{{{\mathbf{v}}_k}}}{{{{\left\| {{{\mathbf{v}}_k}} \right\|}_2}}} \to \frac{{\mathbf{v}}}{{{{\left\| {\mathbf{v}} \right\|}_2}}}$ w.r.t. norm 2 (or maybe a weaker claim that one convergence implies the other but the limit might be different)? I have trouble showing this is true. If this is false, anyone can help provide a counterexample? Thanks!



For example, $(k,\sqrt k),k=1,2,...$ satisfies $\frac{{(k,\sqrt k )}}{{{{\left\| {(k,\sqrt k )} \right\|}_p}}} = \frac{{(k,\sqrt k )}}{{{{({k^p} + {k^{\frac{p}{2}}})}^{\frac{1}{p}}}}} \to (1,0)$ for any $p$-norm.


Answer




Suppose $v\ne0$. Let $u_k=v_k/\|v_k\|_1$ and $u=v/\|v\|_1\ne0$. By assumption, $u_k\to u$ (with respect to both norms, because all norms are equivalent) and hence $\|u_k\|_2\to \|u\|_2$ (because the norm function is continuous). Therefore
$$
\left\|\frac{u_k}{\|u_k\|_2}-\frac{u}{\|u\|_2}\right\|_2
\le \frac{1}{\|u_k\|_2}\|u_k-u\|_2
+\left|\frac{1}{\|u_k\|_2}-\frac{1}{\|u\|_2}\right|\|u\|_2 \to0.
$$
Consequently, $v_k/\|v_k\|_2=u_k/\|u_k\|_2\to u/\|u\|_2=v/\|v\|_2$.


real analysis - Trying to show that $e-\sum_{k=0}^{n}1/k!$ goes to $0$ faster than $n!$ goes to infinity as $n\to \infty$

This has been asked before, but I think people misunderstood my question.



For a better notation:

$e-\sum_{k=0}^{n}1/k! = e - \sum^n$ .



Having the following inequality:



$0 < n!(e-\sum^n) <1/n \tag{1}$



we can apply the squeeze theorem to show that $n!(e-\sum^n)$ goes to zero as $n$ goes to infinity.



If $n!(e-\sum^n)$ goes to zero when $n \to \infty$ this means that $\sum^n$ converges so quickly to $e$ that $(e-\sum^n)$ goes to $0$ faster than $n!$ goes to infinity as $n \to \infty$.




Is it possible to show this without relying on $(1)$?

calculus - Frullani integral for $f(x)=e^{x}$ in a complex context

I should prove the Frullani integral equality



$$
\int_{0}^{\infty} (1-e^{zx}) \frac{\beta}{x} e^{-\gamma x}dx = \beta \log \left(1- \frac{z}{\gamma}\right)
$$

for $z \in \mathbb{C}$ with non-positive real part.



I should first consider $z \leq 0$ and use
$$

\frac{e^{-\gamma x} - e^{-(\gamma - z)x}}{x}=\int_{\gamma}^{\gamma - z}e^{-y x}dy$$

and then change the order of integration. These steps are clear (see also Frullani integral for $f(x)=e^{-\lambda x }$).
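(Editorial aside, spelling out that real-case computation.) For $z\le0$ the integrand is nonnegative, so Tonelli's theorem justifies swapping the order of integration: $$\int_0^\infty\int_{\gamma}^{\gamma-z}e^{-yx}\,dy\,dx=\int_{\gamma}^{\gamma-z}\int_0^\infty e^{-yx}\,dx\,dy=\int_{\gamma}^{\gamma-z}\frac{dy}{y}=\log\left(1-\frac{z}{\gamma}\right),$$ and multiplying by $\beta$ gives the stated formula.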



But then I should use analytic extension to show that the formula is valid for $z \in \mathbb{C}$ with non-positive real part. I need help for this step.



Thank you in advance!

Wednesday, September 19, 2018

Integration of a function involving algebraic and trigonometric functions

Evaluate $$f(x) = \int_0^{\pi/2}\frac{1}{(1+x^2)(1+\tan{x})}dx$$



My attempt: I could not apply any standard method known to me to this integral. The only way I could think of is expressing $\tan(x)$ as an infinite series and expanding into a polynomial, but this will introduce approximation errors.
$$f(x) = \int_0^{\pi/2}\frac{1}{(1+x^2)(x + \frac{x^3}{3}+\frac{2x^5}{15}+...)}dx$$

$$or, f(x) = \int_0^{\pi/2}{(1+x^2)^{-1}(x + \frac{x^3}{3}+\frac{2x^5}{15}+...)^{-1}}dx$$
Please let me know how to solve this problem.

abstract algebra - The square roots of different primes are linearly independent over the field of rationals


I need to find a way of proving that the square roots of a finite set
of different primes are linearly independent over the field of
rationals.





I've tried to solve the problem using elementary algebra and also using the theory of field extensions, without success. Proving linear independence for the square roots of two primes is easy, but beyond that my problems arise. I would be very thankful for an answer to this question.

Tuesday, September 18, 2018

Binomial coefficient as a summation series proof?



Alright, so I was wondering if the following is a well known identity or if its existence provides any real benefits other than serving as a time-saver when dealing with higher values for combinations.



After screwing around with some basic combinations stuff, I noticed the following:



$$ \sum_{i=1}^{n-1} i = \begin{pmatrix}n\\2\\ \end{pmatrix}$$




To prove this, I used Gauss' method to simplify the summation, and I wrote n choose 2 in terms of factorials to simplify the right side.



$$ \frac{ (n-1) n } {2} = \frac{ n! } { (2!) (n-2)! } $$



$$ 2!(n-2)!(n-1)(n) = 2n! $$



$$ 2(n-2)!(n-1)(n) = 2n! $$



$$ (n-2)!(n-1)(n) = n! $$




$$ n! = n! $$



I did this on lunch break one day over the summer. I'm in high school, so my math skills are very subpar on this forum, but I was hoping some people might discuss it and/or answer my aforementioned questions. I didn't see anything about it on here or Google, for that matter. If you found this banal or rudimentary, just let me know and I'll refrain from posting until I come up with something more interesting. Regardless, I hope you found it worth your time.


Answer




I was wondering if the following is a well known identity




Not only is it well-known, but it's part of a much larger group. In general, we have





$$\sum_{k=0}^nk~(k+1)~\cdots~(k+p)~=~(p+1)!~{n+p+1\choose n-1}~=~(p+1)!~{n+p+1\choose p+2}$$




The whole idea is to rewrite the summand as $(p+1)!~\displaystyle{p+k\choose p+1}.~$ See also Faulhaber's formulas.


calculus - How was the differentiation formula derived for a composite trig function?




For example, derivative of this trig function is:



\begin{align}\frac{d}{dx} \left[\sec\left(\frac{x}{12}\right)\right]
&= \sec\left(\frac{x}{12}\right) \cdot \tan\left(\frac{x}{12}\right) \cdot \frac{d}{dx}\left(\frac{x}{12}\right) \\
&= \sec\left(\frac{x}{12}\right) \cdot \tan\left(\frac{x}{12}\right) \cdot \frac{1}{12}
\end{align}



But I fail to see which differentiation formula is being applied here.
It's not the chain rule, for sure.

If you do it by chain rule and assume it is a power of 1 then it'd look like this:



$$\frac{d}{dx} \left[\sec\left(\frac{x}{12}\right)\right]^1 = 1 \cdot \left(\sec\left(\frac{x}{12}\right)\right)^0 \cdot \frac{d}{dx} \left(\sec\left(\frac{x}{12}\right)\right) = 1 \cdot 1 \cdot \sec\left(\frac{x}{12}\right) \tan\left(\frac{x}{12}\right)$$



If the chain rule is not applied to this function like this because the function is "composite" (which is why it should be done as in the first variant), then how was the chain rule altered for this function in the first variant?


Answer



Actually, it is the chain rule. You are confusing the chain rule and the power rule. If $h(x)=f(g(x))$, the chain rule states:
$$h'(x)=f'(g(x))g'(x).$$



The power rule is just a special case of the chain rule. Namely, when $f(x)=x^n$.




To use the chain rule for your example. Let $f(x)=\sec x$ and $g(x)=\frac x{12}$.



$$h(x)=f(g(x))=f\left(\frac x{12}\right)=\sec\frac x{12}$$



So,
$$h'(x)=f'(g(x))g'(x)=\sec\frac x{12}\tan\frac x{12}\cdot\frac 1{12}=\frac1{12}\sec\frac x{12}\tan\frac x{12}.$$


algebra precalculus - What does the notation in this claim, that needs to be proved, mean?



Prove, or disprove using a counter example:


Let $p,q\in\mathbb{R}\setminus\mathbb{Q}$; then $p-q$ is irrational.


I don't understand what the backslash between the $\mathbb{R}$ and $\mathbb{Q}$ means.


Answer



The $\setminus$ sign means subtraction of sets: $$A\setminus B = \{x\in A \mbox{ such that } x\not\in B\}.$$


In your case, $\mathbb{R}\setminus\mathbb{Q}$ is the set of irrational numbers.


calculus - Why is $f(x,y)$ said to be discontinuous at $(0,0)$?



Why is
$f(x,y)=\begin{cases}
\frac{x^2y}{x^4+y^2}, & \text{if $(x,y)\neq (0,0)$}\\[2ex]
0, & \text{if $(x,y)=(0,0)$}
\end{cases}$ said to be discontinuous at $(0,0)$?



I am supposed to show that this function is not continuous at (0,0), but as $(x,y)$ approaches $(0,0)$, $f(x,y)$ approaches $0=f(0,0)$. So what did I miss here?


Answer



Let $y=x^{2}$ and consider $f(x,x^{2})=\frac{x^{4}}{2x^{4}}=\frac{1}{2}$. So $f$ is not continuous at $(0,0)$. (In fact the limit does not exist: plugging in $y=0$ gives the value $0$ along that path.)


probability - Intuitive/heuristic explanation of Polya's urn

Suppose we have an urn with one red ball and one blue ball. At each step, we take out a single ball from the urn and note its color; we then put that ball back into the urn, along with an additional ball of the same color.



This is Polya's urn, and one of the basic facts about it is the following: the number of red balls after $n$ draws is uniform over $\{1, \ldots, n+1\}$.



This is very surprising to me. While its not hard to show this by direct calculation, I wonder if anyone can give an intuitive/heuristic explanation why this distribution should be uniform.




There are quite a few questions on Polya's urn on math.stackexchange, but none of them seem to be asking exactly this. The closest is this question, where there are some nice explanations for why, assuming as above that we start with one red and one blue ball, the probability of drawing a red ball at the $k$'th step is $1/2$ for every $k$ (it follows by symmetry).
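(Editorial aside.) The uniform distribution is easy to confirm empirically; a minimal simulation sketch, with names of my own choosing:

    import random
    from collections import Counter

    def red_after(n):
        red, blue = 1, 1
        for _ in range(n):
            if random.random() < red / (red + blue):
                red += 1              # drew red: return it plus one more red
            else:
                blue += 1             # drew blue: return it plus one more blue
        return red

    n, trials = 10, 100_000
    freq = Counter(red_after(n) for _ in range(trials))
    for k in sorted(freq):
        print(k, freq[k] / trials)    # each close to 1/(n+1) = 1/11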

Monday, September 17, 2018

limits - Does n power of e grow much more faster than its Maclaurin polynomial?

I wonder how to calculate the following limit: $$ \lim_{n\rightarrow\infty}\frac{1+n+\frac{n^{2}}{2!}+\cdots +\frac{n^{n}}{n!}}{e^{n}} $$ At first sight I thought it should be zero, because an exponential grows much faster than a polynomial. But the numerator is the Maclaurin polynomial of $e^{n}$: as $n$ grows, it approaches $e^{n}$. Is there a rougher way to estimate the remainder of $e^{n}$ than $$ R_{n+1}(n)=\frac{\xi^{n+1}}{(n+1)!} \ \text{ for } \ \xi\in(n,+\infty)? $$ Because $$ \frac{1+n+\frac{n^{2}}{2!}+\cdots +\frac{n^{n}}{n!}}{e^{n}}=1-\frac{R_{n+1}(n)}{e^{n}}, $$ but it's hard to continue.

soft question - Create a Huge Problem

I am wondering if any problems have been designed that test a wide range of mathematical skills. For example, I remember doing the integral $$\int \sqrt{\tan x}\;\mathrm{d}x$$ and being impressed at how many techniques (substitution, trig, partial fractions etc.) I had to use to solve it successfully.




I am looking for suggestions/contributions to help build up such a question. For example, one problem could have as its answer $\tan x$ which would then be used in the integral, and something about the answer to the integral could lead into the next part.



The relevant subjects would be anything covered in the first few years of an undergraduate degree in mathematics.



The main reason I ask is that I want to work on developing some more integrated ways to practice mathematics "holistically" which I feel is very lacking in the current educational model.

indeterminate forms; definition


$\lim_{x\to 0} \frac{x}{x}$ is an indeterminate form, whereas $\lim_{x\to 0} \frac{[x^2]}{x^2}$ is not an indeterminate form (where $[x]$ represents the greatest integer function).



Why is $\lim_{x\to 0} \frac{[x^2]}{x^2}$ not an indeterminate form?


As far as I know, $\frac{[x^2]}{x^2}$ gives a $\frac{0}{0}$ at $0$. What is the exact definition of 'indeterminate form'? Wikipedia does not help answer this question.

calculus - Why is $1^{infty}$ considered to be an indeterminate form



From Wikipedia: In calculus and other branches of mathematical analysis, an indeterminate form is an algebraic expression obtained in the context of limits. Limits involving algebraic operations are often performed by replacing subexpressions by their limits; if the expression obtained after this substitution does not give enough information to determine the original limit, it is known as an indeterminate form.




  • The indeterminate forms include $0^{0},\frac{0}{0},(\infty - \infty),1^{\infty}, \ \text{etc}\cdots$




My question is can anyone give me a nice explanation of why $1^{\infty}$ is considered to be an indeterminate form? Because, i don't see any justification of this fact. I am still perplexed.


Answer



Forms are indeterminate because, depending on the specific expressions involved, they can evaluate to different quantities. For example, all of the following limits are of the form $1^{\infty}$, yet they all evaluate to different numbers.



$$\lim_{n \to \infty} \left(1 + \frac{1}{n^2}\right)^n = 1$$



$$\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e$$




$$\lim_{n \to \infty} \left(1 + \frac{1}{\ln n}\right)^n = \infty$$



To expand on this some (and this thought process can be applied to other indeterminate forms, too), one way to think about it is that there's a race going on between the expression that's trying to go to 1 and the expression that's trying to go to $\infty$. If the expression that's going to 1 is in some sense faster, then the limit will evaluate to 1. If the expression that's going to $\infty$ is in some sense faster, then the limit will evaluate to $\infty$. If the two expressions are headed toward their respective values at essentially the same rate, then the two effects sort of cancel each other out and you get something strictly between 1 and $\infty$.



There are some other cases, too, like
$$\lim_{n \to \infty} \left(1 - \frac{1}{\ln n}\right)^n = 0,$$
but this still has the expression going to $\infty$ "winning." Since $1 - \frac{1}{\ln n}$ is less than 1 (once $n > 1$), the exponentiation forces the limit to 0 rather than $\infty$.


elementary set theory - Bijection between [a,b) and [a,b] intervals.

I need help with finding a bijection between the two intervals $[a,b)$ and $[a,b]$, where $a,b \in \mathbb{R}$. I know how to construct a bijection if $a,b$ are integers, but I have no idea how to do it if they are real. Any ideas?




Edit: If anyone is still watching, can you tell me if I'm doing this right?



So the bijection will be: 1) $f(x) = a$ if $x = a$, and 2) $f(x) = \frac{b}{p-1}$ if $x \neq a$ and $x = \frac{b}{p}$, where $p \in \mathbb{R} \setminus \{0\}$.



By taking the p from R - {0} and not from N, I think this could work for real numbers.



Because a < b, every number from a to b can be written as b/p. So the function moves every number upwards if I express myself that way, thus reaching the b.
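(Editorial sketch of the standard "Hilbert hotel" construction, as no answer was recorded.) Pick any sequence of distinct points of $[a,b)$, say $s_n=b-\frac{b-a}{2^n}$ for $n\ge1$, and shift along it: $$g(s_1)=b,\qquad g(s_{n+1})=s_n,\qquad g(x)=x \text{ for } x\notin\{s_1,s_2,\dots\}.$$ Then $g:[a,b)\to[a,b]$ is a bijection: $b$ is hit by $s_1$, each $s_n$ is hit by $s_{n+1}$, and every other point is fixed. This works for arbitrary real $a<b$.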

sequences and series - Recursive square root inside square root problem

I have been debating this issue for days:



I can't find a recursive function of this equation:



$\large{\sqrt{2+\pi \sqrt{3+\pi\sqrt{4+\pi\sqrt{5+\dotsb}}}}}$




I have been trying to find a solution to this for days now; this is what I have achieved so far:



$f(n)=\sqrt{2 f(n-1)}, f(1)=\sqrt{2}$



Unfortunately, I do not know how to move forward.
Thanks a lot!
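(Editorial aside.) The nesting naturally unwinds from the inside out, so one way to attach a value is to truncate at depth $N$ and work backwards; a minimal numerical sketch:

    import math

    def nested(N):
        # sqrt(2 + pi*sqrt(3 + pi*sqrt(4 + ... + pi*sqrt(N)))), inside out
        v = 0.0
        for k in range(N, 1, -1):    # k = N, N-1, ..., 2
            v = math.sqrt(k + math.pi * v)
        return v

    print(nested(10), nested(20), nested(40))   # the truncations stabilize quickly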

Sunday, September 16, 2018

elementary number theory - Divisibility and congruence.



How to prove that:
$$32 | \phi(51^5) \tag{1}$$
and
$$51^{442} \equiv 2 \mod 31\tag{2}$$
Thanks in advance.



Answer



For the first:
$51 = 3\cdot 17$. Use this to compute
$$\phi(51^5) = \phi(3^5 17^5) = 2 \cdot 3^4 \cdot 16 \cdot 17^4 = 2^5 3^4 17^4$$
Can you see now why $32|\phi(51^5)$?
The second one is false:
$$51^{442} \equiv 20^{22} \equiv 28^{10}\cdot 28 \equiv 9^4 \cdot 9 \cdot 28 \equiv 19^2 \cdot 4 \equiv 20\cdot 4 \equiv 18 \pmod{31}$$
The algorithm used here is called square-and-multiply in case you want to know how to compute such modular powers efficiently; It should be obvious that $2\not\equiv 18\pmod{31}$.
Indeed if you want
$$51^k \equiv 2 \pmod{31}$$
You must chose $k$ such that $k\equiv 3\pmod{15}$, because the discrete logarithm and order are
$$\log_{51}2 \pmod{30} = 3\\

\mathrm{ord}_{31}(51) = 15$$
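(Editorial aside, since the answer names the algorithm.) A minimal square-and-multiply sketch; Python's built-in pow(a, k, m) computes the same thing:

    def power_mod(a, k, m):
        result = 1
        a %= m
        while k:
            if k & 1:                  # low bit set: multiply in the current square
                result = result * a % m
            a = a * a % m              # square
            k >>= 1
        return result

    print(power_mod(51, 442, 31))      # 18, matching the computation above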


Importance of the homogeneity assumption in definition of linear map


Let $V$ and $W$ be vector spaces over field $F$. A function $f: V \rightarrow W$ is said to be linear if for any two vectors $x$ and $y$ in $V$ and any scalar $\alpha\in F$, the following two conditions are satisfied:


  1. $f(x + y) = f(x) + f(y)$

  2. $f(\alpha x) = \alpha f(x)$

Let $F$ be a field of real numbers. Is it possible to construct $f$ such that the first condition is satified but not the second one?



Answer



Yes, it is possible to construct examples of $f$ that are additive but not homogeneous. However, the construction won't be explicit and depends on the axiom of choice. See here for details.


If you want an explicit example of an additive map that is not homogeneous over a different field, let $\mathbb{F} = \mathbb{C}$ and consider the map $\varphi \colon \mathbb{C} \rightarrow \mathbb{C}$ given by complex conjugation:


$$ \varphi(z) = \varphi(x + iy) = \overline{z} = x - iy. $$


The map $\varphi$ is additive and $\mathbb{R}$-homogeneous but not $\mathbb{C}$-homogeneous, since $\varphi(az) = \overline{az} = \overline{a}\,\overline{z} = \overline{a}\,\varphi(z)$, which differs from $a\varphi(z)$ whenever $a\notin\mathbb{R}$ and $z\neq0$.


inequality - How can I prove that $x-\frac{x^2}{2} < \ln(1+x)$


How can I prove that $$\displaystyle x-\frac {x^2} 2 < \ln(1+x)$$ for any $x>0$


I think it's somehow related to Taylor expansion of natural logarithm, when:


$$\displaystyle \ln(1+x)=\color{red}{x-\frac {x^2}2}+\frac {x^3}3 -\cdots$$


Can you please show me how? Thanks.



Answer



Hint:


Prove that $\ln(1 + x) - x + \dfrac{x^2}2$ is strictly increasing for $x > 0$.


edit: to see why this isn't a complete proof, consider $x^2 - 1$ for $x > 0$. It's strictly increasing; does that show that $x^2 > 1$? I hope not, because it's not true!
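(Editorial completion of the hint.) With $g(x)=\ln(1+x)-x+\dfrac{x^2}2$, one computes $$g'(x)=\frac{1}{1+x}-1+x=\frac{x^2}{1+x}>0 \quad\text{for } x>0,$$ and since $g(0)=0$, strict monotonicity gives $g(x)>0$ for $x>0$, i.e. $x-\frac{x^2}2<\ln(1+x)$. The value $g(0)=0$ is exactly the anchor missing from the $x^2-1$ cautionary example.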


finite fields - Construction of addition and multiplication table for GF(4)

I am dealing with finite fields and have somehow gotten stuck. The construction of a prime field $GF(p)$, $p \in \mathbb{P}$, is pretty easy because every operation is modulo $p$. In other words, $GF(p)$ contains all integers ranging from $0$ to $p-1$.


However, non-prime fields are a bit trickier. Given the power $q = p^n$ with $p \in \mathbb{P}$ and $n \in \mathbb{N}$, one has to find an irreducible polynomial $g(x)$ of degree $n$. Then the construction of $GF(p^n)$ is the following:


$GF(p^n) = \frac{GF(p)[x]}{g}$


Theoretically I get along with this definition. Unfortunately, I fail to construct addition and multiplication tables for any $GF(q)$. Though I can easily find the desired table on the internet, I have not yet found an explanation that really made me understand.



I would like to know how to create the addition and multiplication table for $GF(2^2)$ with the above knowledge. $GF(2^2)$ contains four elements; let's call them $\{0,1,\alpha,\beta\}$. $g$ must be $x^2 + x + 1$, as there is no other irreducible polynomial of degree 2. So far I am able to construct the addition table partly (question marks indicating despair...):


| $+$      | $0$      | $1$      | $\alpha$ | $\beta$  |
| $0$      | $0$      | $1$      | $\alpha$ | $\beta$  |
| $1$      | $1$      | $0$      | ?        | ?        |
| $\alpha$ | $\alpha$ | ?        | ?        | ?        |
| $\beta$  | $\beta$  | ?        | ?        | ?        |


I don't understand how to calculate, for example, $1+\alpha$. The result is $\beta$, but I don't know why. According to the above short explanation, I have to divide $1+\alpha$ by $g$. But how can I do this?


Thanks for your help
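(Editorial note.) In $\Bbb F_2[x]/(g)$ with $g=x^2+x+1$, the element $\alpha$ is the class of $x$, so $1+\alpha$ is simply the class of $1+x$: no division by $g$ is needed, because the degree is already below $2$, and that element is the one the table calls $\beta$, i.e. $\beta=\alpha+1$. Reduction mod $g$ only enters for products whose degree reaches $2$, e.g. $$\alpha\cdot\alpha=x^2\equiv x+1=\beta \pmod{x^2+x+1},$$ since $x^2=(x^2+x+1)+(x+1)$ over $\Bbb F_2$.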

Showing that complex exponentials of the Fourier Series are an orthonormal basis

I am revisiting the Fourier transform and I found great lecture notes by Professor Osgood from Stanford (pdf ~30MB).



On page 30 and 31 he show that the complex exponentials form an orthonormal basis. I understand the result, but not his calculation. He shows that the inner product of two different exponentials $(e_n (t) = e^{2\pi int}, e_m(t)= e^{2\pi imt})$ with $m \neq n$ is $0$ (He uses round parenthesis to denote the inner product). So, he does the calculation:



$$ (e_n, e_m) = \int_0^1 e^{2\pi int} \overline{e^{2\pi imt}}dt = \dots = \frac{1}{2\pi i(n -m)} (e^{2\pi i(n - m)} - e^0) = \frac{1}{2\pi i(n -m)} (1 - 1) = 0$$



So why is $e^{2\pi i(n - m)} = 1$? And why do I have to look at the case $ m = n$ separately and cannot also just plug it into the last step? (I am aware that I would not get a sensible result then, but it still seems strange). Does this have anything to do because I am using Lebesgue integration and complex numbers? I suppose I have to review some math basics ...
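(Editorial note on the two sub-questions.) For an integer $k=n-m\neq0$, Euler's formula gives $$e^{2\pi i k}=\cos(2\pi k)+i\sin(2\pi k)=1,$$ which is why the bracket vanishes; this has nothing to do with Lebesgue integration, only with the periodicity of the complex exponential. The case $m=n$ must be treated separately because the omitted antiderivative step divides by $n-m$; there the integrand is $e^{2\pi int}\,\overline{e^{2\pi int}}=|e^{2\pi int}|^2=1$, so the inner product is $1$, as orthonormality requires.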

linear algebra - Is there a good intuitive way to understand why matrix B is the inverse of A when [A|I] is turned into [I|B]


I'm looking for some help with my intuition about basic matrix operations, specifically finding a matrix's inverse (as per my subject line). I have no problems with the steps; the basic row operations are relatively simple. I'd like to understand why/how this solves the system of linear equations.


I know my question is asking more (or arguably less) than a concrete sequence of steps, a theorem, etc. But I think someone who understands linear algebra much better than I can get through to me better than my texts' treatment, which is little more than a worked example.


Thanks in advance..


Answer



Doing the row operations that change $[A|I]$ to $[I|B]$ stores up the row operations necessary to "untangle" $A$, compiling those operations into the matrix $B$. You can think of $B$ as an executable program that undoes $A$.


Saturday, September 15, 2018

calculus - Integrating by splitting up trig functions

The problem is $$\int \cos^3 (x) \sin^5 (x) dx.$$



The method we've gone over in class involves splitting up the trig functions so that we can set one part to $u$ and then take $du$ so that $du$ matches a part of the integrand. But I'm struggling to find these. Any explanation is helpful.

combinatorics - Simplify $\binom{m+n}{2} - \binom{m}{2} - \binom{n}{2}$.

Simplify $\binom{m+n}{2} - \binom{m}{2} - \binom{n}{2}$.






I'm confused about how to approach this problem. I can't think of any counting argument that will help me, nor of any 1-1 correspondence. All solutions are appreciated.
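For what it's worth, both the algebra and the counting come out quickly. Expanding,
$$\binom{m+n}{2}-\binom{m}{2}-\binom{n}{2}=\frac{(m+n)(m+n-1)-m(m-1)-n(n-1)}{2}=\frac{2mn}{2}=mn.$$
Combinatorially: $\binom{m+n}{2}$ counts pairs drawn from a group of $m$ people and a group of $n$ people; removing the pairs internal to each group leaves exactly the mixed pairs, of which there are $mn$.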

trigonometry - In $ triangle ABC$ show that $ 1 lt cos A + cos B + cos C le frac 32$



Here is what I did, tell me whether I did correct or not:
\begin{align*}
y &= \cos A + \cos B + \cos C\\

y &= \cos A + 2\cos\left(\frac{B+C}2\right)\cos\left(\frac {B-C}2\right)\\
y &= \cos A + 2\sin\left(\frac A2\right)\cos\left(\frac {B-C}2\right)
&& \text{since $A+B+C = \pi$}
\end{align*}



Now for maximum value of $y$ if we put $\cos\left(\frac {B-C}2\right) = 1$ then
\begin{align*}
y &\le \cos A + 2\sin\left(\frac A2\right)\\
y &\le 1-2\sin^2\left(\frac A2\right) + 2\sin\left(\frac A2\right)
\end{align*}




By completing the square we get



$$y \le \frac 32 - 2\left(\sin\frac A2 - \frac 12\right)^2$$



$y_{\max} = \frac 32$ at $\sin\frac A2 = \frac 12$, and $y > 1$ because $\sin \frac A2>0$ (since $0 < \frac A2 < \frac{\pi}{2}$ in a triangle).



Is this solution correct? If there is a better solution then please post it here. Help!


Answer



We can prove $$\cos A+\cos B+\cos C=1+4\sin\frac A2\sin\frac B2\sin\frac C2$$




Now, $0<\dfrac A2<90^\circ\implies\sin\dfrac A2>0$



For the other part,



$$y=1-2\sin^2\frac A2+2\sin\frac A2\cos\frac{B-C}2$$



$$\iff2\sin^2\frac A2-\sin\frac A2\cdot2\cos\frac{B-C}2+y-1=0$$ which is a Quadratic equation in $\sin\dfrac A2$ which is real



So, the discriminant must be $\ge0$




i.e., $\left(2\cos\dfrac{B-C}2\right)^2-4\cdot2(y-1)\ge0$



$\iff 4y\le4+2\cos^2\dfrac{B-C}2=4+1+\cos(B-C)\le4+1+1$



The equality occurs iff $\cos(B-C)=1\iff B=C$, as $0<B,C<\pi$,

where $\sin\dfrac A2=\dfrac12\implies\dfrac A2=30^\circ$ as $0<\dfrac A2<90^\circ$


geometry - Vector path length of a hypotenuse

[Figure 1: a right triangle with legs of length $1$, with a red staircase path zigzagging from $A$ to $B$]



Consider the red path from A that zigzags to B, which takes $n$ even steps of length $w$. The path length of the route $P_n$ will be equal to:




$ P_n = P_x + P_y = \frac{n}{2}\times w + \frac{n}{2}\times w = n \times w $



But $\frac{n}{2}\times w = 1$ because it is the length of one of the sides of the triangle, so:



$P_n = 2$



This is true no matter how many steps you take. However, in the limit $n \to \infty, w \to 0$, the path length $P_\infty$ suddenly becomes:



$P_\infty = \sqrt{1^2 + 1^2} = \sqrt{2}$




Due to Pythagoras. Why is this the case? It seems the path length suddenly decreases by about $0.59$!
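A small numeric illustration (my own sketch) of why nothing "suddenly" happens: the zigzag has length exactly $2$ for every finite $n$, so the limit of the lengths is $2$, while the length of the limit curve is $\sqrt 2$; pointwise (even uniform) convergence of curves does not imply convergence of arc length.

```python
# Build the staircase from A = (0, 1) to B = (1, 0) as a polyline with
# n steps of width w = 2/n and measure its length segment by segment.
import math

def staircase_length(n):
    assert n % 2 == 0
    w = 2.0 / n
    pts = [(0.0, 1.0)]
    x, y = 0.0, 1.0
    for _ in range(n // 2):
        x += w; pts.append((x, y))   # horizontal step
        y -= w; pts.append((x, y))   # vertical step
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

for n in (4, 100, 10_000):
    print(n, staircase_length(n))    # always 2.0, no matter how fine
print(math.sqrt(2))                  # ~1.41421, length of the hypotenuse
```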

Thursday, September 13, 2018

complex analysis - The idea behind the proof of Riemann mapping theorem



The proof of Riemann mapping theorem (if $U\subsetneq\mathbb C$ is simply connected then it's conformally equivalent to the unit disk $D$) goes roughly as follows:




  • Consider the family $F$ of all injective functions $U\rightarrow D$ taking a given point $a$ to $0$.

  • Show that $F$ is nonempty.

  • The map $f\mapsto |f'(a)|$ is continuous and $F$ is uniformly bounded, so by Montel's theorem there is an $f_0\in F$ maximizing $|f_0'(a)|$.

  • Show that $f_0$ is onto $D$.




The two tricky bits of the proof are the second and fourth bullets. Here are the ideas behind the proofs of the two:




  • We may WLOG assume $0\not\in U$. Then there is a branch $g$ of a square root defined on $U$. Then $g$ is injective, $-g(U)$ is open and disjoint from $g(U)$, so $g(U)$ is disjoint from some closed disk. The complement of that disk can then be mapped into $D$, and composing with a suitable Möbius transformation maps $a$ to $0$.

  • Suppose $b\in D\setminus f_0(U)$. Then the map
    $$\psi(z)=\sqrt{\frac{f_0(z)-b}{1-\overline{b}f_0(z)}}$$
    (the inside function doesn't vanish, so we can find a branch of its square root on $U$) takes $U$ into the unit disk and hence
    $$h(z)=\frac{\psi(z)-\psi(a)}{1-\overline{\psi(a)}\psi(z)}$$
    is in $F$. We then find

    $$|h'(a)|=\frac{1+|b|}{2\sqrt{|b|}}|f_0'(a)|>|f_0'(a)|$$
    contradicting the choice of $f_0$.



I can see the reason behind proving the former the way we do - we want to map $U$ into a set the complement of which has nonempty interior. The square root suits this perfectly since $g(U)$ and $-g(U)$ are always disjoint for a branch of square root (we could similarly take a branch of logarithm, then $g(U)$ and $g(U)+2\pi i$ are disjoint).



However, the last part of the proof feels to me like a magic trick. I thought about it several times, but I couldn't figure out anything to explain this course of action in the proof. Clearly we would like to increase a derivative somehow, but confirming this with the function constructed above requires a computational effort, not something visible at a glance.



I was wondering, how to "justify" construction of this function in the last part of the proof (possibly by some geometric argument)? Alternatively, is it possible to argue towards existence of a function with a larger derivative, without necessarily constructing it?


Answer




Think of the hyperbolic distance. A holomorphic $g \colon D \to D$ never increases the hyperbolic distance, and if there is one pair of points $a \neq b$ such that the hyperbolic distance between $g(a)$ and $g(b)$ equals the hyperbolic distance between $a$ and $b$, then $g$ is an automorphism of the disk.



So $z \mapsto z^2$ strictly decreases the hyperbolic distance. Hence, if we have a domain $V \subset D$ on which a branch $r$ of the square root is defined (in particular a simply connected domain not containing $0$), then $r$ strictly increases the hyperbolic distance between any two points in $V$. This remains true when we compose $r$ with automorphisms of $D$, since those leave hyperbolic distance invariant. Now $W = f_0(U)$ is a simply connected proper subdomain of $D$, so we can use an automorphism $T$ of $D$ to move it so that the image doesn't contain $0$, apply $r$, and then apply another automorphism $S$ of $D$, so that $(S\circ r \circ T)(0) = 0$. Then $h = S\circ r \circ T\colon W \to D$ strictly increases hyperbolic distance, and fixes $0$. That implies $\lvert h'(0)\rvert > 1$.


Wednesday, September 12, 2018

calculus - Functional equation $f(xy)=f(x)+f(y)$ and continuity




Prove that if $f:(0,\infty)→\mathbb{R}$ satisfying $f(xy)=f(x)+f(y)$, and if $f$ is continuous at $x=1$, then $f$ is continuous for $x>0$.




I let $y=1$ and find that $f(x)=f(x)+f(1)$, which implies that $f(1)=0$. So, $\lim_{x\to1}f(x)=0$, but how can I use this to prove continuity of $f$ for every $x > 0$?




Any help would be appreciated. Thanks


Answer



Given $x_0>0$,
$$f(x)-f(x_0)=f\left(x_0\cdot\frac{x}{x_0}\right)-f(x_0)=f\left(\frac{x}{x_0}\right).$$
Since $f$ is continuous at $x=1$ and $\frac{x}{x_0}\to1$ as $x\to x_0$, it follows that
$$\lim\limits_{x\to x_0}f(x)=f(x_0).$$


algebra precalculus - The drying water melon puzzle


I couldn't find an explanation to this problem that I could understand.


A watermelon consist of 99% water and that water measures 2 litre. After a day in the sun the water melon dries up and now consist of 98% water. How much water is left in the water melon?


I know the answer is ~1 litre, but why is that? I've read a couple of answers but I guess I'm a bit slow because I don't understand why.


EDIT
I'd like you to assume that I know no maths. Explain it like you would explain it to a 10 year old.


Answer



At the beginning the solid material is $1\%$ of the total volume, which is a trifle (to be neglected) more than the $2000\ {\rm cm}^3$ of water. Therefore the solid material has volume $\sim20\ {\rm cm}^3$.


After one day in the sun these $20\ {\rm cm}^3$ of solid material are still the same, but now they make up $2\%$ of the total. Therefore the total now will be $1000\ {\rm cm}^3$, or $1$ litre. $98\%$ of this volume, or almost all of it, will be water.
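In symbols (a slight tidy-up of the same argument, keeping the neglected trifle): with $V_0$ and $V_1$ the total volumes before and after, the solid part is what is conserved:
$$0.01\,V_0 = 0.02\,V_1 \implies V_1 = \frac{V_0}{2},$$
and since $V_0 = 2000/0.99 \approx 2020\ \mathrm{cm}^3$, we get $V_1 \approx 1010\ \mathrm{cm}^3$, of which $98\%$, about $990\ \mathrm{cm}^3 \approx 1$ litre, is water. The surprise lives entirely in the step "the non-water fraction doubles, so the total halves."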


calculus - Can we determine an oblique asymptote of a function by the limit of $f'(x)$?


Some references show that to find an oblique asymptote of a function $f(x)$, we must examine the limit $$ m = \lim_{x \rightarrow \pm \infty} \frac{f(x)}{x}. $$ If $m \ne 0$ and finite, then there is an oblique asymptote of the form $y = mx + c$. However, I think it would be more intuitive to look at the limit $$ \lim_{x \rightarrow \pm \infty} f'(x). $$ If this limit exists, then we can determine the asymptote.


Question : Am I correct if I generalize the 2nd one for finding an oblique asymptote?


I have not seen any reference use the second one (the limit of $f'$) for finding an oblique asymptote, but it is more intuitive. We can also see from the first one that $\lim \limits_{x \rightarrow \pm \infty} \frac{f(x)}{x}$ has the indeterminate form $\frac{\infty}{\infty}$, so by L'Hôpital it can be equal to $\lim f'(x)$.


Thanks in advance.


Answer




For a more straightforward counterexample, take $f(x) = \ln(x)$.


Its derivative $f'(x)=\frac{1}{x}$ limits to $0$ as $x \to +\infty$.


But this function has no horizontal asymptote. In fact, $\lim_{x \to \infty} \ln(x) = +\infty$ so the graph goes arbitrarily far above every horizontal line as $x \to +\infty$.


You can modify this example in many ways. For instance, $f(x) = x + \ln(x)$ has derivative limiting to $1$ as $x \to +\infty$, but the graph of $y=f(x)$ goes arbitrarily far above every slope 1 line as $x \to +\infty$, hence it has no slope 1 asymptote.


abstract algebra - The square roots of different primes are linearly independent over the field of rationals


I need to find a way of proving that the square roots of a finite set
of different primes are linearly independent over the field of
rationals.





I've tried to solve the problem using elementary algebra
and also using the theory of field extensions, without success. To
prove linear independence of two primes is easy but then my problems
arise. I would be very thankful for an answer to this question.

trigonometry - How is $frac{sin(x)}{x} = 1$ at $x = 0$


I have a function: $$\text{sinc}(x) = \frac{\sin(x)}{x}$$ and the example says that: $\text{sinc}(0) = 1$, How is it true?


I know that $\lim\limits_{x \to 0} \frac{\sin(x)}{x} = 1$, But the graph of the function $\text{sinc}(x)$ shows that it's continuous at $x = 0$ and that doesn't make sense.


Answer



In an elementary book, they should define $\mathrm{sinc}$ like this $$ \mathrm{sinc}\; x = \begin{cases} \frac{\sin x}{x}\qquad x \ne 0 \\ 1\qquad x=0 \end{cases} $$ and then immediately prove that it is continuous at $0$.


In a slightly more advanced book, they will just say $$ \mathrm{sinc}\;x = \frac{\sin x}{x} $$ and the reader will understand that removable singularities should be removed.


Tuesday, September 11, 2018

functions - Bijection from $[0,1]^3$ to $[0,1]$?


Is there any bijection from $[0,1]^3$ to $[0,1]$? How can I construct it?


Answer



Hint:


If there exists a surjection from $A$ to $B$ and a surjection from $B$ to $A$, then there exists a bijection between $A$ and $B$. In your case, space-filling curves are surjections from $[0,1]$ to $[0,1]^3$. It should be easy to find a surjection going the other way.


integration - A direct proof for $int_0^x frac{- x ln(1-u^2)}{u sqrt{x^2-u^2}} , mathrm{d} u = arcsin^2(x)$


I have been trying to evaluate $$ f(x) \equiv \int \limits_0^\infty - \ln\left(1 - \frac{x^2}{\cosh^2 (t)}\right) \, \mathrm{d} t $$ for $x \in [0,1]$ and similar integrals recently. I know that $$ \int \limits_0^\infty \frac{\mathrm{d} t}{\cosh^z (t)} = \frac{2^{z-2} \Gamma^2 (\frac{z}{2})}{\Gamma(z)} $$ holds for $\operatorname{Re} (z) > 0$, so by expanding the logarithm I found that $$ f(x) = \frac{1}{2} \sum \limits_{n=1}^\infty \frac{(2n)!!}{n^2 (2n-1)!!} x^{2n} \, .$$ But the right-hand side is the power series of the arcsine squared, so $f(x) = \arcsin^2 (x)$.


On the other hand, the substitution $u = \frac{x}{\cosh(t)}$ in the original integral leads to the representation $$ f(x) = \int \limits_0^x \frac{- x \ln(1-u^2)}{u \sqrt{x^2-u^2}} \, \mathrm{d} u \, ,$$ for which Mathematica (or WolframAlpha if you're lucky) gives the correct result.


I would like to compute this integral without resorting to the above power series and thereby find an alternative proof for the expansion. I have tried to transform the integral into the usual form $$ \arcsin^2 (x) = \int \limits_0^x \frac{2 \arcsin(y)}{\sqrt{1-y^2}} \, \mathrm{d} y $$ and thought about using the relations $$ \arcsin(x) = \arctan\left(\frac{x}{\sqrt{1-x^2}}\right) = 2 \arctan\left(\frac{x}{1+\sqrt{1-x^2}}\right) \, , $$ but to no avail. Maybe the solution is trivial and I just cannot see it at the moment, maybe it is not. Anyway, I would be grateful for any ideas or hints.


Answer



I have finally managed to put all the pieces together, so here's a solution that does not use the power series:


Let $u = x v$ to obtain $$ f(x) = \int \limits_0^1 \frac{- \ln(1 - x^2 v^2)}{v \sqrt{1-v^2}} \, \mathrm{d} v \, . $$ Now we can differentiate under the integral sign (justified by the dominated convergence theorem) and use the substitution $v = \sqrt{1 - w^2}\, .$ Then the derivative is given by \begin{align} f'(x) &= 2 x \int \limits_0^1 \frac{v}{(1-x^2 v^2) \sqrt{1-v^2}} \, \mathrm{d} v = 2 x \int \limits_0^1 \frac{\mathrm{d} w }{1-x^2 + x^2 w^2} \\ &= \frac{2}{\sqrt{1-x^2}} \arctan \left(\frac{x}{\sqrt{1-x^2}}\right) = \frac{2 \arcsin (x)}{\sqrt{1-x^2}} \end{align} for $x \in (0,1)$. Since $f(0)=0 \, ,$ integration yields $$ f(x) = f(0) + \int \limits_0^x \frac{2 \arcsin (y)}{\sqrt{1-y^2}} \, \mathrm{d} y = \arcsin^2 (x)$$ for $x \in [0,1]$ as claimed.



trigonometry - If $x costheta+ysintheta=a$ and $xsintheta-ycostheta=b$, then $tantheta=frac{bx+ay}{ax-by}$. (Math Olympiad)

I tried to solve it but I can’t get the answer. Please help me in proving this trig identity:




If
$$x \cos\theta+y\sin\theta=a$$
$$x\sin\theta-y\cos\theta=b$$
then $$\tan\theta=\frac{bx+ay}{ax-by}$$





I've spent many hours trying.
Thanks in advance.
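One route that avoids solving for $\theta$ (a sketch): substitute the given expressions for $a$ and $b$ directly into $bx+ay$ and $ax-by$. The cross terms cancel:
$$bx+ay = x(x\sin\theta - y\cos\theta) + y(x\cos\theta + y\sin\theta) = (x^2+y^2)\sin\theta,$$
$$ax-by = x(x\cos\theta + y\sin\theta) - y(x\sin\theta - y\cos\theta) = (x^2+y^2)\cos\theta,$$
so dividing the two (for $x^2+y^2 \neq 0$ and $\cos\theta\ne0$) gives $\tan\theta=\dfrac{bx+ay}{ax-by}$.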

Multiplicative inverses for elements in field


How can I compute multiplicative inverses of elements in a simple (prime, not extension) finite field? I mean an algorithm that can be implemented in software.


Answer



The unit group of the finite field of order $q$ is a cyclic group of order $q-1.$ Thus, for any $a \in \mathbb{F}_q^{\times},$ $$a^{-1} = a^{q-2}.$$
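A minimal Python sketch of this formula (the function name `inverse_mod_p` is mine): for a prime field $GF(p)$ the built-in three-argument `pow` does the fast exponentiation. The extended Euclidean algorithm is the usual alternative and works in any $\mathbb{Z}/n$ with $\gcd(a,n)=1$; from Python 3.8 onward, `pow(a, -1, n)` computes that directly.

```python
def inverse_mod_p(a: int, p: int) -> int:
    # In GF(p): a^(p-1) = 1 for a != 0, hence a^(-1) = a^(p-2) mod p.
    if a % p == 0:
        raise ZeroDivisionError("0 has no inverse in GF(p)")
    return pow(a, p - 2, p)

print(inverse_mod_p(3, 7))    # 5, since 3 * 5 = 15 = 1 (mod 7)
print(pow(3, -1, 7))          # same result via the built-in (Python >= 3.8)
```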


abstract algebra - Greatest common divisor of $X^n-1$ and $X^m-1$

Let $f=X^n-1$ and $g=X^m-1$ be two polynomials. Show that: $$\left(f,g\right)=X^{\left(n,m\right)}-1,$$ where $\left(a,b\right)=$ greatest common divisor of $a$ and $b$.

Monday, September 10, 2018

modular arithmetic - How to compute $(a+1)^bpmod{n}$ using $a^bpmod{n}$?

As we know, we can compute $a^b \pmod{n}$ efficiently using the right-to-left binary method of modular exponentiation.
Assume $b$ is a prime number.
Can we compute $(a+1)^b\pmod{n}$ directly from $a^b\pmod{n}$?
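For reference, here is a sketch of the right-to-left binary method the question mentions (Python's built-in `pow(a, b, n)` does the same job). As for the question itself: the binomial theorem expands $(a+1)^b$ in terms of all of $a^0,\dots,a^b$, so knowing $a^b \pmod n$ alone does not obviously help; in practice one simply runs the fast exponentiation again on the new base.

```python
def power_mod(a: int, b: int, n: int) -> int:
    # Right-to-left binary (square-and-multiply) modular exponentiation.
    result, base = 1, a % n
    while b > 0:
        if b & 1:                  # current exponent bit is 1
            result = (result * base) % n
        base = (base * base) % n   # square for the next bit
        b >>= 1
    return result

assert power_mod(7, 128, 13) == pow(7, 128, 13)
assert power_mod(8, 128, 13) == pow(8, 128, 13)   # (a+1)^b: just rerun it
```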

real analysis - Why differentiability implies continuity, but continuity does not imply differentiability?


Why does differentiability imply continuity, but continuity does not imply differentiability?




I am more interested in the part about a continuous function not being differentiable.



Well, all I could find regarding why continuous functions need not be differentiable were counterexamples...




I just wanted to know if there was a more detailed way of explaining this.

complex numbers - Why doesn't $(x^a)^b$ always equal $x^{ab}$



In middle school, I was taught that $(x^a)^b = x^{ab}$. Of course, we were only looking at $a, b \in \mathbb{N}$.



So today I was playing around with squares, and I tried to do this:
$$(x^\sqrt{2})^{\sqrt{2}} = x^{\sqrt{2}\cdot\sqrt{2}} = x^2$$
But then doing some tests in Wolfram alpha told me that I'm wrong for $x<0$. I know that we're dealing with complex numbers here, but I don't see why this would break that law.




My question is: Why can't I use this law in this situation? Is there an intuitive explanation for this?


Answer



For $a>0$ and $b\in \Bbb N$, $a^0 = 1$ and $a^b = a \times \cdots \times a$ ($b$ times) for $b \ge 1$. For $b \in\Bbb Z$, $a^b = \frac1{a^{-b}}$. For $b\in \Bbb R \setminus \Bbb Z$:




$$a^b = \exp(b \ln a)$$




No need to freak out, though, because all the rules you like still hold. Note that there are other possible ways to define $a^b$ when $b$ is non-integer real number, but this is the simplest one of them.




However, for $a < 0$,




$$a^b = \exp(b \log a)$$




where $\log$ is (some branch of) the complex logarithm function, which is defined for $z \in \Bbb C^*$ as:





$$\log z = \ln|z| + i\arg(z)$$




where some branch of $\arg$ is pre-chosen. And $\exp$ is the complex exponential function.



Now consider that for $a<0$, one has:



$$a^{bc} = \exp(bc \log a), \text{ and} \\ (a^b)^c = \exp(c\log(a^b))$$



These two would be equal only if $\log a^b = b \log a$, but unfortunately that's not true. For instance, assuming that we are dealing with the principal branch of logarithm, $\log((-1)^2) = \log 1 = 0$, while $\log(-1) = \ln|-1| + i\arg(-1) = i\pi$, and $0 \neq 2i\pi$.
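A numeric illustration of this failure (my own sketch), mirroring the definition $a^b=\exp(b\log a)$ with the principal branch that `cmath` uses:

```python
import cmath, math

s = math.sqrt(2)
inner = cmath.exp(s * cmath.log(-1))     # (-1)^sqrt(2), principal branch
lhs = cmath.exp(s * cmath.log(inner))    # ((-1)^sqrt(2))^sqrt(2)
rhs = cmath.exp(2 * cmath.log(-1))       # (-1)^2 computed the same way
print(lhs)   # about (-0.858 - 0.514j): not x^2
print(rhs)   # (1 + 0j) up to rounding
```

The wrap-around happens in the second `cmath.log`: the argument of the inner value is $\sqrt2\,\pi - 2\pi$, not $\sqrt2\,\pi$, which is exactly the failure of $\log a^b = b\log a$ described above.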



exponentiation - Why is $0^0$ also known as indeterminate?




I've seen on Maths Is Fun that $0^0$ is also known as indeterminate. Seriously, when I wanted to see the value for $0^0$, it just told me it's indeterminate (the exponent calculator even misspelled it as "indeterminant"). Let's cut to the chase now. I've seen this:$$5^0=1$$$$4^0=1$$$$3^0=1$$$$2^0=1$$$$1^0=1$$$$0^0=1$$and this:$$0^5=0$$$$0^4=0$$$$0^3=0$$$$0^2=0$$$$0^1=0$$$$0^0=0$$Right here, it seems like $0^0$ can be equal to either $0$ or $1$. This must be why $0^0$ is indeterminate. Do you agree with me?


Answer



Well, any number raised to the power of zero equals $1$ because the base gets divided by itself: for example, $3^0$ equals $3/3$, which equals $1$. But $0^0$ "equals" $0/0$, which could be any number, and that is why it's indeterminate (and also why $0/0$ itself is undefined).


galois theory - Looking for GF(16), GF(32) ... GF (256) tables

I'm learning about Galois Fields by implementing the code to create the addition/multiplication/log/ilog tables. I've got working code but I cannot find many of the actual Galois Field tables online to verify I'm calculating the tables correctly.




I've got GF(8) from Wolfram Alpha. However, when I try GF(16), GF(32), etc. I get grayscale images instead of actual tables, presumably because the tables themselves become too unwieldy.



Is there a site online where these tables are listed? Or, source code that has these tables? Or alternatively, open source software that I can run to generate these tables?
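I'm not aware of one canonical site listing all of these, but generating them takes only a few lines; here is a sketch (my own code) using bit-vector polynomials and a multiply-and-reduce loop. The reduction polynomials below are common choices ($x^3+x+1$, $x^4+x+1$, $x^5+x^2+1$, and the AES polynomial $x^8+x^4+x^3+x+1$); any irreducible polynomial of the right degree gives a valid, isomorphic field, but the multiplication tables differ entry by entry, which may explain mismatches against tables found online.

```python
IRREDUCIBLE = {3: 0b1011, 4: 0b10011, 5: 0b100101, 8: 0b100011011}

def gf_mul(a: int, b: int, n: int) -> int:
    # Multiply two elements of GF(2^n), reducing whenever the degree
    # of the accumulating left operand reaches n.
    mod, res = IRREDUCIBLE[n], 0
    while b:
        if b & 1:
            res ^= a           # "add" (XOR) the current shifted copy
        b >>= 1
        a <<= 1
        if a >> n:             # degree hit n: subtract (XOR) the modulus
            a ^= mod
    return res

n = 4
add_table = [[x ^ y for y in range(2**n)] for x in range(2**n)]
mul_table = [[gf_mul(x, y, n) for y in range(2**n)] for x in range(2**n)]
for row in mul_table:
    print(" ".join(f"{v:2d}" for v in row))
```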

summation - Induction proof concerning a sum of binomial coefficients: $sum_{j=m}^nbinom{j}{m}=binom{n+1}{m+1}$

I'm looking for a proof of this identity, but with the sum starting at $j=m$ rather than $j=0$:



http://www.proofwiki.org/wiki/Sum_of_Binomial_Coefficients_over_Upper_Index


$$\sum_{j=m}^n\binom{j}{m}=\binom{n+1}{m+1}$$

linear algebra - Cannot get a vector that is not image under transformation




I am trying to solve the following question, which was posted as an image (together with an image of its answer) in the original thread.

I was able to solve the first part, which is about one-to-one: all I have to do is set the transformation equal to $0$, solve for $x, y, z$, and give a vector based on the relationship I found.




For the 2nd question, which is about the map being onto, I know how to determine whether it is onto or not (just put the matrix in row echelon form and check whether its rank is $3$, since the transformation maps into $3$-dimensional space).



However, I don't know how to find a vector like $[5, -10, -5]$, as shown in the answer image above. I know there are many valid answers, but I simply don't know how to produce any vector that is not an image under $T$. Could you explain, in the simplest way, how to find such a vector?



Thank you very much if you can help me!


Answer



The range of the linear transformation defined by $A$ is equal to the column space of $A$. In order to find a vector that is not in the column space, you can look at a row echelon form of $A$:



$$ B = \begin{pmatrix} 1 & 0 & 2 & -4 \\ 0 & 1 & 0 & -5 \\ 0 & 0 & 0 & 0 \end{pmatrix}. $$



Since the rank of the matrix is not full, there will be columns in the row echelon form that will satisfy a linear relation. For example, we have $C_3 = 2C_1$ and $-4 C_1 - 5 C_2 = C_4$.



Now, since performing row operations changes the column space, the column space of $A$ and the column space of $B$ are not identical! However, any linear relation that holds between the columns of $A$ will also hold between the corresponding columns of $B$ and vice versa. How does this help us? Since the column space of $B$ is spanned by the first two columns, the same holds for $A$ so in order to find a vector that is not in the column space of $A$, we need to find a vector that is linearly independent of



$$ \begin{pmatrix} 5 \\ 1 \\ -2 \end{pmatrix}, \begin{pmatrix} -5 \\ -1 \\ 3 \end{pmatrix}. $$



Now, note that both vectors above satisfy $5y - x = 0$ and thus any linear combination of them must satisfy $5y - x = 0$. In order to get a vector that is linearly independent from both vectors, we can take any vector $(x,y,z)^T$ that doesn't satisfy $5y - x = 0$. For example, $(0,1,0)^T$.
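If you want to check a candidate mechanically, one option (a sketch assuming numpy; the matrix below stacks the two spanning columns quoted above) is to project the candidate onto the column space and look at the residual:

```python
import numpy as np

# The two columns that span the range of A, per the argument above.
S = np.array([[5.0, -5.0],
              [1.0, -1.0],
              [-2.0, 3.0]])

for v in (np.array([5.0, -10.0, -5.0]),   # the answer image's vector
          np.array([0.0, 1.0, 0.0])):     # the vector found above
    coeffs, *_ = np.linalg.lstsq(S, v, rcond=None)
    residual = np.linalg.norm(S @ coeffs - v)
    print(v, residual)   # nonzero residual: v is not an image under T
```

Equivalently, both candidates violate $5y - x = 0$, which is the quick pencil-and-paper test.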



Sunday, September 9, 2018

real analysis - How to prove $B$ is the least possible upper bound?


I saw the following sequence-



$S_n=\frac12,\frac23,\frac34,\frac45,\frac56,\ldots,\frac{n-1}{n}$.





Now, I may say that-



$$ (A)\leq S_n\leq(B),$$



where, $A=\frac12$ and $B=1$



Now, am I correct in saying that $B$ is the least possible upper bound?



If yes, how do I prove it?




Thanks for any help.
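One standard argument (a sketch): every term satisfies $\frac{n-1}{n} = 1-\frac1n < 1$, so $B=1$ is an upper bound. To see it is the least one, take any $c<1$ and choose an integer $n > \frac{1}{1-c}$; then $1-\frac1n > c$, so $c$ fails to be an upper bound. Hence $\sup_n S_n = 1$, and the supremum is not attained by any term.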

combinatorics - Another Hockey Stick Identity



I know this question has been asked before and has been answered here and here.



I have a slightly different formulation of the Hockey Stick Identity and would like some help with a combinatorial argument to prove it. First I have this statement to prove:

$$
\sum_{i=0}^r\binom{n+i-1}{i}=\binom{n+r}{r}.
$$
I already have an algebraic solution here using the Pascal Identity:
$$
\begin{align*}
\binom{n+r}{r}&=\binom{n+r-1}{r}+\binom{n+r-1}{r-1}\\
&=\binom{n+r-1}{r}+\left[\binom{n+(r-1)-1}{r-1}+\binom{n+(r-1)-1}{r-2}\right]\\
&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{r-1}+\left[\binom{n+(r-2)-1}{r-2}+\binom{n+(r-2)-1}{r-3}\right]\\
&\,\,\,\vdots\\
&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{r-1}+\binom{n+(r-2)-1}{r-2}+\cdots+\left[\binom{n+1-1}{1}+\binom{n+1-1}{0}\right]\\
&=\binom{n+r-1}{r}+\binom{n+(r-1)-1}{r-1}+\binom{n+(r-2)-1}{r-2}+\cdots+\binom{n+1-1}{1}+\binom{n-1}{0}\\
&=\sum_{i=0}^r\binom{n+i-1}{i}.
\end{align*}
$$



I have read both combinatorial proofs in the referenced answers above, but I cannot figure out how to alter the combinatorial arguments to suit my formulation of the Hockey Stick Identity. Basically, this formulation gives the "other" hockey stick. Any ideas out there?


Answer



Note that $\binom{n+r}{r}=\binom{n+r}{n}$ is the number of subsets of $\{1,2,\ldots,n+r\}$ of size $n$. On the other hand, for $i=0,1,2,\ldots,r$, $\binom{n+i-1}{i}=\binom{n+i-1}{n-1}$ is the number of subsets of $\{1,2,\ldots,n+r\}$ of size $n$ whose largest element is $n+i$.


integration - $int^infty_0 frac{dx}{x^n + 1}$, n odd

Well, I saw this integral :



$$\int^\infty_0 \frac{dx}{x^n + 1}$$



on some questions (like this one : Calculating a real integral using complex integration )




But it has always been for $n$ even.
Is there something wrong with $n$ odd? If not, how do you compute this? By the same argument?



Thank you, and sorry if this is easy. I can't figure out how to do it, since the integrand is neither odd nor even.

analysis - If $f(x + y) = f(x) + f(y)$ showing that $f(cx) = cf(x)$ holds for rational $c$


For $f:\mathbb{R}^n \to \mathbb{R}^m$, if $f(x + y) = f(x) + f(y)$, then for rational $c$, how would you show that $f(cx) = cf(x)$ holds?


I tried that for $c = \frac{a}{b}$, $a,b \in \mathbb{Z}$ clearly $$ f\left(\frac{a}{b}x\right) = f\left(\frac{x}{b}+\dots+\frac{x}{b}\right) = af\left(\frac{x}{b}\right) $$ but I can't seem to finish it, any help?



Answer



Try computing $b\,f(x/b)$.
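Spelling the hint out: $f(x) = f\left(b\cdot\frac xb\right) = b\,f\!\left(\frac xb\right)$, so $f\!\left(\frac xb\right) = \frac1b f(x)$, and combined with the display above, $f\!\left(\frac ab x\right) = a\,f\!\left(\frac xb\right) = \frac ab f(x)$. For the remaining cases: $f(0)=f(0+0)=2f(0)$ forces $f(0)=0$, and then $0=f(x+(-x))=f(x)+f(-x)$ gives $f(-x)=-f(x)$, which extends the identity to negative rationals.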


algebra precalculus - Is $pi = 4$ really?

Can anyone explain what's wrong with this?



[Image: the well-known "proof" that repeatedly folds the corners of a square of perimeter $4$ toward the inscribed circle, concluding $\pi = 4$]

Saturday, September 8, 2018

discrete mathematics - How to compute: $(89^{3} bmod 79)^4bmod 26$?

How to compute $(89^{3} \bmod 79)^4\bmod 26$?



It's easy to calculate it by evaluating $89^{3}$ first and then mod $79$, but it seems stupid to do it this way. Do we have a faster way to evaluate it?
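One way that avoids big numbers: reduce the base before exponentiating. Since $89 \equiv 10 \pmod{79}$,
$$89^3 \equiv 10^3 = 1000 \equiv 1000 - 12\cdot 79 = 52 \pmod{79},$$
and $52 = 2 \cdot 26 \equiv 0 \pmod{26}$, so $52^4 \equiv 0 \pmod{26}$. The whole expression is $0$, with no large intermediate values needed.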

find the limit: $\lim_{x \to 0} \frac{(\sin 2x-2x\cos x)(\tan 6x+\tan(\frac{\pi}{3}-2x)-\tan(\frac{\pi}{3}+4x))}{x\sin x \tan x\sin 2x}=?$


Find the limit, without L'Hôpital's rule or Taylor series:


$$\lim_{x \to 0} \frac{(\sin 2x-2x\cos x)(\tan 6x+\tan(\frac{\pi}{3}-2x)-\tan(\frac{\pi}{3}+4x))}{x\sin x \tan x\sin 2x}=?$$


I know that:


$$\lim_{x \to 0} \frac{\sin x}{x}=1=\lim_{x \to 0}\frac{\tan x}{x}$$


But I cannot work it out. Please help.


Answer




If you know that $\enspace\displaystyle \lim\limits_{x\to 0}\frac{1}{x^2}\left(1-\frac{\sin x}{x}\right)=\frac{1}{3!},\enspace$ then you can answer your question easily:


$\displaystyle \frac{(\sin(2x)-2x\cos x)(\tan(6x)+\tan(\frac{\pi}{3}-2x)-\tan(\frac{\pi}{3}+4x))}{x\sin x\tan x\sin(2x)}=$


$\displaystyle =\frac{(\sin(2x)-2x\cos x)(\frac{\sin(6x)}{\cos(6x)}-\frac{\sin(6x)}{\cos(\frac{\pi}{3}-2x)\cos(\frac{\pi}{3}+4x)})}{x\sin x\tan x\sin(2x)}$


$\displaystyle =\frac{2\sin x\cos x -2x\cos x}{\sin x\tan x\sin(2x)}6\frac{\sin(6x)}{6x}(\frac{1}{\cos(6x)}-\frac{1}{\cos(\frac{\pi}{3}-2x)\cos(\frac{\pi}{3}+4x)})$


$\displaystyle =-\frac{1}{x^2}(1-\frac{\sin x}{x}) (\frac{x}{\sin x}\cos x)^2 \frac{2x}{\sin(2x)} 6\frac{\sin(6x)}{6x}(\frac{1}{\cos(6x)}-\frac{1}{\cos(\frac{\pi}{3}-2x)\cos(\frac{\pi}{3}+4x)})$


$\displaystyle \to -\frac{1}{3!}6(1-4)=3\enspace$ for $\enspace x\to 0$


A note about what I have used:


$\displaystyle \tan x=\frac{\sin x}{\cos x}$


$\sin(2x)=2\sin x\cos x$


$\displaystyle \tan x-\tan y=\frac{\sin(x-y)}{\cos x\cos y}$



algebra precalculus - Solve for x (quadratic equation)

How do you solve the following equation:



\begin{align*}
\sqrt{2017 + \sqrt{2017 - x}} &= x
\end{align*}



I tried squaring it twice, but then I am left with a quartic equation that I cannot solve.

Induction proof for a summation: $sum_{i=1}^n i^3 = left[sum_{i=1}^n iright]^2$




Prove by induction: $\sum_{i=1}^n i^3 = \left[\sum_{i=1}^n i\right]^2$. Hint: Use $k(k+1)^2 = 2(k+1)\sum_{i=1}^k i$.



Basis: $n = 1$ $\sum_{i=1}^1 i^3 = \left[\sum_{i=1}^1 i\right]^2 \to 1^3 = 1^2 \to 1 = 1$.




Hypothesis: Assume true for all $n \le k$.



So far I have the following:



$$\sum_{i=1}^{k+1} i^3 = (k+1)^3 + \sum_{i=1}^k i^3$$



$$(k+1)^3 + \left[\sum_{i=1}^k i\right]^2$$


Answer



For $n=k+1$, $$\sum_{i=1}^{k+1}i^3 = \sum_{i=1}^{k}i^3+(k+1)^3=(\sum_{i=1}^{k}i)^2+(k+1)^3=(\sum_{i=1}^{k}i)^2+k(k+1)^2+(k+1)^2$$




Now using the hint: $k(k+1)^2 = 2(k+1)\sum_{i=1}^k i$.



$$=(\sum_{i=1}^{k}i)^2+2(k+1)\sum_{i=1}^k i+(k+1)^2=(\sum_{i=1}^{k+1}i)^2$$


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...