Wednesday, July 31, 2019

real and imaginary part of square root of any complex number




Let $k\in\mathbb{C}$. What are the real and imaginary parts of any complex number $x$ such that $x^2=k$?



My first idea was writing $k$ and $x$ in polar form: $x=r(\cos{\phi}+i\sin{\phi})$;$k=r'(\cos{\psi}+i\sin{\psi})$. Then use De Moivre's formula such that: $x^2=r^2(\cos{2\phi}+i\sin{2\phi})=r'(\cos{\psi}+i\sin{\psi})$.



Any hints on how to go on?



Another idea could be using roots of unity: we know what $x$ looks like when $x^n=1$.



Answer



Well, you just answered yourself.



If $r_1 e^{i\theta_1} = r_2 e^{i \theta_2} $ then $r_1=r_2$ and $\theta_1 = \theta_2 + 2\pi n$ for some integer $n$. That means in this case that



$$ r^2 = r' \Rightarrow r=\sqrt{r'}$$
$$ 2\phi = \psi + 2\pi n \Rightarrow \phi = \frac{\psi}{2} , \frac{\psi}{2} + \pi $$
Meaning the solution will be $x= \pm \sqrt{r'} \left(\cos \frac{\psi}{2} + i \sin \frac{\psi}{2}\right)$
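As a quick numeric sanity check, here is a small Python sketch (the test value of $k$ is arbitrary, chosen only for illustration):

    # Build x from the polar data of k and square it back.
    import cmath

    k = 3 + 4j                      # arbitrary test value
    r_prime, psi = cmath.polar(k)   # k = r' * e^{i*psi}
    x = (r_prime ** 0.5) * (cmath.cos(psi / 2) + 1j * cmath.sin(psi / 2))
    print(x, x ** 2)                # x**2 reproduces k (up to rounding): (2+1j) (3+4j)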


calculus - How to Integrate $\int_0^\pi \ln(1+\alpha \cos(x)) \,\mathrm{d}x$

I've been trying to learn how to integrate by differentiation under the integral. I've made good progress on some problems, but I seem to not be able to get an answer for $$f(\alpha)=\int_0^\pi \ln(1+\alpha \cos(x)) \,\mathrm{d}x$$



I've managed to get as far as $$f'(\alpha)=\int_0^\pi \frac{\cos(x)}{1+ \alpha \cos(x)} \,\mathrm{d}x$$




But this seems like a ridiculous integral to try and integrate by elementary methods, indeed an integral calculator returns $$\dfrac{x}{a}+\dfrac{\ln\left(\left|\left(a-1\right)\tan\left(\frac{x}{2}\right)-\sqrt{a^2-1}\right|\right)-\ln\left(\left|\left(a-1\right)\tan\left(\frac{x}{2}\right)+\sqrt{a^2-1}\right|\right)}{a\sqrt{a^2-1}}$$



Hopefully someone can advise on whether I've already made a mistake in my working, or whether I've just completely misunderstood the method.

trigonometry - Is Euler's formula valid for complex arguments



I found this question here :



Evaluate $\cos(z)$, given that $z = i \log(2+\sqrt{3})$



It says that
$$e^{-iz} = \cos(z) - i \sin(z)$$
isn't necessarily true because $\sin z$ is imaginary (for the particular value of $z$ in the link). But the proof of Euler's formula using Maclaurin series suggests that it should be valid for complex arguments too. What am I missing?



Answer



No, it doesn't say that $$e^{-iz}=\cos(z)-i\sin(z)$$ is not necessarily true! It is true. What it says in that other answer is that $\cos(z)$ is not necessarily the real part of $e^{-iz}$.


real analysis - proof: convergence of recursive sequence (Assignment)



Question: Recursively define a sequence by $x_1=1$ and $x_{n+1}=(\sqrt2)^{x_n}$.

Prove that the sequence $\{x_n\}_{n=1}^{\infty}$ converges.



Attempt: To prove its convergence, I have to show the sequence is bounded and monotone.



I can prove the sequence $x_n\ge 1$ by induction.



I can prove the sequence is monotone increasing $x_n\le x_{n+1}$ by induction.



Since it is monotone increasing, I need to show the sequence is bounded above, but I don't know how to find this.




Could you give some idea? By the way, it is an assignment question.


Answer



Note that $a^b \le a^c$ for $1\le b \le c$ and $a\ge 1$.



$|x_{2}|=(\sqrt{2})^1 \le (\sqrt{2})^2=2$.



$|x_{3}|=(\sqrt{2})^{x_2} \le (\sqrt{2})^{|x_2|} \le (\sqrt{2})^2=2$.



$\vdots$


$|x_{n}| \le 2$.



I'm only just restating what User L KM wrote in their answer.
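For intuition (not part of the proof), a few iterations in Python show the terms climbing toward the bound $2$:

    # Iterate x_{n+1} = sqrt(2)**x_n starting from x_1 = 1.
    x = 1.0
    for n in range(20):
        x = 2 ** (x / 2)   # (sqrt(2))**x equals 2**(x/2)
        print(x)           # increases and approaches 2, the fixed point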


Tuesday, July 30, 2019

pi - This proof is wrong. How? (See image in link below)




Came across this on the internet. I have some ideas regarding this but I wanted to know more such reasonings.
[Image: "Proof of Pi"]


Answer



Fundamentally, you have jumped in without a definition of the length of an arc.



The circumference isn't approximated by the sum of lengths of the lines drawn as shown in the image, but by the sum of the hypotenuses of the right-angled triangles formed around the edge of the circle.


elementary set theory - Cardinal arithmetic confusion: What is $|\Bbb R|+|\Bbb N|$?

I do not understand how to calculate the addition of two cardinals.
I know that the formula is as follows:




if $\alpha$ and $\beta$ are two cardinals, then



$\alpha + \beta= |\{(a,0):a\in \alpha\}\cup\{(b,1):b\in\beta\}|$.



If $\alpha=|\mathbb{R}|$ and $\beta=|\mathbb{N}|$, what is the cardinal $\alpha+\beta$?

Monday, July 29, 2019

What is wrong in this proof that $\pi=2$ or $x=2$?



Let us consider the number $$\Large\pi^{\pi^\pi}=\pi^{\pi\cdot\pi}=\pi^{\pi^2}$$



As the bases are equal, the exponents must be equal, so $$\pi=2$$




You can take any $x$ instead of $\pi$.



What is wrong in this proof?


Answer



Let's write $a \uparrow b$ to mean $a^b$.



Then the following reasoning is correct: $$(\pi \uparrow \pi)\uparrow \pi = \pi \uparrow (\pi \cdot \pi) = \pi \uparrow (\pi \uparrow 2)$$



However, we cannot necessarily deduce that the RHS equals




$$(\pi \uparrow \pi) \uparrow 2$$



because exponentiation isn't associative. Indeed, Google calculator tells me that:




  • $\pi \uparrow (\pi \uparrow 2) \approx 80662.6659386$


  • $(\pi \uparrow \pi) \uparrow 2 \approx 1329.48908322$





so if the calculator is correct to even the first decimal place, then



$$(\pi \uparrow \pi) \uparrow 2 \neq \pi \uparrow (\pi \uparrow 2).$$



Moral of the story: if in doubt, find better notation!
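For instance, the two bracketings are easy to compare in Python (reproducing the values quoted above):

    import math

    print(math.pi ** (math.pi ** 2))      # pi ^ (pi ^ 2) is about 80662.666
    print((math.pi ** math.pi) ** 2)      # (pi ^ pi) ^ 2 is about 1329.489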


functional equations - Show that $f$ is a Cauchy function

Let $f \colon \mathbb{R} \to \mathbb{R}$ be a solution of the functional equation
$$|f(x + y)| = |f(x)| + |f(y)| \quad \forall x,y \in\mathbb{R}.$$
Show that $f$ is an additive function.

elementary number theory - Divisibility and congruence.



How to prove that:
$$32 | \phi(51^5) \tag{1}$$
and

$$51^{442} \equiv 2 \mod 31\tag{2}$$
Thanks in advance.


Answer



For the first:
$51 = 3\cdot 17$. Use this to compute
$$\phi(51^5) = \phi(3^5 17^5) = 2 \cdot 3^4 \cdot 16 \cdot 17^4 = 2^5 3^4 17^4$$
Can you see now why $32|\phi(51^5)$?
The second one is false:
$$51^{442} \equiv 20^{22} \equiv 28^{10}\cdot 28 \equiv 9^4 \cdot 9 \cdot 28 \equiv 19^2 \cdot 4 \equiv 20\cdot 4 \equiv 18 \pmod{31}$$
The algorithm used here is called square-and-multiply, in case you want to know how to compute such modular powers efficiently. It should be obvious that $2\not\equiv 18\pmod{31}$.
Indeed if you want
$$51^k \equiv 2 \pmod{31}$$

You must choose $k$ such that $k\equiv 3\pmod{15}$, because the discrete logarithm and order are
$$\log_{51}2 \pmod{30} = 3\\
\mathrm{ord}_{31}(51) = 15$$
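For reference, here is a minimal Python sketch of square-and-multiply (the helper name is mine), checked against the built-in pow:

    def modpow(base, exp, mod):
        result = 1
        base %= mod
        while exp > 0:
            if exp & 1:                    # multiply when the current bit is set
                result = result * base % mod
            base = base * base % mod       # square at every step
            exp >>= 1
        return result

    print(modpow(51, 442, 31), pow(51, 442, 31))   # both give 18, not 2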


exponentiation - 0's Exponents are impossible?




I've had something that's been bugging me, and I tried researching it and asked my math teacher. Neither had a sufficient answer.
The convention for $0$ is that $0$ raised to any exponent except $0$ is $0$. For example,



$0^3 = 0$, but

$0^0 =$ undefined



However, the proof that $0^0$ is undefined is shown thus:
$$\frac{0^x}{0^x} = 0^{(x-x)} = 0^0 = \text{undefined}$$



You can apply this to any exponent though, such as:
$$\frac{0^6}{0^3} = 0^{(6-3)} = 0^3$$
But $0^6 = 0$ and $0^3 = 0$, so this expression is equal to $0/0$, which should be undefined, right?



Am I doing something wrong here? Please help!
Gil


Answer



One should not say "equals undefined"; one should say "is undefined". This is the "is" of predication, not the "is" of equality.



$0^0$ is indeterminate in the sense that if $f(x)$ and $g(x)$ both approach $0$ as $x\to a$ then $f(x)^{g(x)}$ could approach any positive number or $0$ or $\infty$ depending on which functions $f$ and $g$ are.



But in some contexts it is important that $0^0$ be taken to be $1$. One of those contexts is in the identity

$$
e^z = \sum_{n=0}^\infty \frac{z^n}{n!}.
$$
This fails to hold when $z=0$ unless $0^0=1$, since the first term of the series is then $\dfrac{0^0}{0!}$.



In combinatorial enumeration it is also important in some contexts that $0^0$ is $1$, for the same reason $2^0$ is $1$: if you multiply by something $0$ times, it's the same as multiplying by $1$. That is also one way of looking at the reason why $0!$ is $1$.


Modular arithmetic and linear congruences


Assuming a linear congruence:


$ax\equiv b \pmod m$


It's safe to say that one solution would be:


$x\equiv ba^{-1} \pmod m$


Now, the first condition i memorized for a number $a$ to have an inverse $mod(m)$ was:


$\gcd(a,m) = 1$


Which stems from the fact (and correct me here) that a linear congruence has a solution if that gcd divides $b$. Since on an inverse calculation we have:


$ax\equiv 1 \pmod m$



The only number that divides one is one itself, so it makes sense.


Now comes the part that annoys me most:


"If the condition that tells us that there is an inverse $mod (m)$ for $a$ says that $\gcd(a,m)=1$, then how come linear congruences where the $\gcd(a,m) \ne 1$ have solution? Why do we say that a linear congruence where $\gcd(a,m) = d > 1$ has d solutions? If you can't invert $a$, then you can't do this:"


$ax\equiv b \pmod m \implies x\equiv ba^{-1} \pmod m $


Please help on this one. It's kinda tormenting me :(


Answer



The long and the short of it is: $ax \equiv b \pmod m$ has solutions iff $\gcd(a,m)$ divides $b$.


As you said, if $\gcd(a,m) = 1$, then we can multiply by the inverse of $a$ to get our (unique!) solution.


But if $\gcd(a,m) = d > 1$, we still have a chance at finding solutions, even though there is no inverse of $a$ mod $m$.



Assume $d \mid b$.



Then there are integers $a', m', b'$ such that $a = da'$, $b = db'$, and $m = dm'$. $$ax \equiv b \pmod m \iff a'x \equiv b' \pmod{m'} $$


But since $d$ was the GCD of $a$ and $m$, we know $\gcd(a', m') = 1$, and we can construct a solution mod $m'$. For notation's sake, let $c$ be an integer such that $ca' \equiv 1 \pmod {m'}$. $$a'x \equiv b' \pmod{m'} \iff x \equiv b' c \pmod {m'}$$


Now we can "lift" our solution mod $m'$ to many solutions mod $m$.: $$ x \equiv b' c \pmod {m'} \iff \exists k \quad x \equiv b'c + km' \pmod m $$



Say there is a solution to the congruence. Then $ax \equiv b \pmod m$ implies that for some $k$, $ax + km = b$. But if $d$ divides $a$ and $m$, it must also divide $b$.



So it is a necessary and sufficient condition.
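A short Python sketch of the whole procedure (the function names are mine, not standard):

    def ext_gcd(a, b):
        # Return (g, x, y) with a*x + b*y = g = gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def solve_congruence(a, b, m):
        # All solutions of a*x = b (mod m) in range(m), or [] if none.
        g, _, _ = ext_gcd(a, m)
        if b % g != 0:
            return []
        a1, b1, m1 = a // g, b // g, m // g
        _, c, _ = ext_gcd(a1, m1)          # c is an inverse of a' mod m'
        x0 = b1 * c % m1
        return [x0 + k * m1 for k in range(g)]   # d = g solutions lifted mod m

    print(solve_congruence(6, 4, 10))   # gcd(6,10)=2 divides 4, so two solutions: [4, 9]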


sequences and series - Formula for partial sum of (10)/(10n+1)

I'm trying to find $S_n$ of an infinite series, and I'm having trouble. Here is the equation:



$$\sum_{n=1}^\infty \frac{10}{10n+1}$$




This gives me these terms:



$$S_n = \frac{10}{11}+\frac{10}{21}+\frac{10}{31}+\frac{10}{41}+ ... + \frac{10}{10n+1}$$



After I calculate the terms of $S_n$, I get:



$$S_n = \frac{10}{11},\frac{20}{32},\frac{30}{63},\frac{40}{104}, ... $$



Obviously the top is 10n, but I'm having trouble with the bottom. I recognize a pattern in the differences of the terms, mainly that each is separated by the previous difference + 10:




$$S_2-S_1=21$$



$$S_3-S_2=31$$



$$S_4-S_3=41$$



But I have no idea how to translate that into a formula. Note that I am aware that the series diverges, but I would still like to create a formula with which I can take the limit of infinity to verify that it diverges. Any suggestions?



EDIT: Apparently I'm asking the wrong question. What I'm essentially trying to figure out is how to determine whether the series converges or diverges based on the information available. I can use intuition to come to the conclusion it's divergent, but how do I do it mathematically?
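One standard way to settle the edited question is a direct comparison with the harmonic series: since $$\frac{10}{10n+1} > \frac{10}{10n+10} = \frac{1}{n+1},$$ the partial sums satisfy $S_n > \sum_{k=1}^n \frac{1}{k+1}$, and the harmonic series diverges, so $S_n \to \infty$ and the series diverges.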

Sunday, July 28, 2019

real analysis - Where does this sequence $\sqrt{7}$, $\sqrt{7+\sqrt{7}}$, $\sqrt{7+\sqrt{7+\sqrt{7}}}$, ... converge?





The given sequence is $\sqrt{7}$, $\sqrt{7+ \sqrt{7}}$, $\sqrt{7+\sqrt{7+\sqrt{7}}}$, ..., and so on.



The sequence is increasing, so to converge it must be bounded above. Now it looks like the terms would not exceed 7. The given options are





  1. $\frac{1+\sqrt{33}}{2}$


  2. $\frac{1+\sqrt{32}}{2}$


  3. $\frac{1+\sqrt{30}}{2}$


  4. $\frac{1+\sqrt{29}}{2}$




How do I proceed now?
Thanks for any help.


Answer



Trick: Let $X = \sqrt{ 7 + \sqrt{ 7 + ... } } $. We have $X = \sqrt{ 7 + X } $ and so $X^2 = 7 + X $. Now you solve the quadratic equation.
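Completing the trick: $X^2 = 7 + X$ means $X^2 - X - 7 = 0$, so $$X = \frac{1 \pm \sqrt{1+28}}{2} = \frac{1 \pm \sqrt{29}}{2},$$ and since every term of the sequence is positive we take the positive root, $X = \frac{1+\sqrt{29}}{2}$, which is option 4.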



calculus - Identification of a curious function

During computation of some Shapley values (details below), I encountered the following function:
$$
f\left(\sum_{k \geq 0} 2^{-p_k}\right) = \sum_{k \geq 0} \frac{1}{(p_k+1)\binom{p_k}{k}},
$$
where $p_0 > 0$ and $p_{k+1} > p_k$ for all $k$. In other words, the input to $f$ is the binary expansion of a real number in the range $[0,1]$, and the $p_k$ correspond to the positions of $1$s in the binary expansion.



For example, $f(2^{-t}) = 1/(t+1)$, so $f(1/2) = 1/2$, $f(1/4) = 1/3$ and so on. More complicated examples are $f(5/8) = f(2^{-1} + 2^{-3}) = 1/2 + 1/(4\cdot 3) = 7/12$ and
$$ f(2/3) = f\left(\sum_{k \geq 0}2^{-(2k+1)}\right) = \sum_{k \geq 0} \frac{1}{(2k+2)\binom{2k+1}{k}} = \frac{\pi}{\sqrt{27}}.$$



The function $f$ is a continuous increasing function satisfying $f(0) = 0$, $f(1) = 1$, and $f(1-t) = 1-f(t)$ for $t \in [0,1]$. It has vertical asymptotes at dyadic points.




Here is a plot of $f$: [plot of $f$]




Is the function $f$ known?







Here is where $f$ came from. Let $n \geq 1$ be an integer and let $t \in [0,1]$. For a permutation $\pi$ of the numbers $\{ 2^{-m} : 0 \leq m \leq n-1 \}$ satisfying $\pi^{-1}(1) = i$, we say that $\pi$ is pivotal if $\sum_{j


For example, take $n = 4$. The permutation $1/8,1/2,1,1/4$ is pivotal for $t \in (5/8,1]$. For all $n \geq 2$ we have $f_n(1/2) = 1/2$, since $\pi$ is pivotal iff $1$ appears before $1/2$ in $\pi$. The general formula for $f$ is derived in a similar way.



We leave it to the reader to figure out how $f_n$ measures some Shapley value. The functions $f_n$ are step functions with steps of length $1/2^{n-1}$. They are left-continuous, and are equal to $f$ at the breakpoints.

modular arithmetic - How can I tell if a number in base 5 is divisible by 3?




I know of the sum of digits divisible by 3 method, but it seems to not be working for base 5.



How can I check if number in base 5 is divisible by 3 without converting it to base 10 (or 3, for that matter)?


Answer



Divisibility rules generally rely on the remainders of the weights of digits having a certain regularity. The standard method for divisibility by $3$ in the decimal system works because the weights of all digits have remainder $1$ modulo $3$. The same is true for $9$. For $11$, things are only slightly more complicated: Since odd digits have remainder $1$ and even digits have remainder $-1$, you need to take the alternating sum of digits to test for divisibility by $11$.



In base $5$, we have the same situation for $3$ as we have for $11$ in base $10$: The remainder of the weights of odd digits is $1$ and that of even digits is $-1$. Thus you can check for divisibility by $3$ by taking the alternating sum of the digits.



More generally, in base $b$ the sum of digits works for the divisors of $b-1$ and the alternating sum of digits works for the divisors of $b+1$.
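An empirical check of the alternating-sum rule in Python (a sketch; the helper name is mine):

    def alt_digit_sum_base5(n):
        # Alternating sum of base-5 digits, least significant digit taken positive.
        s, sign = 0, 1
        while n:
            s += sign * (n % 5)
            sign = -sign
            n //= 5
        return s

    assert all((n % 3 == 0) == (alt_digit_sum_base5(n) % 3 == 0) for n in range(1, 10000))
    print("rule verified for 1..9999")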



algebra precalculus - An incorrect method to sum the first $n$ squares which nevertheless works



Start with the identity



$\sum_{i=1}^n i^3 = \left( \sum_{i = 1}^n i \right)^2 = \left(\frac{n(n+1)}{2}\right)^2$.




Differentiate the left-most term with respect to $i$ to get



$\frac{d}{di} \sum_{i=1}^n i^3 = 3 \sum_{i = 1}^n i^2$.



Differentiate the right-most term with respect to $n$ to get



$\frac{d}{dn} \left(\frac{n(n+1)}{2}\right)^2 = \frac{1}{2}n(n+1)(2n+1)$.



Equate the derivatives, obtaining




$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$,



which is known to be correct.



Is there any neat reason why this method happens to get lucky and work for this case?


Answer



Let $f_k(n)=\sum_{i=1}^n i^k$. We all know that $f_k$ is actually
a polynomial of degree $k+1$. Also $f_k$ can be characterised by the two
conditions:
$$f_k(x)-f_k(x-1)=x^k$$

and
$$f_k(0)=0.$$
Differentiating the first condition gives
$$f_k'(x)-f_k'(x-1)=k x^{k-1}.$$
Therefore the polynomial $(1/k)f_k'$ satisfies the first of the two
conditions that $f_{k-1}$ does. But it may not satisfy the second. But then
$(1/k)(f_k'(x)-f_k'(0))$ does. So
$$f_{k-1}(x)=\frac{f_k'(x)-f_k'(0)}k.$$



The mysterious numbers $f_k'(0)$ are related to the Bernoulli numbers,

and when $k\ge3$ is odd they obligingly vanish...


calculus - Finding the limit of $\frac{Q(n)}{P(n)}$ where $Q,P$ are polynomials



Suppose that $$Q(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots+a_{1}x+a_{0} $$and $$P(x)=b_{m}x^{m}+b_{m-1}x^{m-1}+\cdots+b_{1}x+b_{0}.$$ How do I find $$\lim_{x\rightarrow\infty}\frac{Q(x)}{P(x)}$$ and what does the sequence $$\frac{Q(k)}{P(k)}$$ converge to?




For example, how would I find what the sequence $$\frac{8k^2+2k-100}{3k^2+2k+1}$$ converges to? Or what is $$\lim_{x\rightarrow\infty}\frac{3x+5}{-2x+9}?$$



This is being asked in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.



and here: List of abstract duplicates.


Answer



Short Answer:



The sequence $\displaystyle\frac{Q(k)}{P(k)}$ will converge to the same limit as the function $\displaystyle\frac{Q(x)}{P(x)}.$ There are three cases:




$(i)$ If $n>m$ then it diverges to either $\infty$ or $-\infty$ depending on the sign of $\frac{a_{n}}{b_{m}}$.



$(ii)$ If $n<m$ then it converges to $0$.

$(iii)$ If $n=m$ then it converges to $\frac{a_{n}}{b_{n}}$.


Saturday, July 27, 2019

summation - How does the sum of the series “$1 + 2 + 3 + 4 + 5 + 6\ldots$” to infinity = “$-1/12$”?

(I was requested to edit the question to explain why it is different that a proposed duplicate question. This seems counterproductive to do here, inside the question it self, but that is what I have been asked by the site and moderators. There is no way for me to vote against their votes. So, here I go: Please stop voting this as a duplicate so quickly, which will eventually lead to this question being closed off. Yes, the other question linked to asks the same math, but any newcomer to the problem who was exposed to it via physics, as I was, will prefer this question instead of the one that is purely mathematically. I beg the moderators to not be pedantic on this one. This question spills into physics, which is why I did the cross post to the physics forum as well.)



How does the sum of the series “1 + 2 + 3 + 4 + 5 + 6…” to infinity = “-1/12”, in the context of physics?



I heard Lawrence Krauss say this once during a debate with Hamza Tzortzis (http://youtu.be/uSwJuOPG4FI). I found a transcript of another debate between Krauss and William Lane Craig which has the same sum. Here is the paragraph in full:





Let’s go to some of the things Dr. Craig talked about. In fact, the
existence of infinity, which he talked about which is
self-contradictory, is not self-contradictory at all. Mathematicians
know precisely how to deal with infinity; so do physicists. We rely on
infinities. In fact, there’s a field of mathematics called “Complex
Variables” which is the basis of much of modern physics, from
electro-magnetism to quantum mechanics and beyond, where in fact we
learn to deal with infinity; without the infinities we couldn’t do the

physics. We know how to sum infinite series because we can do complex
analysis. Mathematicians have taught us how. It’s strange and very
unappetizing, and in fact you can sum things that look ridiculous. For
example, if you sum the series, “1 + 2 + 3 + 4 + 5 + 6…” to infinity,
what’s the answer? “-1/12.” You don’t like it? Too bad! The
mathematics is consistent if we assign that. The world is the way it
is whether we like it or not.




-- Lawrence Krauss, debating William Lane Craig, March 30, 2011




Source: http://www.reasonablefaith.org/the-craig-krauss-debate-at-north-carolina-state-university



CROSS POST: I'm not sure if I should post this in mathematics or physics, so I posted it in both. Cross post: https://physics.stackexchange.com/questions/92739/how-does-the-sum-of-the-series-1-2-3-4-5-6-to-infinity-1-12



EDIT: I did not mean to begin a debate on why Krauss said this. I only wished to understand this interesting math. He was likely trying to showcase Craig's lack of understanding of mathematics or logic or physics or something. Whatever his purpose can be determined from the context of the full script that I linked to above. Anyone who is interested, please do. Please do not judge him out of context. Since I have watched one of these debates, I understand the context and do not hold the lack of a full breakdown as being ignorant. Keep in mind the debate I heard this in was different from the debate above.

induction - Prove that for every natural number $n>15$ there exist natural numbers $x,y\geqslant1$ which solve the equation.


Prove that for every natural number $n>15$ there exist natural numbers $x,y\geqslant1$ which solve the equation $3x+5y=n$.


So I try induction. The base case is $n=16$.



So $\gcd(5,3)=1$; after the Euclidean algorithm I found:


$3(32-5t)+5(-16+3t)=16$, so I need $32-5t\geqslant1$ for $x$ and $-16+3t\geqslant1$ for $y$, and for $t=6$ I get $x=2$ and $y=2$.


Now suppose it holds for $n$.


How do I show it for $n+1$?


If there is a more elegant way, I would love to see it.


Answer



For $n=16$, it is easy: $16=2\times3+2\times5$.


Now, let $n\in\mathbb{N}\setminus\{1,2,\ldots,14\}$ and suppose that there are natural numbers $x$ and $y$ such that $n=3x+5y$. Then:


  • if $y>1$,\begin{align}n+1&=3x+5y+1\\&=3(x+2)+5(y-1)\end{align}and $x+2,y-1\in\mathbb{N}$.

  • Otherwise, $n+1=3x+5+1=3x+6$, for some natural number $x$ and so$$n+1=3(x-3)+5\times3.$$Note the $x-3\in\mathbb N$, since\begin{align}n+1\geqslant16&\iff3x+6\geqslant16\\&\iff3x\geqslant10\end{align}and therefore, since $x\in\mathbb N$, $x\geqslant 4$.


elementary number theory - Divisibility by 7 rule, and Congruence Arithmetic Laws

I have seen other criteria for divisibility by 7. The criterion described below, presented in the Handbook of Mathematics by I.N. Bronshtein (p. 323), is interesting, but I could not prove it. Let $n = (a_ka_{k-1}\ldots a_2a_1a_0)_{10} = \displaystyle{\sum_{j=0}^{k}}a_{k-j}10^{k-j}$. The expression $$ Q_{3}^{\prime}(n) = (a_2a_1a_0)_{10} - (a_5a_4a_3)_{10} + (a_8a_7a_6)_{10} -\ldots $$ is called the alternating sum of the digits of third order of $n$. For example, $$ Q_{3}^{\prime}(123456789) = 789-456+123=456 $$ Proposition: $7 | n \ \Leftrightarrow \ 7 | Q_{3}^{\prime}(n)$.


proof. ??



Thanks for any help.
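Not a proof, but the key fact behind the criterion is that $10^3 \equiv -1 \pmod 7$, since $1001 = 7\cdot11\cdot13$; the proposition is also easy to test numerically, e.g. in Python:

    def q3(n):
        # Alternating sum of three-digit groups, taken from the right.
        s, sign = 0, 1
        while n:
            s += sign * (n % 1000)
            sign = -sign
            n //= 1000
        return s

    print(q3(123456789))   # 456, as in the example
    assert all((n % 7 == 0) == (q3(n) % 7 == 0) for n in range(10**6))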

Friday, July 26, 2019

sequences and series - Calculating $1+\frac13+\frac{1\cdot3}{3\cdot6}+\frac{1\cdot3\cdot5}{3\cdot6\cdot9}+\frac{1\cdot3\cdot5\cdot7}{3\cdot6\cdot9\cdot12}+\dots$




How to find the infinite sum $$1+\dfrac13+\dfrac{1\cdot3}{3\cdot6}+\dfrac{1\cdot3\cdot5}{3\cdot6\cdot9}+\dfrac{1\cdot3\cdot5\cdot7}{3\cdot6\cdot9\cdot12}+\dots? $$





I can see that 3 cancels out after 1/3, but what next? I can't go further.


Answer



As the denominator of the $n$th term $T_n$ is $\displaystyle3\cdot6\cdot9\cdot12\cdots(3n)=3^n \cdot n!$



(Setting the first term to be $T_0=1$)



and the numerator of the $n$th term is $\displaystyle1\cdot3\cdot5\cdots(2n-1)$, a product of $n$ terms of an arithmetic progression with common difference $2$,




we can write
$\displaystyle1\cdot3\cdot5\cdots(2n-1)=-\frac12\cdot\left(-\frac12-1\right)\cdots\left(-\frac12-(n-1)\right)\cdot(-2)^n$



which suitably resembles the numerator of Generalized binomial coefficients



$$\implies T_n=\frac{-\frac12\cdot\left(-\frac12-1\right) \cdots\left(-\frac12-(n-1)\right)}{n!}\left(-\frac23\right)^n$$



So, here $\displaystyle z=-\frac23,\alpha=-\frac12$ in $\displaystyle(1+z)^\alpha$
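Hence, by the generalized binomial theorem, the sum is $$\sum_{n\ge0} T_n = (1+z)^\alpha = \left(1-\frac{2}{3}\right)^{-\frac12} = \sqrt{3}.$$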


sequences and series - Find the sum $1+\cos(x)+\cos(2x)+\cos(3x)+\cdots+\cos((n-1)x)$




By considering the geometric series $1+z+z^{2}+...+z^{n-1}$ where $z=\cos(\theta)+i\sin(\theta)$, show that $$1+\cos(\theta)+\cos(2\theta)+\cos(3\theta)+...+\cos((n-1)\theta) = \frac{1-\cos(\theta)+\cos((n-1)\theta)-\cos(n\theta)}{2-2\cos(\theta)}$$



I've tried expressing $\cos(n\theta)$ as ${e^{in\theta}+e^{-in\theta}} \over {2}$ but I don't think that will lead anywhere. Does it help that $1+z+z^{2}+z^{3}+...+z^{n-1}$=$e^{0i\theta}+e^{i\theta}+e^{2i\theta}+e^{3i\theta}+...+e^{(n-1)i\theta}$?



So the sum $\sum_{r=0}^{n-1} e^{ir\theta}=\frac{e^{ni\theta}-1}{e^{i\theta}-1}$



Thank you in advance :)


Answer




Your sum can be rewritten: $\Re(\sum \exp{(i n \theta)})$ which is simply a geometric sum. Then make apparent the real and imaginary parts in your result.
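Spelling that out: multiplying numerator and denominator by the conjugate of the denominator gives $$\sum_{r=0}^{n-1} e^{ir\theta} = \frac{e^{ni\theta}-1}{e^{i\theta}-1} = \frac{(e^{ni\theta}-1)(e^{-i\theta}-1)}{|e^{i\theta}-1|^2} = \frac{e^{(n-1)i\theta} - e^{ni\theta} - e^{-i\theta} + 1}{2-2\cos(\theta)},$$ and taking real parts yields $\frac{1-\cos(\theta)+\cos((n-1)\theta)-\cos(n\theta)}{2-2\cos(\theta)}$, as required.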


factorial - Limit of a function not using Stirling's Approximation

I want to compute the following limit:
$$\lim_{n\to\infty} \frac{\left(\frac{e}{F_{n+1}}\right)^{F_{n+1}} F_{n+1}!}{\left(\frac{e}{F_n}\right)^{F_n} F_n!},$$




where $F_n$ is the $n$th Fibonacci number. The limit is easily computed by using Stirling's approximation $n! \simeq \sqrt{2\pi n} \left(\frac{n}{e}\right)^n$:



$$\lim_{n\to\infty} \frac{\left(\frac{e}{F_{n+1}}\right)^{F_{n+1}} F_{n+1}!}{\left(\frac{e}{F_n}\right)^{F_n} F_n!} = \lim_{n\to\infty} \frac{\left(\frac{e}{F_{n+1}}\right)^{F_{n+1}} \sqrt{2\pi F_{n+1}} \left(\frac{F_{n+1}}{e}\right)^{F_{n+1}}}{\left(\frac{e}{F_n}\right)^{F_n} \sqrt{2\pi F_n} \left(\frac{F_n}{e}\right)^{F_n}}\\=\lim_{n\to\infty}\sqrt{\frac{F_{n+1}}{F_n}}\\=\sqrt{\frac{1+\sqrt{5}}{2}}.$$



Is it possible to show this without using Stirling's approximation?

Limit of trigonometric function $\lim_{x\to\pi/3} \frac{1 - 2\cos(x)}{\sin(x - \frac{\pi}{3})}$




I want to compute this limit:



$\displaystyle \lim_{x\to\pi/3} \frac{1 - 2\cos(x)}{\sin(x - \frac{\pi}{3})}$



Using L'Hopital, it is easy to get the result, which is $\sqrt{3}$.



I tried using linear approximation (making $u = x - \displaystyle \frac{\pi}{3}$)



$\displaystyle \lim_{u\to 0} \frac{1 - 2\cos(u + \frac{\pi}{3})}{\sin(u)} = \lim_{u\to 0} \frac{1 - \cos(u) + \sqrt{3}\sin(u)}{u} \approx \lim_{u\to 0} \frac{1 - 1 + \sqrt{3}u}{u} = \sqrt{3}$




But it bothers me to use that sort of linear approximation; I want to get the result in a more formal way. I have tried using the double-angle identities



$$\cos(2x) = \cos^2(x) - \sin^2(x)$$
$$\sin(2x) = 2\sin(x)\cos(x)$$



But I reach a dead end; I cannot come up with a way of simplifying the expressions to get the result:



$\displaystyle\lim_{x\to\pi/3} 2\frac{3\sin^2(\frac{x}{2}) - \cos^2(\frac{x}{2})}{\sqrt{3}\sin(x) - \cos(x)}$



Is there a way of computing this limit without approximations and without L'Hopital?



Answer



HINT:



Use $1-\cos(u)=2\sin^2(u/2)$ along with



$$\lim_{u\to 0}\frac{\sin(u)}{u}=1$$



and



$$\lim_{u\to 0}\frac{\sin^2(u/2)}{u}=0$$





Note that in the OP, $1-2\cos(x)=1-\cos(u)\color{blue}{+}\sqrt 3 \sin(u)$
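Putting the hints together: $$\lim_{u\to 0}\frac{1-\cos(u)+\sqrt{3}\sin(u)}{\sin(u)} = \lim_{u\to 0}\frac{\dfrac{2\sin^2(u/2)}{u}+\sqrt{3}\,\dfrac{\sin(u)}{u}}{\dfrac{\sin(u)}{u}} = \frac{0+\sqrt{3}\cdot 1}{1}=\sqrt{3}.$$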



linear algebra - Proof that $(AA^{-1}=I) \Rightarrow (AA^{-1} = A^{-1}A)$




I'm trying to prove a pretty simple problem - commutativity of multiplication of matrix and its inverse.



But I'm not sure, if my proof is correct, because I'm not very experienced. Could you, please, take a look at it?






My proof:





  • We know, that $AA^{-1}=I$, where $I$ is an identity matrix and $A^{-1}$ is an inverse matrix.

  • I want to prove, that it implies $AA^{-1}=A^{-1}A$



\begin{align}
AA^{-1}&=I\\
AA^{-1}A&=IA\\
AX&=IA \tag{$X=A^{-1}A$}\\

AX&=A
\end{align}
At this point we can see, that $X$ must be a multiplicative identity for matrix $A \Rightarrow X$ must be an identity matrix $I$.



\begin{align}
X = A^{-1}A &= I\\
\underline{\underline{AA^{-1} = I = A^{-1}A}}
\end{align}


Answer



Your claim is not quite true. Consider the example

\begin{align}
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & 1\\
0 & 0
\end{pmatrix} =

\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}.
\end{align}
Suppose $A, B$ are square matrices such that $AB = I$. Observe
\begin{align}
BA= BIA= BA BA = (BA)^2 \ \ \Rightarrow \ \ BA(I-BA) = 0.
\end{align}
Moreover, using the fact that $AB$ is invertible implies $A$ and $B$ are invertible (which is true only in finite dimensional vector spaces), then it follows

\begin{align}
I-BA=0.
\end{align}



Note: we have used the fact that $A, B$ are square matrices when we insert $I$ between $BA$.


Thursday, July 25, 2019

calculus - Evaluating the integral $\int_0^\infty \frac{\sin x}{x} \,\mathrm dx = \frac{\pi}{2}$?




A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that the given integral:
$$\displaystyle\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$$



Well, can anyone prove this without using Residue theory. I actually thought of doing this:
$$\int_0^\infty \frac{\sin x} x \, dx = \lim_{t \to \infty} \int_0^t \frac{1}{x} \left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots \right) \,\mathrm dx$$
but I don't see how $\pi$ comes here, since we need the answer to be equal to $\dfrac{\pi}{2}$.


Answer



Here's another way of finishing off Derek's argument. He proves
$$\int_0^{\pi/2}\frac{\sin(2n+1)x}{\sin x}dx=\frac\pi2.$$

Let
$$I_n=\int_0^{\pi/2}\frac{\sin(2n+1)x}{x}dx=
\int_0^{(2n+1)\pi/2}\frac{\sin x}{x}dx.$$
Let
$$D_n=\frac\pi2-I_n=\int_0^{\pi/2}f(x)\sin(2n+1)x\ dx$$
where
$$f(x)=\frac1{\sin x}-\frac1x.$$
We need the fact that if we define $f(0)=0$ then $f$ has a continuous
derivative on the interval $[0,\pi/2]$. Integration by parts yields
$$D_n=\frac1{2n+1}\int_0^{\pi/2}f'(x)\cos(2n+1)x\ dx=O(1/n).$$

Hence $I_n\to\pi/2$ and we conclude that
$$\int_0^\infty\frac{\sin x}{x}dx=\lim_{n\to\infty}I_n=\frac\pi2.$$


algebra precalculus - Spivak's Calculus (Chapter 2, Exercise 17)



I am having trouble completing exercise 17 of chapter 2 of Spivak's Calculus. In this exercise, the reader is asked to prove that for all natural numbers $n$ and $p$, there exist real numbers $\alpha_1,\ldots,\alpha_p$ such that:
$$ \sum_{k = 1}^n k^p ~=~ \frac{n^{p + 1}}{p + 1} + \sum_{i = 1}^p \alpha_in^i$$
The attempted proof of the identity that I devised uses the binomial theorem with $p$ and $n$ fixed, but does not use complete induction. The proof of the identity provided by Spivak uses the binomial theorem with $n$ fixed, but uses complete induction on $p$ to complete the proof. What follows is my attempted proof.



In this proof, I apply $(x + 1)^{y + 1}$ and $(n + 1)^{p + 1}$ to the binomial theorem in order to derive the first and second terms of the sum on the left-hand side of the identity to be proved. First, note that applying $(x + 1)^{y + 1}$ to the binomial theorem, where $x$ is a real number and $y$ a natural number, demonstrates that the predicate $\phi(x,y)$ is true for all real numbers $x$ and natural numbers $y$:
$$ \phi(x,y) ~\equiv~ (x + 1)^{y + 1} - x^{y + 1} ~=~ (y + 1)x^y + \sum_{i = 0}^{y - 1} {y + 1 \choose i} x^i$$
Now, suppose that $n$ and $p$ are natural numbers. Then the sum of the left-hand sides of the identities $\phi(k,p)$ and the sum of the right-hand sides of the identities $\phi(k,p)$, for $k = 1,\ldots,n$, are equal:

$$(n + 1)^{p + 1} - 1 ~=~ \sum_{k = 1}^n (p + 1)k^p + \sum_{k = 1}^n \left ( \sum_{i = 0}^{p - 1} {p + 1 \choose i} k^i \right )$$
Rewriting $(n + 1)^{p + 1}$ using the identity $\phi(n,p)$ and dividing both sides of the identity by $p + 1$ gives
$$ \sum_{k = 1}^n k^p ~=~ \frac{n^{p + 1}}{p + 1} + \sum_{i = 0}^p {p + 1 \choose i} \left ( \frac{1}{p + 1}\right ) n^i - \left [\frac{1}{p + 1} + \sum_{k = 1}^n \left ( \sum_{i = 0}^{p - 1} {p + 1 \choose i} \frac{k^i}{p + 1} \right ) \right ] $$
Assuming that I didn't make any algebraic errors, I have shown that there are $\alpha_1,\ldots,\alpha_p$ such that the sum of the $p$th powers of the first $n$ natural numbers can be written in the required way. On the other hand, Spivak derives an identity containing the sums of the $r$th powers of the first $n$ natural numbers, for $r \leq p$, then applies an induction hypothesis to obtain the case for $p + 1$. I'm not sure if my proof is wrong, and whether I am missing something that would require a proof by induction. Is there an error?



For reference: I wrote functions that compute the sum of the $p$th powers of the first $n$ numbers, and a version of the last identity derived before expanding $(n + 1)^{p + 1}$ using $\phi(n,p)$, in Haskell. Both computed the same numbers for all values of $n \leq 100$ and $p \leq 10$, so for a few numbers there seems to be no problem. Of course, there are more numbers than the number of cases that I tested.


Answer



He tells you to use induction. In my copy, he just says:





Use the methods of problem $6$ to prove that $$\sum_{k=0}^n k^p$$ may be written in the form



$$\frac{n^{p+1}}{p+1}+An^{p}+Bn^{p-1}+Cn^{p-2}+\dots$$




The really important thing here is



$$\frac{n^{p+1}}{p+1}$$



and later on you'll find out why.




Basically, he hints on using the "recursive" trick to obtaining



$$S_{p+1}$$ from $S_p,\dots,S_1$



where $$S_p=\sum_{k=0}^n k^p$$



Say we want $$S_2=\sum_{k=0}^n k^2$$



Then we note that $$(k+1)^3=k^3+3k^2+3k+1$$




So that



$$(k+1)^3-k^3=3k^2+3k+1$$



Now we sum $k=1,\dots,n$.



$$\sum_{k=1}^n(k+1)^3-k^3=3\sum_{k=1}^nk^2+3\sum_{k=1}^nk+\sum_{k=1}^n1$$



$$(n+1)^3-1^3=3\sum_{k=1}^nk^2+3\frac{n(n+1)}{2}+n$$




$$\frac{{{{(n + 1)}^3}}}{3} - \frac{{n + 1}}{2} - \frac{{n(n + 1)}}{2} = \sum\limits_{k = 1}^n {{k^2}} $$



(Expansion aside).



So the idea is that you assume the result true for $p=1,\dots,l$ and prove it for $l+1$. Note you shouldn't be too explicit about the coefficients, and the induction is to be done on $p$, not on $n$.


integration - Laplace transform of $\frac{1}{t}$ and resulting improper integral



Could someone please explain how
$$\int_0^{\infty}\frac{e^{-x}}{x}dx$$ diverges?
This is because the Laplace transform of $\frac{1}{t}$ can be reduced to this integral which has to diverge. But the limit comparison test with $e^{-x}$ shows that the integral converges.



Please help.




Thanks in advance.


Answer



Regarding the second part of the question, to add to carmichael561's answer,



To use the comparison test with $e^{-x}$, we would need
$$\frac{e^{-x}}{x}\le e^{-x}$$
throughout the interval over which the integral takes place.



However, this is true only for $x\gt1$, and hence the test cannot be used.


exponential function - Algebraic expression to the algebraic expressionth power equation



How can you solve problems like $ x^{x-1}=7 $? More generally, how can you solve equations like $(ax+b)^{cx+d}=e$ , where $a,b,c,d,e$ are given?$($Give all the roots, including complex ones$)$


Answer



Interestingly enough, the problem with your first equation lies only in the fact that the exponent is given by $x-1$ and not by $x$. I will demonstrate how to solve this equation for the latter case. First of all rewrite the $x^x$ term in terms of the exponential, and do not forget to assume a complex-valued logarithm, to get




$$\begin{align*}
x^x&=7\\
e^{x\log(x)+2\pi i n}&=7\\
x\log(x)+2\pi i n&=\log(7)\\
\log(x)e^{\log(x)}&=\log(7)-2\pi i n\\
\log(x)&=W(\log(7)-2\pi i n)
\end{align*}$$





$$\therefore~x=e^{W(\log(7)-2\pi i n)}~~~n\in\mathbb Z$$




I have doubts that one can deduce a general formula for arbitrary $a,b,c,d,e$ $($just take $a=c=1$, $b=0$, $d=-1$ and $e=7$ to reproduce your first equation$)$. Anyway, considering $a=c$ and $b=d$ it is indeed possible, since this is basically the same as $x^x$ and can be solved using the Lambert W-function, but note that you have to consider the different branches of this function with regard to the values of $a,b$ and $e$.



From here on I have to admit that I do not have enough experience with the Lambert W-function to give a detailed outline of the different branches and why they are important.
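As a numeric sanity check of the principal branch ($n=0$), one can use SciPy's implementation of the Lambert W function (a sketch, not part of the derivation above):

    import numpy as np
    from scipy.special import lambertw

    x = np.exp(lambertw(np.log(7)))   # principal-branch solution of x**x = 7
    print(x, x ** x)                  # x ** x comes out close to 7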


Wednesday, July 24, 2019

calculus - Clarify and justify how get the derivative of the Laplace transform of the Buchstab function



I would like to justify that the derivative with respect to $s$ of the Laplace transform of the Buchstab function is
$$\int_1^\infty u\omega(u)e^{-su}du=\frac{e^{-s}}{s}\exp\left(\int_0^\infty \frac{e^{-t}}{t}dt\right)$$




for $s>0$.




Question. Was my deduction right? Please can you justify it rigorously? Thanks in advance.




You can take the definition of the Buchstab function and its Laplace transform from Tao's blog What's new, I mean Exercise 28 of 254A, Supplement 4: Probabilistic models and heuristics for primes, and take the required theorems from Wikipedia's Differentiation under the integral sign. I've computed the derivative



$$\frac{d}{ds}\int_0^\infty \frac{e^{-t}}{t}dt$$

with Wolfram Alpha because I had doubts about how to work with the upper limit, since it is infinite (also with calculus, I mean the chain rule, and the LHS by another theorem). I need these calculations since I would like to study more calculations for this function, like trying integration by parts (notice that a simplification is possible if one uses the related differential equation). Then can you help justify the right statement, I mean both sides of the previous derivative of the Laplace transform of this special function with respect to $s$? Also, if you want, clarify whether on the LHS the interval of integration is $\int_0^\infty$ or $\int_1^\infty$.


Answer



The Laplace transform that you are looking for is given, of course, by the formula



$$\mathcal L (\omega) (s) = \int \limits _0 ^\infty \omega (u) \ \Bbb e ^{-su} \ \Bbb d u .$$



The problem is that the definition of Buchstab's function (the series in Tao's Ex. 28.i) is almost useless for computations. Fortunately, there comes Ex. 28.iii which gives the following formula:



$$u \omega (u) = 1_{(1,\infty)} (u) + \int \limits _0 ^u 1_{(1,\infty)} (t) \ \omega (u-t) \ \Bbb d t .$$




We have, therefore, to produce an $u \omega (u)$ inside Laplace's transform, and the obvious way to to this is to derive with respect to $s$:



$$\mathcal L (\omega) ' (s) = \int \limits _0 ^\infty -u \omega (u) \ \Bbb e ^{-su} \ \Bbb d u = - \int \limits _0 ^\infty \left( 1_{(1,\infty)} (u) + \int \limits _0 ^u 1_{(1,\infty)} (t) \ \omega (u-t) \ \Bbb d t \right) \ \Bbb e ^{-su} \ \Bbb d u = \\
- \int \limits _1 ^\infty \Bbb e ^{-su} \ \Bbb d u - \int \limits _0 ^\infty \left( \int \limits _0 ^u 1_{(1,\infty)} (t) \ \omega (u-t) \ \Bbb d t \right) \ \Bbb e ^{-su} \ \Bbb d u = \\
- \frac {\Bbb e ^{-s}} s - \int \limits _1 ^\infty \left( \int \limits _1 ^u \omega (u-t) \ \Bbb d t \right) \ \Bbb e ^{-su} \ \Bbb d u = - \frac {\Bbb e ^{-s}} s - \int \limits _1 ^\infty \left( \int \limits _t ^\infty \omega (u-t) \ \Bbb d u \right) \ \Bbb e ^{-su} \ \Bbb d t = \\
- \frac {\Bbb e ^{-s}} s - \int \limits _1 ^\infty \left( \int \limits _0 ^\infty \omega (v) \ \Bbb d v \right) \ \Bbb e ^{-s(t+v)} \ \Bbb d t = - \frac {\Bbb e ^{-s}} s - \int \limits _1 ^\infty \Bbb e ^{-st} \ \Bbb d t \ \mathcal L (\omega) (s) = \\
- \frac {\Bbb e ^{-s}} s \big( \mathcal L (\omega) (s) + 1 \big) .$$



This is an elementary differential equation which can be rewritten as




$$\frac {\mathcal L (\omega) ' (s)} {\mathcal L (\omega) (s) + 1} = - \frac {\Bbb e ^{-s}} s ,$$



whence it follows that, for arbitrary but fixed $\sigma > 0$,



$$\ln \big( \mathcal L (\omega) (s) + 1 \big) - \ln \big( \mathcal L (\omega) (\sigma) + 1 \big) = - \int \limits _\sigma ^s \frac {\Bbb e ^{-t}} t \ \Bbb d t $$



so finally, after exponentiating and rearranging a bit,



$$\mathcal L (\omega) (s) = \exp \left( - \int \limits _\sigma ^s \frac {\Bbb e ^{-t}} t \ \Bbb d t \right) \big( \mathcal L (\omega) (\sigma) + 1 \big) - 1 .$$




It is now easy to derive with respect to $s$ and get



$$\mathcal L (\omega) ' (s) = - \frac {\Bbb e ^{-s}} s \ \exp \left( - \int \limits _\sigma ^s \frac {\Bbb e ^{-t}} t \ \Bbb d t \right) \big( \mathcal L (\omega) (\sigma) + 1 \big) .$$



Your own result is very close to being correct, having the following errors:




  • the minus sign missing from the fraction (both inside and outside the exponential)


  • the endpoints of your integral in the RHS are $0$ and $\infty$; $0$ clearly cannot be, because $\frac 1 s$ is not integrable close to $s=0$ (the numerator $\Bbb e ^{-s}$ not raising any problem)


  • the factor $\mathcal L (\omega) (\sigma) + 1$ has to be there, playing the role of the initial condition for that differential equation; unfortunately, with $\omega$ being so ugly, there is no obvious $\sigma$ for which $\mathcal L (\omega) (\sigma)$ could have a simple value, therefore we have to leave it there.




induction - Proving that two summations are equivalent: $\sum_{i=1}^n i^3 = (\sum_{i=1}^n i)^2$





Give a constructive proof to show that for all $n \geq 1$ ,



$\sum\limits_{i=1}^n i^3 = (\sum\limits_{i=1}^n i)^2$



Observe that $(n+1)^4 - n^4 = 4n^3 + 6n^2 + 4n + 1$ .







Now, the two following equalities are obvious:



$\sum\limits_{i=1}^n i^3 = 1^3 + 2^3 + 3^3 + ... + n^3$



$(\sum\limits_{i=1}^n i)^2 = (1 + 2 + 3 + ... + n)^2$



And they appear to be equal given the first few test cases:




$\sum\limits_{i=1}^n i^3 = A(n)$




  • $A(1) = 1^3 = 1$

  • $A(2) = 1^3 + 2^3 = 1 + 8 = 9$

  • $A(3) = 1^3 + 2^3 + 3^3 = 9 + 27 = 36$



$(\sum\limits_{i=1}^n i)^2 = B(n)$





  • $B(1) = (1)^2 = 1$

  • $B(2) = (1 + 2)^2 =9 $

  • $B(3) = (1 + 2 + 3)^2 = 36$



Now, I am thinking of finding the closed-forms for both functions in the hopes that they are indeed the same. Then I would prove those closed forms to work by induction. But:





  1. I don't know if that would be a sound way to do it.

  2. I don't know if this would even qualify as constructive, as the question requests.



As you may tell, I am no math major. I am a Computer Science major, though. This is a computing fundamentals class. I took discrete 1.5 years ago, so my knowledge is about as fresh as a litter box. I've been in quite a rut for a few hours over this.


Answer



Your goal is to prove the statement $S(n)$ for all $n\geq 1$ where
$$
S(n) : 1^3 + 2^3 +3^3 +\cdots + n^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$

Using $\Sigma$-notation, we may rewrite $S(n)$ as follows:
$$
S(n) : \sum_{r=1}^n r^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Base step: The statement $S(1)$ says that $(1)^3 = (1)^2$ which is true because $1=1$.



Inductive step [$S(k)\to S(k+1)$]: Fix some $k\geq 1$, where $k\in\mathbb{N}$. Assume that
$$
S(k) : \sum_{r=1}^k r^3 = \left[\frac{k(k+1)}{2}\right]^2
$$

holds. To be proved is that
$$
S(k+1) : \sum_{r=1}^{k+1} r^3 = \left[\frac{(k+1)((k+1)+1)}{2}\right]^2
$$
follows. Beginning with the left side of $S(k+1)$,
\begin{align}
\sum_{r=1}^{k+1}r^3 &= \sum_{r=1}^k r^3 + (k+1)^3\tag{evaluate sum for $i=k+1$}\\[1em]
&= \left[\frac{k(k+1)}{2}\right]^2+(k+1)^3\tag{by $S(k)$}\\[1em]
&= \frac{(k+1)^2}{4}[k^2+4(k+1)]\tag{factor out $\frac{(k+1)^2}{4}$}\\[1em]
&= \frac{(k+1)^2}{4}[(k+2)(k+2)]\tag{factor quadratic}\\[1em]

&= \frac{(k+1)^2(k+2)^2}{4}\tag{multiply and rearrange}\\[1em]
&= \left[\frac{(k+1)(k+2)}{2}\right]^2\tag{rearrange}\\[1em]
&= \left[\frac{(k+1)((k+1)+1)}{2}\right]^2,\tag{rearrange}
\end{align}
one arrives at the right side of $S(k+1)$, thereby showing that $S(k+1)$ is also true, completing the inductive step.



By mathematical induction, it is proved that for all $n\geq 1$, where $n\in\mathbb{N}$, that the statement $S(n)$ is true.



Note: The step where $\dfrac{(k+1)^2}{4}$ is factored out is an important one. If we do not factor this out and, instead, choose to expand $(k+1)^3$, the problem becomes much more messy than it needs to be.


real analysis - Convergence from $L^p$ to $L^\infty$




If $f$ is a function such that $f \in L^\infty \cap L^{p_0}$, where $L^\infty$ is the space of essentially bounded functions and $0 < p_0 < \infty$, show that $\|f\|_{L^p} \to \|f\|_{L^\infty}$ as $p \to \infty$, where $\|f\|_{L^\infty}$ is the least $M \in \mathbb{R}$ such that $|f(x)| \le M$ for almost every $x \in X$.



The hint says to use the monotone convergence theorem, but I can't even see any pointwise convergence of functions.
Any help is appreciated.


Answer



Hint: Let $M\lt\|f\|_{L^\infty}$ and consider
$$
\int_{E_M}\left|\frac{f(x)}{M}\right|^p\,\mathrm{d}x

$$
where $E_M=\{x:|f(x)|\gt M\}$. I believe the Monotone Convergence Theorem works here.



Further Hint: $M\lt\|f\|_{L^\infty}$ implies $E_M$ has positive measure. On $E_M$, $\left|\frac{f(x)}{M}\right|^p$ tends to $\infty$ pointwise. MCT says that for some $p$, the integral above exceeds $1$.


calculus - Questions about U - Substitution and Integration

I have two questions:




I just learned about U-Substitution in class, and while I'm able to apply it, I'm a bit confused on some of the theory behind it.



The thing that most confuses me is that if we take the indefinite integral of f '(x) with respect to "x", that means the term inside the integral is actually the multiplication of f '(x) and dx. I always thought that dx term was just there to show that the derivative was taken w.r.t. "x", I didn't know it was being multiplied by f '(x). Why is this the case?



I'm also just very confused on the integral in general. It seems to me that the integral of something can mean many different things (antiderivative, area under a curve). How are definite and indefinite integrals related and where do they come from?



edit: Please look to my comments on KM101's answer to better understand what I'm asking.

real analysis - Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ that has the intermediate value property and is discontinuous everywhere?



Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ that has the intermediate value property and is discontinuous everywhere?




I think it is probable because we can consider
$$ y =
\begin{cases}
\sin \left( \frac{1}{x} \right), & \text{if } x \neq 0, \\
0, & \text{if } x=0.
\end{cases}
$$

This function has intermediate value property but is discontinuous on $x=0$.



Inspired by this example, let $(r_n)$ be an enumeration of the rational numbers, and define
$$ y =
\begin{cases}
\sum_{n=1}^{\infty} \frac{1}{2^n} \left| \sin \left( \frac{1}{x-r_n} \right) \right|, & \text{if } x \notin \mathbb{Q}, \\
0, & \mbox{if }x \in \mathbb{Q}.
\end{cases}
$$




It is easy to see that this function is discontinuous if $x$ is not a rational number. But I can't verify its intermediate value property.

Tuesday, July 23, 2019

sequences and series - Convergence of $\sum_{n=1}^\infty \frac{\sin^2(n)}{n}$




Does the series $$ \sum_{n=1}^\infty \frac{\sin^2(n)}{n} $$
converge?




I've tried to apply some tests, and I don't know how to bound the general term, so I must have missed something. Thanks in advance.


Answer




Hint:



Each interval of the form $\bigl[k\pi+{\pi\over6}, (k+1)\pi-{\pi\over6}\bigr)$ contains an integer $n_k$. We then have, for each $k$, that ${\sin^2(n_k)\over n_k}\ge {(1/2)^2\over (k+1)\pi}$. Now use a comparison test to show your series diverges.


algebra precalculus - Prove by induction $\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$ for $n\ge1$

Prove the following statement $S(n)$ for $n\ge1$:



$$\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$$




To prove the basis, I substitute $1$ for $n$ in $S(n)$:



$$\sum_{i=1}^11^3=1=\frac{1^2(2)^2}{4}$$



Great. For the inductive step, I assume $S(n)$ to be true and prove $S(n+1)$:



$$\sum_{i=1}^{n+1}i^3=\frac{(n+1)^2(n+2)^2}{4}$$



Considering the sum on the left side:




$$\sum_{i=1}^{n+1}i^3=\sum_{i=1}^ni^3+(n+1)^3$$



I make use of $S(n)$ by substituting its right side for $\sum_{i=1}^ni^3$:



$$\sum_{i=1}^{n+1}i^3=\frac{n^2(n+1)^2}{4}+(n+1)^3$$



This is where I get a little lost. I think I expand the equation to be



$$=\frac{(n^4+2n^3+n^2)}{4}+(n+1)^3$$




but I'm not totally confident about that. Can anyone provide some guidance?

How to find basic algebra conversion formula?



I'm doing some programming for a video game in which the player characters and monster enemies can move at different speeds on a 2-dimensional, cell-based tile map. Examples of these speeds are as follows:



1600 ms = 0.625 tiles/s
1000 ms = 1 tile/s
800 ms = 1.25 tiles/s
400 ms = 2.5 tiles/s

100 ms = 10 tiles/s
25 ms = 40 tiles/s


The same mathematical formula is used to convert every one of the above equations (feed the formula the first value, out pops the second). Problem is my math skills are terrible and I can't figure out how to determine what that formula actually is. Help!?


Answer



Second value = 1000/first value.
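As a code sketch (the function name is mine):

    def tiles_per_second(ms_per_tile):
        # Invert milliseconds-per-tile into tiles-per-second.
        return 1000 / ms_per_tile

    for ms in (1600, 1000, 800, 400, 100, 25):
        print(ms, "ms ->", tiles_per_second(ms), "tiles/s")   # matches the table above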


linear algebra - Determinant of a special $n\times n$ matrix


Compute the determinant of the $n\times n$ matrix: $$ \begin{pmatrix} 2 & 1 & \ldots & 1 \\ 1 & 2 & \ldots & 1\\ \vdots & \vdots & \ddots & \vdots\\ 1 & 1 &\ldots & 2 \end{pmatrix} $$


For $n=2$, I have$$ \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} $$


Then $det = 3$.


For $n=3$, we have $$ \begin{pmatrix} 2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \\ \end{pmatrix} $$


Then $det = 4$.


For $n=4$ again we have


$$ \begin{pmatrix} 2 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1\\ 1 & 1 & 2 & 1\\ 1 & 1 & 1 & 2 \end{pmatrix} $$ Then $det = 5$


How can I prove that the determinant of the $n\times n$ matrix is $n+1$?



Answer



A standard result (http://en.wikipedia.org/wiki/Matrix_determinant_lemma) is $\det(I+AB) = \det(I+BA)$.


Since the matrix above can be written as $I+ e e^T$, where $e$ is a vector of ones, we have $\det(I+ e e^T) = \det(1+ e^T e) = 1+e^Te = n+1$.
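A quick numeric confirmation in Python (illustration only):

    import numpy as np

    for n in range(2, 7):
        A = np.ones((n, n)) + np.eye(n)       # the all-ones matrix plus the identity
        print(n, round(np.linalg.det(A)))     # prints n + 1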


Monday, July 22, 2019

algebra precalculus - How to know that $a^3+b^3 = (a+b)(a^2-ab+b^2)$



Is there a way of go from $a^3+b^3$ to $(a+b)(a^2-ab+b^2)$ other than know the property by heart?


Answer



If you want to know if $a^n + b^n$ is divisible by $a+b$ (or by $a-b$, perhaps), you can always try long division, whether explicitly or in your head. I can't figure out a way to do the LaTeX or ASCII art for it here to do it explicitly, so I'll show you how one would do it "in one's head".



For example, for $a^3+b^3$, to divide $a^3+b^3$ by $a+b$, start by writing $a^3+b^3= (a+b)(\cdots)$. Then: we need to multiply $a$ by $a^2$ to get the $a^3$, so we will get $a^3+b^3=(a+b)(a^2+\cdots)$. The $a^2$ makes an unwanted $a^2b$ when multiplied by $b$, so we need to get rid of it; how? We multiply $a$ by $-ab$. So now we have
$$a^3+b^3 = (a+b)(a^2-ab+\cdots).$$
But now you have an unwanted $-ab^2$ you get when you multiply $b$ by $-ab$; to get rid of that $-ab^2$, you have to "create" an $ab^2$. How? We multiply $a$ by $b^2$. So now we have:

$$a^3 + b^3 = (a+b)(a^2-ab+b^2+\cdots)$$
and then we notice that it comes out exactly, since we do want the $b^3$ that we get when we multiply $b^2$ by $b$; so
$$a^3 + b^3 = (a+b)(a^2-ab+b^2).$$



If the expression you want is not divisible by what you are trying, you'll run into problems which require a "remainder". For instance, if you tried to do the same thing with $a^4+b^4$, you would start with $a^4+b^4 = (a+b)(a^3+\cdots)$; then to get rid of the extra $a^3b$, we subtract $a^2b$: $a^4+b^4 = (a+b)(a^3 - a^2b+\cdots)$. Now, to get rid of the unwanted $-a^2b^2$, we add $ab^2$, to get $a^4+b^4 = (a+b)(a^3-a^2b+ab^2+\cdots)$. Now, to get rid of the unwanted $ab^3$, we subtract $b^3$, to get
$$a^4+b^4 = (a+b)(a^3-a^2b+ab^2 - b^3+\cdots).$$
At this point we notice that we get an unwanted $-b^4$ (unwanted because we want $+b^4$, not $-b^4$). There is no way to get rid of it with the $a$, so this will be a "remainder". We need to cancel it out (by adding $b^4$) and then add what is still missing (another $b^4$), so
$$a^4 + b^4 = (a+b)(a^3-a^2b+ab^2 -b^3) + 2b^4.$$
(Which, by the way, tells you that $a^4-b^4 = (a+b)(a^3-a^2b+ab^2-b^3)$, by moving the $2b^4$ to the left hand side).


Sunday, July 21, 2019

Probability of rolling all $6$ die faces



I've been struggling with this for over an hour now and I still have no good results, the question is as follows:





What's the probability of getting all the numbers from $1$ to $6$ by rolling $10$ dice simultaneously?




Can you give any hints or solutions? This problem seems really simple but I feel like I'm blind to the solution.


Answer



The way I see this problem, I'd consider two finite sets, namely the set comprised by the $10$ dice (denoted by $\Theta$) and the set of all the possible outcomes (denoted by $\Omega$), in this particular situation, the six faces of the dice.



Therefore, the apparent ambiguity of the problem is significantly reduced by considering all possible mappings of the form $f:\Theta \to \Omega$ that are surjective. Why exactly surjections?



Recall that the definition of surjection implies that all values in the codomain must be hit at least once through the mapping of elements in the domain of the function. Under the circumstances, it's exactly what we're interested in, since we want to count exclusively all instances in which all the possible $6$ faces appear.




The number of surjective mappings can be found using the following identity :



$$m^n - \binom {m} {1}(m-1)^n +\binom {m} {2}(m-2)^n-\binom {m} {3}(m-3)^n + \cdots $$



(where $m$ denotes the number of elements in $\Omega$ and $n$ the number of elements in $\Theta$ ).



Let $A$ represent the happy event; given the facts I've presented, you should be able to get: $$\Pr(A)\approx0.27$$
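For completeness, the inclusion-exclusion count above is easy to evaluate in Python:

    from math import comb

    m, n = 6, 10   # m faces, n dice
    surjections = sum((-1) ** i * comb(m, i) * (m - i) ** n for i in range(m + 1))
    print(surjections / m ** n)   # about 0.2718, i.e. Pr(A) is about 0.27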


Saturday, July 20, 2019

Limit of a sequence involving root of a factorial: $\lim_{n \to \infty} \frac{n}{\sqrt[n]{n!}}$




I need to check if
$$\lim_{n \to \infty} \frac{n}{ \sqrt [n]{n!}}$$ converges or not. Additionally, I wanted to show that the sequence is monotonically increasing in $n$, and so the limit exists. Any help is appreciated. I had tried taking logs and manipulating the sequence, but I could not prove monotonicity this way.



Answer



Use Stirling's approximation:
$ n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n $
and you'll get
$$
\lim_{n \rightarrow \infty} \frac{n}{(n!)^{1/n}}
=\lim_{n \rightarrow \infty} \frac{n}{(\sqrt{2 \pi n} \left(\frac{n}{e}\right)^n)^{1/n}}
=\lim_{n \rightarrow \infty} \frac{n}{({2 \pi n})^{1/2n} \left(\frac{n}{e}\right)}
=\lim_{n \rightarrow \infty} \frac{e}{({2 \pi n})^{1/2n} }=e,
$$

because $\lim_{n\to \infty} ({2 \pi n})^{1/2n}= \lim_{n\to \infty} n^{1/n}=1$.


elementary set theory - Prove that a set cannot have two different sizes $m$ and $n$, $m \neq n$.

We define the number of elements in set by bijections as follows:



$|X| = n$ means that there exists a bijection from X to the set $\{1,2 \dots, n\}$.



I already showed that:


  • if $X$ and $Y$ have same size, then there exists a bijection from $X$ to $Y$

  • if $X$ has size $n$, and there exist a bijection from $X$ to $Y$, then $Y$ has size $n$ too.


Now, I want to prove following:



Prove that a set cannot have two different sizes $m$ and $n$, $m \neq n$.


Be careful not to use the intuitive notion of "size" but only the definition via bijections. Proceed by induction.



The book gives the solution (without the base case), but I am not sure about it. So I write it down together with my explanation. I number each step; if you do not agree, then please write why.



PROOF: It suffices to prove that there is no bijection from the set $\{1,2, \dots, n\}$ onto a proper subset $A \subset \{1,2, \dots, n\}$.



(1) The reason why this suffices is that, by definition, if there is no bijection between two sets, then the sizes of these sets cannot be the same.




Proceed by induction on $n$. The case $n=1$ is clear.



(2) There is only one proper subset of $\{1\}$, namely $\emptyset$. So we need to show there is no bijection from $\{1\}$ to $\emptyset$. So let $f: \{1\} \rightarrow \emptyset$. Since $\{1\} \times \emptyset = \emptyset = f$, $f$ is not a function, let alone a bijection.



Suppose there were such a bijection $f: \{1,2,\dots,n\} \rightarrow A, n > 1$.



(3) From this sentence, I know that proof is given by contradiction.



If $f(n) = n$ or $n \notin A$ then $f$ restricted to $\{1,2, \dots, n-1\}$ gives a bijection of $\{1,2\dots,n-1\}$ to its proper subset.




(4) The induction hypothesis is not explicitly written in the proof, but it may be stated as "Suppose that for $n-1$ there is no bijection from $\{1,2, \dots,n-1\}$ to any proper subset $A \subset \{1,2, \dots,n-1\}$". So we reach a contradiction in this case.



If $f(n) = i \neq n$ and $f(j) = n$ for some $j < n$ then define $g(j) = i, g(k) = f(k)$ for $k\neq j, n$. This g is again a bijection of $\{1,2, \dots, n-1\}$ on its proper subset.



(5) Again, we reached a contradiction. Clearly $f(n) = n$ or $f(n) \neq n$, so we have considered all possibilities.

matrices - Determinant of a matrix with specific main diagonal


Determine the determinant of the following matrix: $$A = \begin{pmatrix}1+a_1 &1 &\cdots &1 \\ 1 &1 +a_2&\ddots& \vdots \\ \vdots & \ddots &\ddots&1 \\ 1 & \cdots &1& 1 +a_n \end{pmatrix}, a_i \in \mathbb R, i=1,2,\ldots, n.$$


My attempt: Set $I = \{ i \mid a_i =0 , i=1,\ldots,n\}$. I found that $$\det A = \begin{cases} 0 \text{ if } |I| \ge 2 \\\\ \prod_{j=1, j\ne i}^n a_j \text{ if } |I| =1 \\\\ \prod_{j=1}^n a_j\left( 1+ \sum_{j=1}^n \dfrac1{a_j} \right) \text{ if } |I| =0. \end{cases}.$$ However I am given a different answer. Any comment would be appreciated.


Answer




Clearly the answer should be totally symmetric in the $a_i$, but this is one of those annoying problems where we seem to have to live without the symmetry to actually solve it.


Your first line is obviously correct, since you then have two identical rows.


If, WLOG, $a_1=0$ and the others are nonzero, you use row operations to subtract the first row from every other row, which gives you an upper triangular matrix, and I agree with your second answer.


Now if $a_1 \neq 0$, doing the same thing gives you $$ \det{A} = \begin{vmatrix} 1+a_1 & 1 & 1 & \cdots & 1 \\ -a_1 & a_2 & 0 & \cdots & 0 \\ -a_1 & 0 & a_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_1 & 0 & 0 & \cdots & a_n \end{vmatrix} $$ Now, for each $i \geq 2$, you can add $\frac{a_1}{a_i}$ times the $i$th column to the first column; this clears the $-a_1$ entries and leaves an upper triangular determinant, giving $$ \left(1+a_1+ a_1\sum_{i=2}^n \frac{1}{a_i}\right)\left( \prod_{i=2}^n a_i \right) = \left( \prod_{i=1}^n a_i \right) \left( 1 + \sum_{i=1}^n \frac{1}{a_i} \right), $$ which agrees with your answer.

To get rid of the divisions, you can multiply through to get $$ \left( \prod_{i=1}^n a_i \right) + \sum_{j=1}^n \prod_{i \neq j} a_i, $$ i.e. the product of all the $a_i$ plus the elementary symmetric polynomial of degree $n-1$ in the $a_i$. For example, for $n=3$, $$ \det{A} = a_1 a_2 a_3 + a_1 a_2 + a_2 a_3 + a_3 a_1. $$

This formula in this form still works for the zero cases: note that if two of the $a_i$ are zero, then all the products are zero, because each product excludes only one factor; and if exactly one is zero, then the only term to survive is the product that excludes it.
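For peace of mind, the closed form can be compared against a numerical determinant (a small illustrative sketch with random values for the $a_i$):

```python
# Compare det(A) with prod(a_i) + sum_j prod_{i != j} a_i numerically.
import numpy as np

rng = np.random.default_rng(0)
n = 5
a = rng.uniform(-2, 2, size=n)

A = np.ones((n, n)) + np.diag(a)   # all-ones matrix plus diag(a_1, ..., a_n)
closed_form = np.prod(a) + sum(np.prod(np.delete(a, j)) for j in range(n))

print(np.linalg.det(A), closed_form)   # the two values should agree
```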


linear algebra - Characteristic Polynomial on 4 by 4 matrix




I have a matrix and I need to prove that it's diagonalizable for some values of a variable, or not diagonalizable at all.



My thoughts are that that the easiest way to do so is by proving that the characteristic polynomial of the matrix is of the form:



$p(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_K)$ where $\lambda_1, \lambda_2, \ldots, \lambda_K$ are the eigenvalues of the matrix.



In order to find the polynomial though I'm trying to solve the equation $\det (A- \lambda Ι_n) = 0$, where $A$ is my matrix.



That equation seems unsolvable for my $4 \times 4$ matrix, which is:
$$\begin{pmatrix} 4 + l & 11 & 4 & 3 + l\\ 11 & 13 & 5 & 8\\ 4 & 5 & 7 & 12\\ 3 + l & 8 & 12 & 18 \end{pmatrix}$$



My questions are these:



Is the format of my characteristic polynomial correct? Are $\lambda_1,\lambda_2, \ldots, \lambda_K$ supposed to be $A$'s eigenvalues? Is there any simpler way to produce the desired result?


Answer




I'm a bit confused here - is this a trick question, or am I just not seeing something obvious?



The matrix that I see is a real symmetric matrix for all $l$. Hence, it is diagonalizable (even by orthogonal matrices) via the spectral theorem. Calculating the characteristic polynomial seems to be overkill...
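If a concrete illustration helps, here is a small numerical sketch (the value of $l$ is arbitrary): `numpy.linalg.eigh` produces an orthogonal $Q$ with $Q^T A Q$ diagonal, exactly as the spectral theorem promises.

```python
# The matrix is symmetric for every real l, so eigh diagonalizes it
# with an orthogonal change of basis.
import numpy as np

l = 3.7  # any real value works
A = np.array([[4 + l, 11, 4, 3 + l],
              [11,    13, 5, 8],
              [4,      5, 7, 12],
              [3 + l,  8, 12, 18]], dtype=float)

eigvals, Q = np.linalg.eigh(A)
print(np.round(Q.T @ A @ Q, 10))                    # numerically diagonal
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))   # True: A = Q D Q^T
```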


calculus - Taylor Expansion for a Multivariable Function



\begin{align}
f(x_1,\dots,x_d) &= \sum_{n_1=0}^\infty \sum_{n_2=0}^\infty \cdots \sum_{n_d = 0}^\infty
\frac{(x_1-a_1)^{n_1}\cdots (x_d-a_d)^{n_d}}{n_1!\cdots n_d!}\,\left(\frac{\partial^{n_1 + \cdots + n_d}f}{\partial x_1^{n_1}\cdots \partial x_d^{n_d}}\right)(a_1,\dots,a_d) \\
&= f(a_1, \dots,a_d) \\
&\quad + \sum_{j=1}^d \frac{\partial f(a_1, \dots,a_d)}{\partial x_j} (x_j - a_j) \\
&\quad + \sum_{j=1}^d \sum_{k=1}^d \frac{1}{2!} \frac{\partial^2 f(a_1, \dots,a_d)}{\partial x_j \partial x_k} (x_j - a_j)(x_k - a_k) \\
&\quad + \sum_{j=1}^d\sum_{k=1}^d\sum_{l=1}^d \frac{1}{3!} \frac{\partial^3 f(a_1, \dots,a_d)}{\partial x_j \partial x_k \partial x_l} (x_j - a_j)(x_k - a_k)(x_l - a_l) \\
&\quad + \dots
\end{align}



I have read through Wikipedia, and when I saw the above formula, I didn't know how the second equality is justified. Can anybody help me, please? (The first equality is assumed to be true by myself and thus doesn't need to be proved.)


Answer



The second RHS is an enumeration of the first RHS according to the value of $m=n_1+\cdots+n_d$. For $m=0$, one gets one term, which is $f(a_1, \dots,a_d)$. For $m=1$, one gets $d$ terms, which are the products $\frac{\partial f(a_1, \dots,a_d)}{\partial x_j}\cdot(x_j - a_j)$ for each $1\leqslant j\leqslant d$.
More generally, for each $m\geqslant0$, one gets $d^m$ terms, hence the multiple sums from $1$ to $d$ with $m$ sums.



To "sum" the above, one uses the identity
$$
\sum_{n_1=0}^\infty \sum_{n_2=0}^\infty \cdots \sum_{n_d = 0}^\infty A(n_1,\cdots,n_d) =\sum_{m=0}^\infty\sum_{\begin{array}{c}(n_1,\cdots,n_d)\\ n_1+\cdots+n_d=m\end{array}} A(n_1,\cdots,n_d),
$$
with
$$
A(n_1,\cdots,n_d)=\frac{\partial^{n_1+\cdots+n_d} f(a_1, \dots,a_d)}{\partial x_1^{n_1}\cdots \partial x_d^{n_d}}\cdot\prod_{j=1}^d \frac{(x_j - a_j)^{n_j}}{n_j!}.
$$
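To see the enumeration in action, here is a small sympy sketch (the example function and expansion point are arbitrary choices of mine) that assembles all terms with $n_1+n_2 \le 3$ in the case $d=2$:

```python
# Build the degree-3 Taylor polynomial of f(x, y) at (0, 0) by summing
# over all multi-indices (n1, n2) with n1 + n2 <= 3.
import sympy as sp
from itertools import product

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.cos(y)   # arbitrary example function
a = (0, 0)                  # expansion point

taylor = 0
order = 3
for n1, n2 in product(range(order + 1), repeat=2):
    if n1 + n2 <= order:
        coeff = f
        for var, k in ((x, n1), (y, n2)):
            if k:
                coeff = sp.diff(coeff, var, k)
        coeff = coeff.subs({x: a[0], y: a[1]})
        taylor += (coeff * (x - a[0])**n1 * (y - a[1])**n2
                   / (sp.factorial(n1) * sp.factorial(n2)))

print(sp.expand(taylor))
```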


calculus - Limit of $\sin(1/x)$ - why is there no limit?


$$ \lim_{x\to 0+} \sin\left(\frac{1}{x}\right)$$ I know that there is no limit.


But why is there no limit? I tried $x=0.4$, $x=0.3$, $x=0.1$, and it looks like the limit is $0$.


And how can I show that there is no limit? I tried to calculate it like all the other functions, but I got a wrong result and I don't know why:


$$\lim_{x \to 0+} \sin\left(\frac{1}{x}\right) = \sin\left(\frac{1}{0^+}\right) = \sin\left(\frac{1}{\infty}\right) = \sin(0) = 0.$$


Answer



Why is there no limit?


The graphic can help you understand why and suggest you some approach for the proof:


[Graph of $\sin(1/x)$: the oscillations between $-1$ and $1$ become arbitrarily fast as $x \to 0^+$.]



Remark: You have to be careful with tables of values because they can be misleading:


\begin{array}{ c | c c c c } x & \frac{1}{2\pi} & \frac{1}{3\pi} & \frac{1}{4\pi} &\frac{1}{5\pi} \\ \hline \sin\left(\frac{1}{x}\right) & 0 & 0 & 0 & 0 \\ \end{array}


\begin{array}{ c | c c c c } x & \frac{2}{5\pi} & \frac{2}{9\pi} & \frac{2}{13\pi} &\frac{2}{17\pi} \\ \hline \sin\left(\frac{1}{x}\right) & 1 & 1 & 1 & 1 \\ \end{array}


(The tables above are a sketch of the proof - see Theorem 2.4 here.)
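The same idea is easy to see numerically (a small illustrative snippet): the two families of points both tend to $0^+$, yet $\sin(1/x)$ is constantly $0$ along one and constantly $1$ along the other, so no single limit can exist.

```python
# Two sequences tending to 0 from the right with different values of sin(1/x).
import math

for k in range(1, 6):
    x0 = 1 / (k * math.pi)            # sin(1/x0) = sin(k*pi) = 0
    x1 = 2 / ((4 * k + 1) * math.pi)  # sin(1/x1) = sin(pi/2 + 2*k*pi) = 1
    print(f"{x0:.5f} -> {math.sin(1 / x0):+.5f}    "
          f"{x1:.5f} -> {math.sin(1 / x1):+.5f}")
```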


Thursday, July 18, 2019

trigonometry - Induction proof of the identity $\cos x+\cos(2x)+\cdots+\cos(nx) = \frac{\sin(\frac{nx}{2})\cos\frac{(n+1)x}{2}}{\sin(\frac{x}{2})}$





Prove that:$$\cos x+\cos(2x)+\cdots+\cos (nx)=\frac{\sin(\frac{nx}{2})\cos\frac{(n+1)x}{2}}{\sin(\frac{x}{2})}.\ (1)$$




My attempt:$$\sin\left(\frac{x}{2}\right)\sum_{k=1}^{n}\cos{(kx)}$$$$=\sum_{k=1}^{n}\sin\left(\frac{x}{2}\right)\cos{(kx)} $$ (Applying $\sin x\cos y =\frac{1}{2}\left[\sin(x+y)+\sin(x-y)\right]$)
$$=\sum_{k=1}^{n}\frac{1}{2}\sin\left(\frac{x}{2}-kx\right)+\frac{1}{2}\sin\left(\frac{x}{2}+kx\right).$$Multiplying $(1)$ by $\sin\left(\frac{x}{2}\right)$ gives: $$\sum_{k=1}^{n}\sin\left(\frac{x}{2}\right)\cos(kx)=\sin\left(\frac{nx}{2}\right)\cos{\frac{(n+1)x}{2}}$$ $$\Leftrightarrow\sum_{k=1}^{n}\left[\sin\left(\frac{x}{2}-kx\right)+\sin\left(\frac{x}{2}+kx\right)\right]=\sin\left(-\frac{x}{2}\right)+\sin\left(\frac{x}{2}+nx\right).$$Then, by induction on $n$ and using the $(\sin x\sin y)$ and $(\sin x +\sin y)$ formulas, I end up here: $$\sum_{k=1}^{n+1}\sin\left(\frac{x}{2}\right)\cos(kx)=\left[2\sin\left(\frac{nx}{2}\right)\cos\frac{(n+1)x}{2}\right]+\left[2\sin\left(\frac{x}{2}\right)\cos(n+1)x\right].$$ Any help would be appreciated, thanks!


Answer




Here is the induction step: it comes down to proving
\begin{gather*}\frac{\sin \dfrac{nx}{2} \cos\dfrac{(n+1)x}{2}}{\sin\dfrac{x}{2}}+\cos(n+1)x=\frac{\sin\dfrac{(n+1)x}{2}\cos\dfrac{(n+2)x}{2}}{\sin(\dfrac{x}{2})}\\
\text{or}\qquad\sin\dfrac{nx}{2}\cos\dfrac{(n+1)x}{2}+\sin\dfrac{x}{2}\cos(n+1)x=\sin\dfrac{(n+1)x}{2} \cos\dfrac{(n+2)x}{2}
\end{gather*}
Now use the linearisation formulae:
\begin{cases}\displaystyle
\sin\frac{nx}{2}\cos\frac{(n+1)x}{2}=\tfrac12\biggl(\sin\frac{(2n+1)x}{2}-\sin\frac x2\biggr),\\[1ex]
\sin\dfrac{x}{2}\cos(n+1)x=\frac12\biggl(\sin\dfrac{(2n+3)x}{2}-\sin\dfrac{(2n+1)x}{2}\biggr),
\end{cases}
whence the l.h.s. is

$$\frac12\biggl(\sin\dfrac{(2n+3)x}{2}-\sin\frac x2\biggr)=\sin\frac{(n+1)x}2\,\cos\frac{(n+2)x}2$$
by the factorisation formula: $\;\sin p-\sin q=2\sin\dfrac{p-q}2\cos\dfrac{p+q}2$.
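(As a quick spot check of the identity itself, independent of the induction, one can compare both sides numerically for a few random inputs:)

```python
# Numerically compare sum cos(kx) with sin(nx/2)cos((n+1)x/2)/sin(x/2).
import math, random

random.seed(1)
for _ in range(3):
    n = random.randint(1, 20)
    x = random.uniform(0.1, 3.0)
    lhs = sum(math.cos(k * x) for k in range(1, n + 1))
    rhs = math.sin(n * x / 2) * math.cos((n + 1) * x / 2) / math.sin(x / 2)
    print(n, round(lhs, 10), round(rhs, 10))
```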


functional equations - Let $f$ be a continuous function defined on $\mathbb R$ such that $\forall x,y \in \mathbb R: f(x+y)=f(x)+f(y)$

Let $f$ be a continuous function defined on $\mathbb R$ such that $\forall x,y \in \mathbb R :f(x+y)=f(x)+f(y)$



Prove that :
$$\exists a\in \mathbb R , \forall x \in \mathbb R, f(x)=ax$$

discrete mathematics - Calculate $15^{843} \pmod{11}$





Calculate $15^{843} \pmod{11}$




My solution



Fermat's little theorem



Since $15 \equiv 4 \pmod{11}$ and, according to Fermat's Little Theorem,




$$4^{10} \equiv 1 \pmod{11}\;,$$ we shall have



$$15^{843} \equiv 4^{843} \equiv 4^{840} \times 4^3 \equiv (4^{10})^{84} \times 4^3 \equiv 4^3 \equiv 64 \equiv 9 \pmod{11}$$



Is this correct?


Answer




Is this correct?





It is.



Fermat's little theorem is indeed a very useful tool for finding the answer, and in case you have any doubt whether you got that part right, you can verify that:



$$4^{10}=1048576=11\times 95325+1\equiv1\pmod{11}$$



The rest is just using the properties of modular arithmetic to replace some terms by congruent but simpler terms. Done explicitly, and correctly, so I see no reason to doubt the result.
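(For yet another cross-check, Python's built-in three-argument `pow` performs modular exponentiation directly:)

```python
print(pow(15, 843, 11))  # 9, matching the hand computation
print(pow(4, 10, 11))    # 1, Fermat's little theorem for p = 11
```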


Wednesday, July 17, 2019

elementary number theory - the exponent of the highest power of p dividing n!

The formula for the exponent of the highest power of a prime $p$ dividing $n!$ is $\sum_{k\ge 1} \left\lfloor \frac{n}{p^k} \right\rfloor$, but in this question $n=1000!$ (really, with the factorial) and $p=5$.



When I used Wolfram Alpha, I panicked because the number has $2,567$ decimal digits.



I think if I write this number I'd need paper all the way to the Amazon.



Perhaps I misunderstand the formula?
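For what it's worth, the computation is at least mechanically feasible, even for this gigantic $n$ (a sketch using Python's arbitrary-precision integers; the helper name is mine):

```python
# Legendre's formula: exponent of the prime p in n! is sum of floor(n / p^k).
import math

def legendre(n: int, p: int) -> int:
    exponent, power = 0, p
    while power <= n:
        exponent += n // power
        power *= p
    return exponent

N = math.factorial(1000)   # N = 1000!, about 2568 decimal digits
e = legendre(N, 5)
print(len(str(e)), "decimal digits in the exponent")
```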

sequences and series - Limits Problem : $\lim_{n \to \infty}[(1+\frac{1}{n})(1+\frac{2}{n})\cdots(1+\frac{n}{n})]^{\frac{1}{n}}$ is equal to..




Problem:



How to find the following limit :




$$\lim_{n \to \infty}[(1+\frac{1}{n})(1+\frac{2}{n})\cdots(1+\frac{n}{n})]^{\frac{1}{n}}$$ is equal to



(a) $\frac{4}{e}$



(b) $\frac{3}{e}$



(c) $\frac{1}{e}$



(d) $e$




Please suggest how to proceed with this problem. Thanks!


Answer



$$\log\left(\lim_{n \to \infty}[(1+\frac{1}{n})(1+\frac{2}{n})\cdots(1+\frac{n}{n})]^{\frac{1}{n}}\right) =\lim_{n \to \infty}\frac{\log(1+\frac{1}{n})+\log(1+\frac{2}{n})+\cdots+\log(1+\frac{n}{n})}{n} =\int_{0}^1 \log(1+x)dx=\int_{1}^2 \log(x)dx= [x\log(x)-x]_{x=1}^{x=2}=2\log(2)-1$$



This yields the solution $e^{2\log(2)-1}=4/e$.
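A quick numerical check of the Riemann-sum step (illustrative only):

```python
# The n-th root of prod_{k=1..n} (1 + k/n) should approach 4/e.
import math

for n in (10, 100, 1000, 100000):
    log_mean = sum(math.log(1 + k / n) for k in range(1, n + 1)) / n
    print(n, math.exp(log_mean))

print("4/e =", 4 / math.e)
```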


sequences and series - Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)



As I have heard, people did not trust Euler when he first discovered the formula (the solution of the Basel problem)

$$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$
However, Euler was Euler and he gave other proofs.



I believe many of you know some nice proofs of this, can you please share it with us?


Answer



OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from the book" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9
(EDIT: ...which is actually the proof that I read in Aigner & Ziegler).



When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus
$$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$

Note that $1/\tan^2 x = 1/\sin^2 x - 1$.
Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum
the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$:
$$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$
Denoting the sum on the right-hand side by $S_n$, we can write this as
$$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$



Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with,
$$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$
Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short,

$$S_n = 4 S_{n-1} + 2.$$
Since $S_1=2$, the solution of this recurrence is
$$S_n = \frac{2(4^n-1)}{3}.$$
(For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)



We now have
$$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$
Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!


real analysis - Find $\lim\limits_{n\to\infty}\frac{a_1+a_2+...+a_n}{1+\frac{1}{\sqrt2}+...+\frac{1}{\sqrt{n}}}$ with $a_1=1$ and $a_{n+1}=\frac{1+a_n}{\sqrt{n+1}}$




Let $(a_n)_{n\ge1}, a_1=1, a_{n+1}=\frac{1+a_n}{\sqrt{n+1}}$.
Find $$\lim_{n\to\infty} \frac{a_1+a_2+\cdots+a_n}{1+\frac{1}{\sqrt2}+\cdots+\frac{1}{\sqrt{n}}}$$





This is my try:



I split the limit like this:
$$L=\lim_{n\to\infty} \frac{a_1+a_2+\cdots+a_n}{\sqrt{n+1}}\cdot\frac{\sqrt{n+1}}{1+\frac{1}{\sqrt2}+\cdots+\frac{1}{\sqrt{n}}}.$$
The second factor tends to $\frac{1}{2}$.
The first one, after Stolz–Cesàro, becomes:
$$\lim_{n\to\infty}a_{n+1}(\sqrt{n+1}+\sqrt{n+2})$$
I tried to squeeze the term $a_n$ between two expressions in $n$, such as $a_n<\frac{1}{\sqrt{n}}$, so as to use the sandwich theorem. Any ideas for bounds of this kind? Or other ideas for the problem?


Answer




Stolz–Cesàro is a way to go, but applied to
$S_n=\sum\limits_{k=1}^n a_k$ and $T_n=\sum\limits_{k=1}^n \frac{1}{\sqrt{k}}$, where $T_n$ is a strictly monotone and divergent sequence ($T_n >\sqrt{n}$). Then
$$\lim\limits_{n\rightarrow\infty}\frac{S_{n+1}-S_n}{T_{n+1}-T_n}=
\lim\limits_{n\rightarrow\infty}\frac{a_{n+1}}{\frac{1}{\sqrt{n+1}}}=
\lim\limits_{n\rightarrow\infty} \left(1+a_n\right)=\\
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1+a_{n-1}}{\sqrt{n}}\right)=
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{n(n-1)}}+\frac{a_{n-2}}{\sqrt{n(n-1)}}\right)=\\
\lim\limits_{n\rightarrow\infty} \left(1+\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{n(n-1)}}+\frac{1}{\sqrt{n(n-1)(n-2)}}+...+\frac{a_1}{\sqrt{n!}}\right)=\\
1+\lim\limits_{n\rightarrow\infty} \left(\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)\right)$$







Now, for
$$\lim\limits_{n\rightarrow\infty} \left(\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)\right) \tag{1}$$
we have
$$0<\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+\frac{1}{\sqrt{(n-1)(n-2)(n-3)}}+...+\frac{1}{\sqrt{(n-1)!}}\right)<
\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{1}{\sqrt{(n-1)(n-2)}}+\frac{1}{\sqrt{(n-1)(n-2)}}+...+\frac{1}{\sqrt{(n-1)(n-2)}}\right)
=\\\frac{1}{\sqrt{n}}\left(1+\frac{1}{\sqrt{n-1}}+\frac{n-3}{\sqrt{(n-1)(n-2)}}\right)\rightarrow 0$$







Finally, $(1)$ has $0$ as the limit, $\frac{S_{n+1}-S_n}{T_{n+1}-T_n}$ has $1$ as the limit. The original sequence has $1$ as the limit as well.
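A direct numerical simulation (just a sanity check) agrees:

```python
# Iterate the recursion and compare the two partial sums.
import math

a = 1.0        # a_1
S = a          # a_1 + ... + a_n
T = 1.0        # 1 + 1/sqrt(2) + ... + 1/sqrt(n)
for n in range(1, 10 ** 6):
    a = (1 + a) / math.sqrt(n + 1)
    S += a
    T += 1 / math.sqrt(n + 1)

print(S / T)   # close to 1
```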


Tuesday, July 16, 2019

abstract algebra - Arbitrary number of algebraically independent elements of prime field: when is that possible?


If we consider the prime field $\mathbb{Q}$, and the field extension $\mathbb{Q}\subseteq\mathbb{C}$, then for any natural $n$ we can find $a_1,...,a_n\in\mathbb{C}$ algebraically independent over $\mathbb{Q}$. To prove this, we use induction on $n$, where the case $n=1$ is trivially true. Now if there are $a_1,...,a_{n-1}$ algebraically independent, we can (if I'm not mistaken) consider an algebraic closure of $\mathbb{Q}(a_1,...,a_{n-1})$, which would still be in $\mathbb{C}$, but also still countable, since $\mathbb{Q}$ and $\mathbb{Q}(a_1,...,a_{n-1})$ are. Hence there must be an $a_n$ as desired.


Is this proof correct so far?



I'm also interested in a more general statement about fields in arbitrary characteristic, something like:



Let $F$ be a prime field. Then for any natural $n$ we can find a field $K$ containing $F$ and elements $a_1,...,a_n\in K$ algebraically independent over $F$.



Unfortunately, I'm not very used to fields, and finite fields especially. I don't know if this statement is true anyway. Could you tell me? If it should be true, is there a way (in finite characteristic) to choose one $K$ for all $n$, and even maybe use a similar way of proving the statement?


Answer



Your proof is correct.


As for your second question: for any field $K$ the set $K(x_1,\ldots ,x_n)$ of (formal) rational functions in $n$ variables is a field containing $K$, and the elements $x_1,\ldots ,x_n$ are algebraically independent over $K$.


One can generalize the last statement: let $X$ be a set. Then the set $K(X)$ of (formal) rational functions in the variables $x\in X$ is a field containing $K$, and the elements $x\in X$ are algebraically independent over $K$.


Note that in a specific rational function only finitely many $x\in X$ actually appear.





inequality - Proof by induction - Being stuck before being able to prove anything!



I am currently in the process of learning how to prove statements
by induction. For the most part I understand, but I am mostly
clueless when it comes to proving some inequalities.




For example, I am having a hard time proving that:

$n < 2^n$ is true. (Even if it's clear that it is indeed true!)



In the induction step, I am stuck with:

$n + 1 < 2^{n + 1}$



I don't know what assumption I should make from this statement.




Could anyone point me in the right direction?
Thanks!


Answer



Hint $\ $ First prove by induction this lemma: an increasing function stays $\ge$ its initial value, i.e. $\rm\:f(n\!+\!1)\ge f(n)\:\Rightarrow\:f(n)\ge f(0).$ Now apply this lemma to the function $\rm\:f(n) = 2^n\! - n,\:$ which is increasing by $\rm\:f(n\!+\!1)-f(n) = 2^n\!-1 \ge 0.\:$ Thus, by the lemma, $\rm\:f(n)\ge f(0) = 1,\:$ so $\rm\:2^n > n.$



Remark $\ $ Note that we reduce the proof to the same inequality as in Will's answer: $\rm\:2^n \ge 1$ (which, itself, may require an inductive proof, depending on the context). But here we've injected the additional insight that the inductive proof of the inequality can be viewed as a special case of the inductive proof of the inequality that an increasing function stays $\ge$ its initial value - which lends much conceptual insight into the induction process. Further, by abstracting out the lemma, we now have a tool that can be reused for analogous inductive proofs (and this simple tool often does the job - see my many prior posts on telescopy for further examples and discussion).
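(A tiny numerical illustration of the lemma in action, for small $n$:)

```python
# f(n) = 2**n - n is increasing, hence stays >= f(0) = 1, so 2**n > n.
f = lambda n: 2 ** n - n
assert all(f(n + 1) >= f(n) for n in range(50))
assert all(f(n) >= f(0) for n in range(50))
print("2^n > n verified for n < 50")
```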


calculus - Prove that $\int_0^\infty \frac{\sin nx}{x}dx=\frac{\pi}{2}$




There was a question on multiple integrals which our professor gave us on our assignment.





QUESTION: Changing order of integration, show that $$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy=\int_0^\infty \frac{\sin nx}{x}dx$$
and hence prove that $$\int_0^\infty \frac{\sin nx}{x}dx=\frac{\pi}{2}$$







MY ATTEMPT: I was successful in proving the first part.




Firstly, I can state that the function $e^{-xy}\sin nx$ is continuous over the region $\mathbf{R}=\{(x,y): 0<x<\infty,\ 0<y<\infty\}$, so the order of integration may be changed:

$$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy$$
$$=\int_0^\infty \sin nx \left\{\int_0^\infty e^{-xy}\,dy\right\} \,dx$$
$$=\int_0^\infty \sin nx \left[\frac{e^{-xy}}{-x}\right]_0^\infty \,dx$$
$$ =\int_0^\infty \frac{\sin nx}{x}dx$$



However, the second part of the question yielded a different answer.



$$\int_0^\infty \int_0^\infty e^{-xy}\sin nx \,dx \,dy$$

$$=\int_0^\infty \left\{\int_0^\infty e^{-xy} \sin nx \,dx\right\} \,dy$$
$$=\int_0^\infty \frac{ndy}{\sqrt{n^2+y^2}}$$



which diverges, not the desired result.



Where did I go wrong? Can anyone help?


Answer



You should have obtained $$\int_{x=0}^\infty e^{-yx} \sin nx \, dx = \frac{n}{n^2 + y^2}.$$ There are a number of ways to show this, such as integration by parts. If you would like a full computation, it can be provided upon request.







Let $$I = \int e^{-xy} \sin nx \, dx.$$ Then with the choice $$u = \sin nx, \quad du = n \cos nx \, dx, \\ dv = e^{-xy} \, dx, \quad v = -\frac{1}{y} e^{-xy},$$ we obtain $$I = -\frac{1}{y} e^{-xy} \sin nx + \frac{n}{y} \int e^{-xy} \cos nx \, dx.$$ Repeating the process a second time with the choice $$u = \cos nx \, \quad du = -n \sin nx \, dx, \\ dv = e^{-xy} \, dx, \quad v = -\frac{1}{y} e^{-xy},$$ we find $$I = -\frac{1}{y}e^{-xy} \sin nx - \frac{n}{y^2} e^{-xy} \cos nx - \frac{n^2}{y^2} \int e^{-xy} \sin nx \, dx.$$ Consequently $$\left(1 + \frac{n^2}{y^2}\right) I = -\frac{e^{-xy}}{y^2} \left(y \sin nx + n \cos nx\right),$$ hence $$I = -\frac{e^{-xy}}{n^2 + y^2} (y \sin nx + n \cos nx) + C.$$ Evaluating the definite integral, for $y, n > 0$, we observe $$\lim_{x \to \infty} I(x) = 0, \quad I(0) = -\frac{n}{n^2 + y^2},$$ and the result follows.
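(As an aside, a computer algebra system can confirm both steps; a sketch with sympy, declaring the symbols positive so the improper integrals converge:)

```python
# Inner integral and the final value, checked symbolically.
import sympy as sp

x, y, n = sp.symbols('x y n', positive=True)

inner = sp.integrate(sp.exp(-x * y) * sp.sin(n * x), (x, 0, sp.oo))
print(sp.simplify(inner))                   # n/(n**2 + y**2)
print(sp.integrate(inner, (y, 0, sp.oo)))   # pi/2
```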


quadratics - Intuitive meaning behind the Discriminant

While I was applying the quadratic formula, it suddenly dawned on me that the quadratic formula basically finds the vertex and then adds or subtracts (the $\pm$) half the distance between the roots. $\frac{-b}{2a}$ represents the vertex and $\pm \frac{\sqrt{\triangle}}{2a}$ represents half the distance between the roots. I understand why $-\frac{b}{2a}$ is the $x$-coordinate of the vertex; it intuitively makes sense. But why does $\frac{\sqrt{\triangle}}{2a}$ equal half the distance between the roots? I know how to prove it, but I don't see why it intuitively works.
Edit: I understand why $\frac{\sqrt{\triangle}}{2a}$ is half the distance from the vertex, but why is the discriminant $b^2 - 4ac$? I know that's what happens when you complete the square, but when you use algebra you often lose the intuitive sense of what's happening.
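One way to see where $b^2-4ac$ comes from without completing the square (a sketch via Vieta's formulas, not the only route): if $r_1, r_2$ are the roots, then $r_1+r_2=-\frac{b}{a}$ and $r_1 r_2=\frac{c}{a}$, so
$$(r_1-r_2)^2=(r_1+r_2)^2-4r_1r_2=\frac{b^2}{a^2}-\frac{4c}{a}=\frac{b^2-4ac}{a^2},$$
hence $|r_1-r_2|=\frac{\sqrt{b^2-4ac}}{|a|}$, and half the distance between the roots is exactly $\frac{\sqrt{\triangle}}{2|a|}$.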

Monday, July 15, 2019

Algebraic Manipulation for Mathematical Induction




I'm working on a mathematical induction problem. The question is as follows:



$P = \begin{pmatrix}
1-A & A \\
B & 1-B \\
\end{pmatrix}$



for $A, B \in (0, 1)$. Show, by induction or otherwise, that
$$P^n = \frac{1}{A+B} \begin{pmatrix} B & A \\ B & A \end{pmatrix} + \frac{(1-A-B)^n}{A+B} \begin{pmatrix} A & -A \\ -B & B \end{pmatrix}$$



for any $n \in \Bbb N$.



I understand how induction works; however, I'm lost with the algebraic manipulation. So far I've proved that the statement holds for $n = 1$ by substituting $n = 1$ into the equation. When it comes to proving the $k+1$ case, however, I get lost in the algebra. I know I have to show something like




$$P^k = \frac{1}{A+B} \begin{pmatrix} B & A \\ B & A \end{pmatrix} + \frac{(1-A-B)^k}{A+B} \begin{pmatrix} A & -A \\ -B & B \end{pmatrix} = \begin{pmatrix} 1-A & A \\ B & 1-B \end{pmatrix}^k$$



$$P^{k+1} = \frac{1}{A+B} \begin{pmatrix} B & A \\ B & A \end{pmatrix} + \frac{(1-A-B)^{k+1}}{A+B} \begin{pmatrix} A & -A \\ -B & B \end{pmatrix} = \begin{pmatrix} 1-A & A \\ B & 1-B \end{pmatrix}^{k+1} = \begin{pmatrix} 1-A & A \\ B & 1-B \end{pmatrix}P^k = \begin{pmatrix} 1-A & A \\ B & 1-B \end{pmatrix}\left[\frac{1}{A+B}\begin{pmatrix} B & A \\ B & A \end{pmatrix} + \frac{(1-A-B)^k}{A+B} \begin{pmatrix} A & -A \\ -B & B \end{pmatrix}\right]$$



Any help would be greatly appreciated


Answer



Show us what you get if you carry out the computation on the right hand side of your last equation: Distribute the matrix multiplication over the matrix addition, and perform the matrix multiplication. You can take the scalar multiples outside during this, and consider them afterwards.
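(If you want to see the target of the computation before grinding through it, a symbolic spot check with sympy for a small exponent can be reassuring; the choice $n = 3$ is arbitrary:)

```python
# Check the claimed formula for P^n symbolically at n = 3.
import sympy as sp

A, B = sp.symbols('A B', positive=True)
P = sp.Matrix([[1 - A, A], [B, 1 - B]])

n = 3
claimed = (sp.Matrix([[B, A], [B, A]]) / (A + B)
           + (1 - A - B) ** n / (A + B) * sp.Matrix([[A, -A], [-B, B]]))

print(sp.simplify(P ** n - claimed))   # zero matrix
```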


divergent series - How is the Riemann zeta function zero at the negative even integers?

I'm not sure if I'm misunderstanding this in any way, but surely evaluating the zeta function at, say, -2, would give




$1 + 2^2 + 3^2 + 4^2 + 5^2 + ...$



which seems to diverge to infinity. What am I missing?

discrete mathematics - Proof of the Hockey-Stick Identity: $\sum\limits_{t=0}^n \binom tk = \binom{n+1}{k+1}$




After reading this question, the most popular answer use the identity
$$\sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1}.$$



What's the name of this identity? Is it a modified version of the Pascal's triangle identity?



How can we prove it? I tried by induction, but without success. Can we also prove it algebraically?



Thanks for your help.







EDIT 01 : This identity is known as the hockey-stick identity because, on Pascal's triangle, when the addends represented in the summation and the sum itself are highlighted, a hockey-stick shape is revealed.



[Image: the hockey-stick pattern highlighted on Pascal's triangle.]


Answer



This is purely algebraic. First of all, since $\dbinom{t}{k} =0$ when $k>t$ we can rewrite the identity in question as
$$\binom{n+1}{k+1} = \sum_{t=0}^{n} \binom{t}{k}=\sum_{t=k}^{n} \binom{t}{k}$$



Recall that (by Pascal's rule),
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$




Hence
$$\binom{t+1}{k+1} = \binom{t}{k} + \binom{t}{k+1} \implies \binom{t}{k} = \binom{t+1}{k+1} - \binom{t}{k+1}$$



Let's get this summed by $t$:
$$\sum_{t=k}^{n} \binom{t}{k} = \sum_{t=k}^{n} \binom{t+1}{k+1} - \sum_{t=k}^{n} \binom{t}{k+1}$$



Let's factor out the last member of the first sum and the first member of the second sum:
$$\sum _{t=k}^{n} \binom{t}{k}
=\left( \sum_{t=k}^{n-1} \binom{t+1}{k+1} + \binom{n+1}{k+1} \right)

-\left( \sum_{t=k+1}^{n} \binom{t}{k+1} + \binom{k}{k+1} \right)$$



Obviously $\dbinom{k}{k+1} = 0$, hence we get
$$\sum _{t=k}^{n} \binom{t}{k}
=\binom{n+1}{k+1}
+\sum_{t=k}^{n-1} \binom{t+1}{k+1}
-\sum_{t=k+1}^{n} \binom{t}{k+1}$$



Let's introduce $t'=t-1$, then if $t=k+1 \dots n, t'=k \dots n-1$, hence
$$\sum_{t=k}^{n} \binom{t}{k}

= \binom{n+1}{k+1}
+\sum_{t=k}^{n-1} \binom{t+1}{k+1}
-\sum_{t'=k}^{n-1} \binom{t'+1}{k+1}$$



The last two sums cancel each other, and you get the desired formulation
$$\binom{n+1}{k+1}
= \sum_{t=k}^{n} \binom{t}{k}
= \sum_{t=0}^{n} \binom{t}{k}$$
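(A quick brute-force confirmation for small parameters, using `math.comb`, which returns $0$ when $k>t$:)

```python
from math import comb

for n in range(8):
    for k in range(8):
        assert sum(comb(t, k) for t in range(n + 1)) == comb(n + 1, k + 1)

print("hockey-stick identity verified for n, k < 8")
```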


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...