Monday, September 30, 2019

calculus - Conditions about continuous functions

Say we have $f(x+y) = f(x) + f(y) \quad \forall x,y \in \mathbb R$ and $f$ is continuous at at least one point. I wish to show there must be some $c$ such that $f(x)=cx$ for all $x$. I think I can do so by first showing $f$ is continuous everywhere (though I'm not sure how), then setting $c = f(1)$ and showing that $f(q) = cq$ for every rational $q$. But the aim is to show this for all real $x$, so I am not sure how to finish.

linear algebra - Characteristic Polynomial of a matrix and its transpose

Does a matrix and its transpose have the same characteristic polynomial? I know that they have the same eigenvalues but different eigenvectors. Does having the same eigenvalues mean they share the same characteristic polynomial?
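One quick way to convince yourself numerically (a Python sketch; the underlying fact is $\det(M)=\det(M^T)$ applied to $M=A-\lambda I$):

import numpy as np

A = np.random.rand(4, 4)
# np.poly of a square matrix returns the coefficients of its characteristic polynomial
print(np.poly(A))
print(np.poly(A.T))  # same coefficients, up to floating-point rounding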

probability - What is the intuition behind the Poisson distribution's function?



I'm trying to intuitively understand the Poisson distribution's probability mass function. When $X \sim \mathrm{Pois}(\lambda)$, then $P(X=k)=\frac{\lambda^k e^{-\lambda}}{k!}$, but I don't see the reasoning behind this formula. In other discrete distributions, namely the binomial, geometric, negative binomial, and hypergeometric distributions, I have an intuitive, combinatorics-based understanding of why each distribution's pmf is defined the way it is.



That is, if $Y \sim\mathrm{Bin}(n,p)$ then $P(Y=k)=\binom{n}{k}p^k(1-p)^{n-k}$, and this equation is clear - there are $\binom{n}{k}$ ways to choose the $k$ successful trials, and we need the trials to succeed $k$ times and fail $n-k$ times.



What is the corresponding intuition for the Poisson distribution?


Answer




Explanation based on DeGroot, second edition, page 256. Consider the binomial distribution with fixed $p$
$$
P(X = k) = {n \choose k}p^k(1-p)^{n-k}
$$



Now define $\lambda = np$ and thus $p = \frac{\lambda}{n}$.



$$
\begin{align}
P(X = k) &= {n \choose k}p^k(1-p)^{n-k}\\

&=\frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}\frac{\lambda^k}{n^k}\left(1-\frac{\lambda}{n}\right)^{n-k}\\
&=\frac{\lambda^k}{k!}\frac{n}{n}\cdot\frac{n-1}{n}\cdots\frac{n-k+1}{n}\left(1-\frac{\lambda}{n}\right)^n\left(1-\frac{\lambda}{n}\right)^{-k}
\end{align}
$$
Let $n \to \infty$ and $p \to 0$ so $np$ remains constant and equal to $\lambda$.



Now
$$
\lim_{n \to \infty}\frac{n}{n}\cdot\frac{n-1}{n}\cdots\frac{n-k+1}{n}\left(1-\frac{\lambda}{n}\right)^{-k} = 1
$$

since each of the $k$ fractions tends to $1$ ($n$ grows at the same rate in numerator and denominator), and in the last factor the fraction $\lambda/n$ goes to $0$ while the exponent $-k$ stays fixed. Furthermore
$$
\lim_{n \to \infty}\left(1-\frac{\lambda}{n}\right)^n = e^{-\lambda}
$$
so under our definitions
$$
\lim_{n \to \infty} {n \choose k}p^k(1-p)^{n-k} = \frac{\lambda^k}{k!}e^{-\lambda}
$$
In other words, as the probability of success becomes a rate applied to a continuum, as opposed to discrete selections, the binomial becomes the Poisson.




Update with key point from comments



Think about a Poisson process. It really is, in a sense, looking at very, very small intervals of time and seeing whether something happened. The "very, very small" comes from the requirement that we see at most one occurrence per interval. So what we have is essentially an infinite sum of Bernoulli trials, each with an infinitesimally small success probability. A finite sum of Bernoulli trials is binomial; in the infinite limit, with the total success rate held finite at $np=\lambda$, it is Poisson.
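A minimal numerical illustration of this limit (a Python sketch; the values of $\lambda$ and $k$ are arbitrary):

from math import comb, exp, factorial

lam, k = 3.0, 4
for n in (10, 100, 10000):
    p = lam / n                                         # success probability shrinks as n grows
    print(n, comb(n, k) * p**k * (1 - p)**(n - k))      # binomial pmf at k
print("Poisson:", lam**k * exp(-lam) / factorial(k))    # the limiting value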


Sunday, September 29, 2019

calculus - Evaluating $\int_{-\infty}^{\infty}x\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}^{2}\left(a\left(x-d\right)\right)\,\mathrm{d}x$

I have big difficulties solving the following integral:

$$
\int_{-\infty}^{\infty}x\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}^{2}\left(a\left(x-d\right)\right)\,\mathrm{d}x
$$



I tried to use integration by parts, and also tried to apply the technique called “differentiation under the integration sign” but with no results.



I’m not very good at calculus so my question is if anyone could give me any hint of how to approach this integral. I would be ultimately thankful.



If it could help at all, I know that
$$

\int_{-\infty}^{\infty}x\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}\left(a\left(x-d\right)\right)\,\mathrm{d}x=\frac{a}{b^{2}\sqrt{a^{2}+b^{2}}}\exp\left(-\frac{a^{2}b^{2}\left(c-d\right)^{2}}{a^{2}+b^{2}}\right)+\frac{\sqrt{\pi}c}{b}\mathrm{erf}\left(\frac{ab\left(c-d\right)}{\sqrt{a^{2}+b^{2}}}\right),
$$



for $b>0$.
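For what it is worth, the stated known result can be checked numerically (a sketch assuming SciPy; the parameter values are arbitrary, with $b>0$):

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

a, b, c, d = 1.3, 0.7, 0.4, -0.2
lhs, _ = quad(lambda x: x*np.exp(-b**2*(x - c)**2)*erf(a*(x - d)), -np.inf, np.inf)
rhs = (a/(b**2*np.sqrt(a**2 + b**2))*np.exp(-a**2*b**2*(c - d)**2/(a**2 + b**2))
       + np.sqrt(np.pi)*c/b*erf(a*b*(c - d)/np.sqrt(a**2 + b**2)))
print(lhs, rhs)  # should agree to quadrature accuracy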

probability - Birthday "Paradox" - another, different, version!

Background


Many people are familiar with the so-called Birthday "Paradox" that, in a room of $23$ people, there is a better than $50/50$ chance that two of them will share the same birthday. In its more general form for $n$ people, the probability of no two people sharing the same birthday is $p(n) = \large\frac{365!}{365^n(365-n)!}$. Similar calculations are used for understanding hash-space sizes, cryptographic attacks, etc.


Motivation


The reason for asking the following question is actually related to understanding a specific financial market behavior. However, a variant on the "Birthday Paradox" problem fits perfectly as an analogy and is likely to be of wider interest to more people with different backgrounds. My question is therefore framed along the lines of the familiar "Birthday Paradox", but with a difference, as follows.


Situation


There are a total of $60$ people in a room. Of these, it turns out that there are $11$ pairs of people who share the same birthday, and two triples (i.e. groups of $3$ people) with the same birthday. The remaining $60 - 11\cdot2 - 2\cdot3 = 32$ people have different birthdays. Assume a population in which any day is equally likely for a birthday (i.e. ignore Feb 29th and possible seasonal effects). Given the specified distribution of birthdays, the questioner would actually like to understand how likely (or unlikely) it is that these $60$ people were really chosen randomly. However, I am not sure if the question posed in that way is actually answerable at all. When I posed this question on another site (where it was left unanswered), I was at least advised to re-state it in a slightly different way, as follows below.


Question


If $60$ people are chosen at random from a population in which any day is equally likely to be a person's birthday, what is the probability that there are $11$ days on which exactly $2$ people share a birthday, two days on which exactly $3$ of them share a birthday, and no days on which $4$ or more of them share a birthday?
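If it helps, the standard multinomial count for this probability can be evaluated exactly (a Python sketch of one way to organize the count; $45 = 11 + 2 + 32$ distinct days are used in total):

from fractions import Fraction
from math import factorial

# choose which days are pair-days, triple-days and single-days (all distinct)
ways_days = Fraction(factorial(365),
                     factorial(11) * factorial(2) * factorial(32) * factorial(365 - 45))
# distribute the 60 people into those groups; (2!)^11 (3!)^2 corrects for order inside groups
ways_people = Fraction(factorial(60), 2**11 * 6**2)
p = ways_days * ways_people / Fraction(365**60)
print(float(p))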

linear algebra - Find the characteristic polynomial $P_A(\lambda)$ of this matrix



Consider the following matrix A:



$$ A=
\begin{pmatrix}
-1 & 1 & 1 \\
1 & -1 & 1 \\

1 & 1 & -1 \\
\end{pmatrix}
$$



I have to find the characteristic polynomial $P_A(\lambda)$ using the following approach:



$$P_A(\lambda)=\det(A-\lambda I)$$



I worked out the first part:




$$\begin{vmatrix}
-1-\lambda & 1 & 1 \\
1 & -1-\lambda & 1 \\
1 & 1 & -1-\lambda \\
\end{vmatrix}
$$



But then I get stuck calculating the determinant with all those $\lambda$ floating around.



Help? :( The answer is supposed to be $P_A(\lambda)=-(\lambda-1)(\lambda+2)^2$



Answer



You could use properties of determinants to avoid having to factor a cubic afterwards; for example:




  • subtract the last column from the first two;

  • add the first two rows to the third:



$$\begin{vmatrix}
-1-\lambda & 1 & 1 \\

1 & -1-\lambda & 1 \\
1 & 1 & -1-\lambda \\
\end{vmatrix}=\begin{vmatrix}
-2-\lambda & 0 & 1 \\
0 & -2-\lambda & 1 \\
2+\lambda & 2+\lambda & -1-\lambda \\
\end{vmatrix}=\begin{vmatrix}
-2-\lambda & 0 & 1 \\
0 & -2-\lambda & 1 \\
0 & 0 & 1-\lambda \\

\end{vmatrix}$$
This is the determinant of an upper triangular matrix, so it is the product of the diagonal entries:
$$\left( -2-\lambda \right)^2\left( 1-\lambda \right) \color{blue}{ = 0 \iff \lambda = -2 \;\vee\; \lambda = 1}$$
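A quick check of the factorization (a sketch assuming SymPy):

from sympy import Matrix, symbols, factor, eye

lam = symbols('lambda')
A = Matrix([[-1, 1, 1], [1, -1, 1], [1, 1, -1]])
print(factor((A - lam*eye(3)).det()))  # -(lambda - 1)*(lambda + 2)**2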


Saturday, September 28, 2019

exponentiation - What is 0 raised to 0 ???!!!!




I have read many articles on this confusion but I am still confused...




My simple question is -



What is $0^0$?



What is the present agreement to this?



I feel that it should be 1 as anything to the power zero is one....



I am currently a school student so I would like more of a school-based answer..




So in case it comes up in my exam I should know what to write :)


Answer



$0^0$ is most often undefined. The reason is that it is not possible to define it in a good enough way. Notice the following examples:



$0^x$



Whenever $x \neq 0$, this expression should equal $0$. However



$x^0$




should be $1$ whenever $x\neq 0$. Thus, if we define $0^0$ to be either $0$ or $1$, then we get problems with one of these functions not being continuous (without jumps if you plot them) where it is defined, which is why we keep $0^0$ undefined in most cases.


Summation with a variable as the upper limit



$$\sum_{n=1}^m \frac{n \cdot n! \cdot \binom{m}{n}}{m^n} = ?$$



My attempts on the problem:



I tried writing out the summation.




$$1+\frac{2(m-1)}{m} + \frac{3(m-1)(m-2)}{m^2} + \cdots + \dfrac{m\cdot m!}{m^m}$$



I saw that the ratio between consecutive terms is $\dfrac{\dfrac{n}{n-1} (m-n+1)}{m}$.



I wasn't able to proceed because this isn't a geometric series. Please help!



I would appreciate a full solution if possible.


Answer



Expanding the binomial ${m\choose n} = \frac{m!}{(m-n)!n!}$ your sum can be written

$$m!\sum_{n=1}^m \frac{1}{(m-n)!}\frac{n}{m^n}$$



We can now change the summation index $i = m-n$ (i.e. summing from $m$ down to $0$) to get
$$\frac{m!}{m^m}\sum_{i=0}^{m-1} \frac{m^i}{i!}(m-i) = \frac{m!}{m^m}\left[\sum_{i=0}^{m-1} m^{i+1}\frac{1}{i!} - \sum_{i=0}^{m-1} m^i\frac{i}{i!}\right]$$



Now use $\frac{i}{i!} = \frac{1}{(i-1)!}$ and change the summation index $j=i-1$ in the last sum and you will see that most of the terms will cancel giving you a simple result ($=m$).
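The claimed value can be confirmed exactly for small $m$ (a Python sketch using exact rational arithmetic):

from fractions import Fraction
from math import comb, factorial

for m in range(1, 9):
    s = sum(Fraction(n * factorial(n) * comb(m, n), m**n) for n in range(1, m + 1))
    print(m, s)  # prints s == m every time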


How to prove using induction correctly




In our school, we learned that proving using induction has three steps:





  1. Prove it for the smallest value of $n$ (usually $n=1$).


  2. Assume it is true for $n$.


  3. Prove it is true for $n+1$.






But recently, when I was watching Olympiad video series, it told me that it is not a right way and you should follow this way:





  1. Prove it for the smallest $n$.


  2. Suppose it is true for $n+1$.


  3. Prove it is true for $n$.


  4. You proved it is true for $n$. Now from that, prove it for $n+1$.






Now I am really confused about how to use induction. Can you tell me what is the best way to use induction, and guarantee that it always works?


Answer



The second method used in the video was probably this. When one is struggling with proving the inductive step $\,P_n\,\Rightarrow\, P_{n+1},\,$ it is often convenient to look not only at things that are implied by $\,P_n,\,$ but also at what things are equivalent to $\,P_{n+1}.\,$ For example, we might prove $\,P_n\,\Rightarrow\,A\,\Rightarrow\, A'\,$ and also $\,P_{n+1}\!\iff B\iff B' \iff A',\,$ so connecting the arrows



$$P_n\Rightarrow A\Rightarrow A'\Rightarrow B' \Rightarrow\ B\Rightarrow P_{n+1}$$



we obtain a complete proof of the inductive step. To do this we can assume that $\,P_{n+1}\,$ is true and then deduce the chain of equivalent statements $\,B^{(k)}\,$ by using only reversible steps, e.g. applying an invertible operation such as adding or subtracting some value to both sides of an equation (e.g. see this recent question). Note in particular that it does not suffice to deduce the wrong-direction unidirectional inferences $\,P_{n+1}\!\Rightarrow B\Rightarrow B'\Rightarrow\cdots$ since they do not allow us to connect the inferences to obtain the sought complete inference displayed above.
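As a standard toy illustration of such reversible steps (not from the original discussion), take $P_n:\ 1+2+\cdots+n=\frac{n(n+1)}2$. Then
$$P_{n+1}\iff 1+2+\cdots+n+(n+1)=\frac{(n+1)(n+2)}2\iff 1+2+\cdots+n=\frac{(n+1)(n+2)}2-(n+1)=\frac{n(n+1)}2\iff P_n,$$
where each step is reversible because it only adds or subtracts $n+1$ on both sides; chaining this with the assumed $P_n$ closes the inductive step.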



For an example of such deductions see this answer, where I deduce that the inductive step is equivalent to the truth of a recurrence $\,g_{n+1}- g_n = f_{n+1}\,$ when analyzing inductive proofs equivalent to telescopic sums (it will prove instructive to write out that equivalence in more detail to examine how the inferences reverse using addition and subtraction).




Such proof methods, exploiting simultaneous forward and backward deduction, are sometimes called (forward and) backward chaining or, more generally, (inductive) analysis and synthesis.


real analysis - Prove $2^n\cdot n! \le (n+1)^n$ by induction.




An induction I'm struggling with.




Prove $2^n\cdot n! ≤ (n+1)^n$ by induction.




An idea was to show that $2^n\cdot n! ≤ 1+n^2$ since $1+n^2 ≤ (n+1)^n$ using Bernoulli. However the inequality is just wrong so that approach doesn't work. I had the intuition that $2^n ≤ n!$ but I don't think that yields anything for this problem.



I would really like to get a hint or two. Of course you can post your answer, this is obviously what this platform is for, but I won't read them until I solved the problem myself. It's an induction, can't be that difficult right?


Answer




Hint: The induction step goes as follows:
$$2^{n+1}(n+1)!=2^nn!2(n+1)\le(n+1)^n2(n+1)=2(n+1)^{n+1}$$
Thus you are left to prove that $2(n+1)^{n+1}\le(n+2)^{n+1}$, which is pretty easy.
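One way to finish that last inequality (a sketch): dividing by $(n+1)^{n+1}$, it is equivalent to
$$2\le\left(\frac{n+2}{n+1}\right)^{n+1}=\left(1+\frac1{n+1}\right)^{n+1},$$
which follows from Bernoulli's inequality (or just the first two terms of the binomial expansion): $\left(1+\frac1{n+1}\right)^{n+1}\ge 1+(n+1)\cdot\frac1{n+1}=2$.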


limits - How to prove that $\lim\limits_{n \to \infty} \frac{k^n}{n!} = 0$


It recently came to my mind, how to prove that the factorial grows faster than the exponential, or that the linear grows faster than the logarithmic, etc...


I thought about writing: $$ a(n) = \frac{k^n}{n!} = \frac{ k \times k \times \dots \times k}{1\times 2\times\dots\times n} = \frac k1 \times \frac k2 \times \dots \times \frac kn = \frac k1 \times \frac k2 \times \dots \times \frac kk \times \frac k{k+1} \times \dots \times \frac kn $$ It's obvious that after $k/k$, every factor is smaller than $1$, and as $n$ increases, $k/n$ gets closer to $0$, since $\lim_{n \to \infty} (k/n) = 0$ for any constant $k$.


But I think this is not a clear proof... so any hint is welcome. Thank you for your consideration.


Answer



If you know that this limit exists, you have $$ \lim_{n \to \infty} \frac{k^n}{n!} = \lim_{n \to \infty} \frac{k^{n+1}}{(n+1)!} = \lim_{n \to \infty} \frac k{n+1} \frac{k^n}{n!} = \left(\lim_{n \to \infty} \frac k{n+1} \right) \left( \lim_{n \to \infty} \frac {k^n}{n!} \right) = 0. $$ Can you think of a short way to show the limit exists? (You need existence to justify my factoring of the limits at the end. If you don't have that then there's no reason for equality to hold.)
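One short route to existence (a sketch): for fixed $k>0$ and $n \ge k$, the sequence $a_n = \frac{k^n}{n!}$ satisfies $a_{n+1} = \frac{k}{n+1}\,a_n \le a_n$, so it is eventually decreasing and bounded below by $0$, hence convergent.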



Friday, September 27, 2019

Calculating Splitting Field Degree of Extension



Is there an easier way to calculate the degree of extension of a splitting field for a polynomial like $$x^7-3\quad\text{over}\quad\Bbb Z_5\,?$$My approach for several of these have been to find all roots in the given field (in this case, I think $x=2$ is the only root), then I can factor it via long division. In this case, I get $$x^7-3=(x-2)(x^6+2x^5+4x^4+3x^3+x^2+2x+4).$$At this point, I would check that $2$ is not a repeated root, and here it is easy to check that this is not the case. Since I've checked all of the other elements, at this stage I would adjoin a root of this polynomial, use long division and get a $5^\text{th}$ degree polynomial. Now, the new field I'm working in would have degree $6$, since it is the root of a $6^\text{th}$ degree irreducible polynomial, right?



At this point, it begins to feel like I'm searching for a needle in a haystack; I have several more elements that I have to begin trying, and for this particular problem, that gets to be overwhelming.



At some point, I had thought the map $\alpha\mapsto \alpha^p$, where $p=5$ in this case, would work, but I had another problem where that wasn't the case (specifically, I tried to find the splitting field of $x^5+x+1$ over $\Bbb Z_2$; here it was easy to see it was irreducible, so I adjoined a root, let's call it $\gamma$, and using the method above, I found $\gamma^2$ was a root, but not $\gamma^4$).



So my question is the following: is there a better approach than what I'm doing to factor these (and in the process, find the degree of extension)?



Answer



Once you know one root of $x^7-3$, namely $2$, you get all others by multiplying with $7$-th roots of unity. Since $x^7-1$ and its derivative $7x^6$ have no common roots, the polynomial $x^7-1$ is separable, so $\overline{ \mathbb F}_5$ really contains $7$ different roots of unity, say $1,\zeta,\zeta^2,\ldots,\zeta^6$. Then $2,2\zeta,\ldots,2\zeta^6$ are the different roots of $x^7-3$.



Assume that the extension $\mathbb F_{5^n}$ of $\mathbb F_5$ contains $\zeta$. Then $\zeta$ generates a subgroup of $\mathbb F_{5^n}^\times$ of order $7$, so by Lagrange's theorem, $7|(5^n-1)$, or equivalently $5^n \equiv 1 \pmod 7$. Thus $n$ is a multiple of the order of $5$ modulo $7$.



Conversely, if $n \geq 1$ is such that $5^n \equiv 1 \pmod 7$, then $7$ divides $5^n-1$ and this implies that the polynomial $x^7-1$ divides $x^{5^n-1} - 1$. Since $\mathbb F_{5^n}$ is the splitting field of $x^{5^n-1} - 1$, we get $\zeta \in \mathbb F_{5^n}$.



This shows that $\mathbb F_5(\zeta) = \mathbb F_{5^n}$ where $n$ is the order of $5$ modulo $7$, which is $6$. Therefore, the splitting field of $x^7-3$ has degree $6$ over $\mathbb F_5$.



Of course these arguments only worked because we could reduce the problem to finding the degree of $\mathbb F_5(\zeta)$ over $\mathbb F_5$ where $\zeta$ is a primitive $7$-th root of unity. For general polynomials, things are probably more difficult.
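The order computation at the heart of the argument is easy to confirm (a Python sketch):

for n in range(1, 7):
    print(n, pow(5, n, 7))  # 5, 4, 6, 2, 3, 1 -- the first n with 5^n = 1 (mod 7) is n = 6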



trigonometry - Prove that $\cos\frac {2\pi}{7}+ \cos\frac {4\pi}{7}+ \cos\frac {8\pi}{7}=-\frac{1}{2}$





Prove that
$$\cos\frac {2\pi}{7}+ \cos\frac {4\pi}{7}+ \cos\frac {8\pi}{7}=-\frac{1}{2}$$




My attempt




\begin{align}
\text{LHS}&=\cos\frac{2\pi}7+\cos\frac{4\pi}7+\cos\frac{8\pi}7\\
&=-2\cos\frac{4\pi}7\cos\frac\pi7+2\cos^2\frac{4\pi}7-1\\
&=-2\cos\frac{4\pi}7\left(\cos\frac\pi7-\cos\frac{4\pi}7\right)-1
\end{align}
Now, please help me to complete the proof.


Answer



$\cos(2\pi/7)+\cos(4\pi/7)+\cos(8\pi/7)$



$= \cos(2\pi/7)+\cos(4\pi/7)+\cos(6\pi/7)$ (the angles add to give $2\pi$, thus one is $2\pi$ minus the other)




At this point, we'll make an observation



$\cos(2\pi/7)\sin(\pi/7) = \frac{\sin(3\pi/7) - \sin(\pi/7)}{2}$ ..... (A)



$\cos(4\pi/7)\sin(\pi/7) = \frac{\sin(5\pi/7) - \sin(3\pi/7)}{2}$ ..... (B)



$\cos(6\pi/7)\sin(\pi/7) = \frac{\sin(7\pi/7) - \sin(5\pi/7)}{2}$ ..... (C)



Now, add (A), (B) and (C) to get




$\sin(\pi/7)\left(\cos(2\pi/7)+\cos(4\pi/7)+\cos(6\pi/7)\right) = \frac{\sin(7\pi/7) - \sin(\pi/7)}{2} = -\frac{\sin(\pi/7)}{2}$



The $\sin(\pi/7)$ cancels out from both sides to give you your answer.
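A one-line numerical confirmation (a Python sketch):

import math
print(sum(math.cos(k*math.pi/7) for k in (2, 4, 8)))  # approximately -0.5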


number theory - Digit Root of $2^{m−1}(2^m−1)$ is $1$ for odd $m$. Why?



The Wiki page on Perfect numbers says:




[A]dding the digits of any even perfect number (except $6$), then adding the digits of the resulting number, and repeating this process until a single digit (called the digital root) is obtained, always produces the number $1$. For example, the digital root of $8128$ is $1$, because $8 + 1 + 2 + 8 = 19$, $1 + 9 = 10$, and $1 + 0 = 1$. This works ... with all numbers of the form $2^{m−1}(2^m−1)$ for odd integer (not necessarily prime) $m$.




How does that work?




Ok, this means for example perfect numbers (except $6$) never have a factor of $3$, but does this help...?


Answer



Let $e = m-1$ be even. Then $2^e$ can be congruent to one of $1, 4, 7$ modulo $9$ (the powers of $4$ cycle through $1, 4, 7$). Then $2^{e+1}-1$ is congruent to $2\cdot 1-1 = 1,\, 2\cdot 4 - 1 = 7,\, 2\cdot 7 - 1 = 13 \equiv 4$ modulo $9$, hence $2^e(2^{e+1}-1)$ is congruent to $1\cdot 1$, $4\cdot 7 = 28 \equiv 1$, or $7\cdot 4 \equiv 1$ modulo $9$. Since the digital root of a positive integer is its residue modulo $9$ (with $9$ in place of $0$), the digital root is always $1$.
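A quick empirical confirmation (a Python sketch; the digital root of $n>0$ is $1+(n-1)\bmod 9$):

def digital_root(n):
    return 1 + (n - 1) % 9

for m in range(1, 20, 2):  # odd m
    n = 2**(m - 1) * (2**m - 1)
    print(m, n, digital_root(n))  # digital root 1 every time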


combinatorics - Combinatorial identity: summation of stars and bars

I noticed that the following identity for a summation of stars and bars held for specific $k$ but I was wondering if someone could provide a general proof via combinatorics or algebraic manipulation. I wouldn't be surprised if this is a known result; it looks very similar to the Hockey Stick identity.



$$\sum_{i=0}^k {d+i-1 \choose d-1} = {d+k \choose k}$$




The left can be immediately rewritten as $\sum_{i=0}^k {d+i-1 \choose i}$ if it helps inspire intuition.
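A quick numerical spot-check of the identity (a Python sketch; $d$ and $k$ are arbitrary):

from math import comb

d, k = 5, 7
print(sum(comb(d + i - 1, d - 1) for i in range(k + 1)), comb(d + k, k))  # 792 792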

modular arithmetic - A positive integer (in decimal notation) is divisible by 11 $\iff$ ...



(I am aware there are similar questions on the forum)



What is the Question?




A positive integer (in decimal notation) is divisible by $11$ if and only if the
difference of the sum of the digits in even-numbered positions and the sum of digits in odd-numbered positions is divisible by $11$.




For example consider the integer 7096276.



The sum of the even positioned digits is $0+7+6=13.$ The sum of the odd positioned
digits is $7+9+2+6=24.$ The difference is $24-13=11$, which is divisible by 11.



Hence 7096276 is divisible by 11.





(a)



Check that the numbers 77, 121, 10857 are divisible using this fact, and that 24 and 256 are not divisible by 11.



(b)



Show that the divisibility statement is true for three-digit integers $abc$.
Hint: $100 = 99+1$.




What I've Done?



I've done some research and have found some good explanations of divisibility proofs, whether for $3$, $9$, or even $11$. But the question lets me take the stated fact as given, so I don't need to prove the odd/even divisibility rule for $11$.



I need some help on the modular arithmetic on these.



For example... Is 77 divisible by 11? $$7+7 = 14 \equiv ...$$



I don't know what to do next.




Thanks very much, and I need help on both (a) and (b).


Answer



In order to apply the divisibility rule, you have to distinguish between odd and even positioned digits. (It doesn't matter how you count.)



Example:
In 77 the first position is odd, the second even, so you would have to calculate $7-7=0$, which is divisible by 11.



Now it should be easy for you to understand what you are trying to prove in (b): if $a,b,c$ are three digits, then $abc$ is the number $100a+10b+c$. You know what it means to say that this number is divisible by 11. You have to prove that $$11\vert (a+c)-b \Leftrightarrow 11\vert 100a+10b+c$$ or with modular arithmetic
$$ (a+c)-b \equiv 0 \pmod{11}\Leftrightarrow 100a+10b+c\equiv 0 \pmod {11}\; .$$

I don't want to spoil the fun so I leave it there.



P.S. Sorry, I hadn't noticed the answer posted in the meantime.
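For part (a), the computation can be mechanized (a Python sketch; positions are counted from the left, and only the difference mod 11 matters):

def alt_diff(n):
    ds = [int(c) for c in str(n)]
    return sum(ds[0::2]) - sum(ds[1::2])  # odd-position sum minus even-position sum

for n in (77, 121, 10857, 24, 256, 7096276):
    print(n, alt_diff(n), alt_diff(n) % 11 == 0, n % 11 == 0)  # the last two columns agree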


Sum of series $\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots$

What is the sum of the series



$$\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots?$$



I know how to check if a series is convergent or not. Is there any technique to find out the sum of a series like this, where each term increases by a pattern?

Thursday, September 26, 2019

calculus - Evaluating the integral $\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$?



A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that the given integral:
$$\displaystyle\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$$



Well, can anyone prove this without using Residue theory? I actually thought of doing this:
$$\int_0^\infty \frac{\sin x} x \, dx = \lim_{t \to \infty} \int_0^t \frac{1}{x} \left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots \right) \,\mathrm dx$$

but I don't see how $\pi$ comes here, since we need the answer to be equal to $\dfrac{\pi}{2}$.


Answer



Here's another way of finishing off Derek's argument. He proves
$$\int_0^{\pi/2}\frac{\sin(2n+1)x}{\sin x}dx=\frac\pi2.$$
Let
$$I_n=\int_0^{\pi/2}\frac{\sin(2n+1)x}{x}dx=
\int_0^{(2n+1)\pi/2}\frac{\sin x}{x}dx.$$
Let
$$D_n=\frac\pi2-I_n=\int_0^{\pi/2}f(x)\sin(2n+1)x\ dx$$
where

$$f(x)=\frac1{\sin x}-\frac1x.$$
We need the fact that if we define $f(0)=0$ then $f$ has a continuous
derivative on the interval $[0,\pi/2]$. Integration by parts yields
$$D_n=\frac1{2n+1}\int_0^{\pi/2}f'(x)\cos(2n+1)x\ dx=O(1/n).$$
Hence $I_n\to\pi/2$ and we conclude that
$$\int_0^\infty\frac{\sin x}{x}dx=\lim_{n\to\infty}I_n=\frac\pi2.$$
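The convergence $I_n\to\pi/2$ is easy to watch numerically (a sketch assuming SciPy):

import numpy as np
from scipy.integrate import quad

for n in (1, 10, 100):
    upper = (2*n + 1)*np.pi/2
    val, _ = quad(lambda x: np.sinc(x/np.pi), 0, upper, limit=500)  # np.sinc(x/pi) = sin(x)/x
    print(n, val)
print("pi/2 =", np.pi/2)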


Wednesday, September 25, 2019

elementary number theory - Find a function $M$ such that $M(x)=1 \;\forall x\neq 0$ and $M(0)=0$

Find a function $F$ from $S*S$ to $\{0,1\}$ where $S$ is the set of first $12$ positive integers such that :



$$F(a,b) = \begin{cases}0 &, \text{for $b \ge a$}\\ 1 & \text{otherwise }. \end{cases}$$



My Attempt:




$$F(a,b)=\left\lfloor\frac{a+12}{b+12}\right\rfloor G(a,b)$$



Let $G(a,b)=M(a-b)$,
Now we have to find a function $M$ from $S \cup P\cup {0}$($P$ is the set of first twelve negaive integers) to $(1,0)$ such that $M(0)=0$ and $M(x) =1 \forall x>1$



Since the limit does not exist at $0$ ,therefore I can't use trig or exponential function s etc.



Any help in direction would be appreciated.




PS: keep it as simple as possible. I am willing to use $\bmod$, floor, and abs to construct $M$.
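One possible construction along these lines (a sketch, not necessarily the intended one): since $1+|x|\ge 2$ exactly when $x\neq 0$, the floor of $\frac{1}{1+|x|}$ is $1$ at $x=0$ and $0$ elsewhere, so $M(x)=1-\left\lfloor\frac{1}{1+|x|}\right\rfloor$ works, using only abs and floor:

def M(x):
    return 1 - 1 // (1 + abs(x))  # integer floor division

print([M(x) for x in range(-12, 13)])  # 0 only at x = 0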

Tuesday, September 24, 2019

linear algebra - Why is the left inverse of a matrix equal to the right inverse?

Given a square matrix $A$ that has full row rank we know that the matrix is invertible. So there is a matrix $B$ such that



$$
AB=1
$$




writing this in component notation,



$$
A_{ij}B_{jk}=\delta_{ik}
$$



Now, we tend to write $A^{-1}$ instead of $B$ but let's leave it like that for now.



My question is how can we show that $BA=1$? We mechanically jump to the conclusion that if the inverse exists, $AA^{-1}=A^{-1}A=1$ but how to show that? Equivalently why is the left inverse equal to the right inverse? It seems intuitively obvious!




Thanks a bunch, I appreciate it.

Find a bijective function between two sets

I want to find a bijective function from $(\frac{1}{2},1]$ into $[0,1]$. So, what is a bijective function $f:(\frac{1}{2},1]\to[0,1]$?

convex analysis - "Norm of norms" is another norm?



Suppose that, for some finite-dimensional real vector space $\Bbb R^n$, that $n_1(v)$, $n_2(v)$, ..., $n_k(v)$ are a set of norms on the space.



Given some $v$, then, we can look at the "vector of norms", which I will denote $v_n = (n_1(v), n_2(v), ..., n_k(v))$.




We can then look at norms on this vector of norms. For example, we could take the $\ell_1$ norm of the vector, which would be the sum of norms. It is easy to see that this will also be a norm.




  • Question 1: Is any norm on this "vector of norms" also a norm?

  • Question 2: Likewise, if we replace with seminorms, is any seminorm on the "vector of seminorms" also a seminorm?

  • Question 3: If not, for what norms do these things hold? (Do they at least hold for $\ell_p$ norms on the vector of norms?)



It is easy to see that you get homogeneity and positive-semidefiniteness, so the question is really about convexity. Does taking a "norm of norms" preserve convexity? Equivalently, does taking a norm of convex functions preserve convexity, or does taking a strictly increasing multivariate convex function of multiple convex functions preserve convexity?




EDIT - as per the answer from "mihaild" below, this isn't true for general norms, but would still like to know when it is true (in particular if it's true for $\ell_p$ norms without changing the basis).


Answer



At least for the first (and so for the second) question the answer is "no".



Take two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ and two vectors $x$, $y$ such that $\|x\|_1 = \|x\|_2 = \|y\|_1 = \|y\|_2 = 1$, $\|x + y\|_1 \approx 2$, $\|x + y\|_2 \approx 0$.



Let $n(a, b) = \max (\frac{1}{3} |a + b|, |a - b|)$ (equal to $l_\infty$ norm in some scaled and rotated basis).



Then $n(\|x + y\|_1, \|x + y\|_2) \approx n(2, 0) = 2$, but $n(\|x\|_1, \|x\|_2) + n(\|y\|_1, \|y\|_2) = 2\cdot n(1, 1) = \frac{4}{3} < 2$.




For when it holds: at least if $n$ is such that for any $a_1 > 0, a_2 > 0, \ldots, a_k > 0$ and $q_i \in [-a_i, a_i]$ we have $n(q_1, \ldots, q_k) \leqslant n(a_1, \ldots, a_k)$, then it holds: $$n(\|x + y\|_1, \ldots, \|x + y\|_n) = \\
n(\|x\|_1 + (\|x + y\|_1 - \|x\|_1), \ldots, \|x\|_n + (\|x + y\|_n - \|x\|_n)) \leqslant\\
n(\|x\|_1, \ldots, \|x\|_n) + n(\|x + y\|_1 - \|x\|_1, \ldots, \|x + y\|_n - \|x\|_n)
$$

If $a_i = \|y\|_i$ and $q_i = \|x + y\|_i - \|x\|_i$, then we have
$$n(\|x\|_1, \ldots, \|x\|_n) + n(\|x + y\|_1 - \|x\|_1, \ldots, \|x + y\|_n - \|x\|_n) \leqslant\\
n(\|x\|_1, \ldots, \|x\|_n) + n(\|y\|_1, \ldots, \|y\|_n)
$$




It holds at least for all $l_p$ norms. I think this condition is equivalent to the unit ball defined by $n$ being contained in the hypercube bounded by the hyperplanes $x_i = \pm p_i$, where $p_i$ is such that $n(0, 0, \ldots, p_i, \ldots, 0) = 1$.



This condition is definitely not necessary: for example, if all $\|\cdot\|_i$ coincide, then any $n$ will work.
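The counterexample above can be instantiated concretely (a Python sketch; the two assumed norms $\max(|v_1|,|v_2|)$ and $\max(\varepsilon|v_1|,|v_2|/\varepsilon)$ play the roles of $\|\cdot\|_1$ and $\|\cdot\|_2$):

eps = 0.01
norm1 = lambda v: max(abs(v[0]), abs(v[1]))
norm2 = lambda v: max(eps*abs(v[0]), abs(v[1])/eps)
n = lambda a, b: max(abs(a + b)/3, abs(a - b))
N = lambda v: n(norm1(v), norm2(v))  # the candidate "norm of norms"

x, y = (1.0, eps), (1.0, -eps)
xy = (x[0] + y[0], x[1] + y[1])
print(N(xy), N(x) + N(y))  # 1.98 > 1.333...: the triangle inequality fails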


Sum of series of the form $\sum \frac{(-1)^{m+1}}{2m+1}\sin(\frac{\pi}{2}(2m+1)x)$

I would like to calculate the sum of the series:

\begin{equation}
\sum_{m=M+1}^{\infty}\frac{(-1)^{m+1}}{2m+1}\sin((2m+1)\frac{\pi}{2}x)
\end{equation}
where M is big and finite.






I searched on the books and found this sum:
\begin{equation}
\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{2k-1}\sin((2k-1)x)=\frac{1}{2}\ln\tan(\frac{\pi}{4}+\frac{x}{2})

\end{equation}
Now I try to put my question in the above form. Let $n = m-M$, so that $m=n+M$:
\begin{equation}
\sum_{n=1}^{\infty} \frac{(-1)^{n+M+1}}{2(n+M)+1}\sin\left((2(n+M)+1)\frac{\pi}{2}x\right)
\end{equation}
I do not know how to proceed further.

Monday, September 23, 2019

real analysis - Challenging integral: Evaluate $\int_0^1\frac{\ln^3(1-x)\operatorname{Li}_3(x)}{x}dx$

How to evaluate $$I=\int_0^1\frac{\ln^3(1-x)\operatorname{Li}_3(x)}{x}dx\ ?$$



I came across this integral $I$ while I was trying to compute two advanced sums of weight 7. The problem with my approach is that when I tried to evaluate $I_5$ (shown below), the main integral $I$ appeared there and cancels out from both sides, so any idea how to evaluate $I_5$ or $I$?



Thanks.



Here is my attempt:



Using the two generalized integral expressions of the polylogarithmic function, which can be found in the book (Almost) Impossible Integrals, Sums and Series, page 4.



$$\int_0^1\frac{x\ln^n(u)}{1-xu}du=(-1)^n n!\operatorname{Li}_{n+1}
(x)\Longrightarrow \operatorname{Li}_{3}(x)=\frac12\int_0^1\frac{x\ln^2(u)}{1-xu}du\tag{1}$$




$$\small{u\int_0^1\frac{\ln^n(x)}{1-u+ux}dx=(-1)^{n-1}n!\operatorname{Li}_{n+1}\left(\frac{u}{u-1}\right)\Longrightarrow\int_0^1\frac{\ln^3x}{1-u+ux}dx=\frac6u\operatorname{Li}_{4}\left(\frac{u}{u-1}\right)}\tag{2}$$



We have



\begin{align}
I&=\int_0^1\frac{\ln^3(1-x)\operatorname{Li}_3(x)}{x}dx\overset{\text{use} (1)}{=}\frac12\int_0^1\frac{\ln^3(1-x)}{x}\left(\int_0^1\frac{x\ln^2u}{1-xu}du\right)dx\\
&=\frac12\int_0^1\ln^2u\left(\int_0^1\frac{\ln^3(1-x)}{1-xu}dx\right)\ du\overset{1-x\ \mapsto\ x}{=}\frac12\int_0^1\ln^2u\left(\int_0^1\frac{\ln^3x}{1-u+ux}dx\right)\ du\\
&\overset{\text{use}\ (2)}{=}3\int_0^1\frac{\ln^2u}{u}\operatorname{Li}_4\left(\frac{u}{u-1}\right)du\overset{IBP}{=}-\int_0^1\frac{\ln^3u}{u(1-u)}\operatorname{Li}_3\left(\frac{u}{u-1}\right)du
\end{align}
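This reduction can be sanity-checked numerically (a sketch assuming mpmath, whose polylog accepts the negative real arguments that occur here):

import mpmath as mp

lhs = mp.quad(lambda x: mp.log(1 - x)**3 * mp.polylog(3, x)/x, [0, 1])
rhs = -mp.quad(lambda u: mp.log(u)**3/(u*(1 - u)) * mp.polylog(3, u/(u - 1)), [0, 1])
print(lhs, rhs)  # the two values agree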




Now we need the trilogarithmic identity:



$$\operatorname{Li}_3\left(\frac{x-1}{x}\right)=\zeta(2)\ln x-\frac12\ln^2x\ln(1-x)+\frac16\ln^3x-\operatorname{Li}_3(1-x)-\operatorname{Li}_3(x)+\zeta(3)$$



set $1-x=u$ to get



$$\small{\operatorname{Li}_3\left(\frac{u}{u-1}\right)=\zeta(2)\ln(1-u)-\frac12\ln^2(1-u)\ln u+\frac16\ln^3(1-u)-\operatorname{Li}_3(u)-\operatorname{Li}_3(1-u)+\zeta(3)}$$



Going back to our integral

\begin{align}
I&=\small{-\int_0^1\frac{\ln^3u}{u(1-u)}\left(\zeta(2)\ln(1-u)-\frac12\ln^2(1-u)\ln u+\frac16\ln^3(1-u)-\operatorname{Li}_3(u)-\operatorname{Li}_3(1-u)+\zeta(3)\right)du}\\
&=-\zeta(2)\underbrace{\int_0^1\frac{\ln^3u\ln(1-u)}{u(1-u)}du}_{\Large I_1}+\frac12\underbrace{\int_0^1\frac{\ln^4u\ln^2(1-u)}{u(1-u)}du}_{\Large I_2}-\frac16\underbrace{\int_0^1\frac{\ln^3u\ln^3(1-u)}{u(1-u)}du}_{\Large I_3}\\
&\quad+\underbrace{\int_0^1\frac{\ln^3u\operatorname{Li}_3(u)}{u(1-u)}\ du}_{\Large I_4}+\underbrace{\int_0^1\frac{\ln^3u}{u(1-u)}\left(\operatorname{Li}_3(1-u)-\zeta(3)\right)du}_{\Large I_5}
\end{align}






\begin{align}
I_1=\int_0^1\frac{\ln^3u\ln(1-u)}{u(1-u)}du=-\sum_{n=1}^\infty H_n\int_0^1 u^{n-1}\ln^3udu=6\sum_{n=1}^\infty\frac{H_n}{n^4}

\end{align}







\begin{align}
I_2&=\int_0^1\frac{\ln^4u\ln^2(1-u)}{u(1-u)}du=\sum_{n=1}^\infty\left(H_n^2-H_n^{(2)}\right)\int_0^1 u^{n-1}\ln^4udu\\
&=24\sum_{n=1}^\infty\frac{H_n^2-H_n^{(2)}}{n^5}=24\sum_{n=1}^\infty\frac{H_n^2}{n^5}-24\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^5}
\end{align}







\begin{align}
I_3&=\int_0^1\frac{\ln^3u\ln^3(1-u)}{u(1-u)}du=\int_0^1\frac{\ln^3u\ln^3(1-u)}{u}du+\underbrace{\int_0^1\frac{\ln^3u\ln^3(1-u)}{1-u}du}_{1-x\ \mapsto\ x}\\
&=2\int_0^1\frac{\ln^3u\ln^3(1-u)}{u}\ du\overset{IBP}{=}\frac32\int_0^1\frac{\ln^4u\ln^2(1-u)}{1-u}du\\
&=\frac32\sum_{n=1}^\infty\left(H_n^2-H_n^{(2)}\right)\int_0^1 u^n\ln^4udu, \quad \text{reindex}\\
&=\frac32\sum_{n=1}^\infty\left(H_n^2-H_n^{(2)}-\frac{2H_n}{n}+\frac2{n^2}\right)\int_0^1 u^{n-1}\ln^4u du\\
&=\frac32\sum_{n=1}^\infty\left(H_n^2-H_n^{(2)}-\frac{2H_n}{n}+\frac2{n^2}\right)\left(\frac{24}{n^5}\right)\\
&=36\sum_{n=1}^\infty\frac{H_n^2}{n^5}-36\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^5}-72\sum_{n=1}^\infty\frac{H_n}{n^6}+72\zeta(7)
\end{align}








\begin{align}
I_4&=\int_0^1\frac{\ln^3u\operatorname{Li}_3(u)}{u(1-u)}du=\sum_{n=1}^\infty H_n^{(3)}\int_0^1 u^{n-1}\ln^3u du=-6\sum_{n=1}^\infty\frac{H_n^{(3)}}{n^4}
\end{align}







\begin{align}
I_5&=\int_0^1\frac{\ln^3u}{u(1-u)}\left(\operatorname{Li}_3(1-u)-\zeta(3)\right)du\\
&=\underbrace{\int_0^1\frac{\ln^3u}{u}\left(\operatorname{Li}_3(1-u)-\zeta(3)\right)du}_{IBP}+\underbrace{\int_0^1\frac{\ln^3u}{1-u}\left(\operatorname{Li}_3(1-u)-\zeta(3)\right)\ du}_{1-u\ \mapsto\ u}\\
&=\frac14\int_0^1\frac{\ln^4u\operatorname{Li}_2(1-u)}{1-u}du+\underbrace{\int_0^1\frac{\ln^3(1-u)\operatorname{Li}_3(u)}{u}du}_{\large \text{our main integral}}-\zeta(3)\int_0^1\frac{\ln^3u}{1-u}du\\
&=\frac14\int_0^1\frac{\ln^4u\operatorname{Li}_2(1-u)}{1-u}du+I+6\zeta(3)\zeta(4)
\end{align}



In my solution here I came across the remaining integral and here is the result:



$$\frac14\int_0^1\frac{\ln^4u\operatorname{Li}_2(1-u)}{1-u}du=6\zeta(2)\zeta(5)+36\zeta(7)-30\sum_{n=1}^\infty\frac{H_n}{n^6}-6\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^5}$$




Then



$$I_5=I+6\zeta(3)\zeta(4)+6\zeta(2)\zeta(5)+36\zeta(7)-30\sum_{n=1}^\infty\frac{H_n}{n^6}-6\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^5}$$






Note: We cannot use the two sums $\sum_{n=1}^\infty\frac{H_n^3}{n^4}$ and $\sum_{n=1}^\infty\frac{H_nH_n^{(2)}}{n^4}$ in our solution, because the integral $I$ is the key to evaluating these two sums.

number theory - Find the least non-negative residues mod 7, 11 and 13 of 12345678.



Find the least non-negative residues mod $7$, $11$ and $13$ of $12345678$.



My attempt: a number $N$ is congruent, modulo $7$, $11$, or $13$, to the alternating sum of its digits in base $1000$. (For example, $123456789 \equiv 789 - 456 + 123 \equiv 456$ mod $7$, $11$, or $13$.)


Answer




If I have understood the question well, what I can offer is simply



$$12345678=949667\cdot13+\color{red}7=1122334\cdot11+\color{red}4=1763668\cdot7+\color{red}2$$ where the asked residues are in red.
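Both the direct residues and the base-$1000$ alternating-sum shortcut are easy to check (a Python sketch):

n = 12345678
alt = 678 - 345 + 12  # alternating sum of the base-1000 digits
for m in (13, 11, 7):
    print(m, n % m, alt % m)  # 13: 7 7, 11: 4 4, 7: 2 2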


elementary set theory - Is $\aleph_0^{\aleph_0}$ smaller than or equal to $2^{\aleph_0}$?








Is $\aleph_0^{\aleph_0}$ smaller than or equal to $2^{\aleph_0}$?



I thought I saw this kind of statement somewhere, but I do not remember it.



Can anyone show me the proof of it?




Thanks.

Sunday, September 22, 2019

sequences and series - How can I evaluate $\sum_{n=1}^{\infty} n^{3}x^{n-1}$

How do I evaluate:


$$\sum_{n=1}^{\infty} n^{3}x^{n-1}$$


The answer is supposed to be (according to WolframAlpha):


$$ \frac{x^2+4x+1}{(x-1)^4} $$


I have only learned to do this for simpler geometric sums.
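A numerical comparison with the claimed closed form (a Python sketch at an arbitrary $|x|<1$):

x = 0.3
partial = sum(n**3 * x**(n - 1) for n in range(1, 200))
closed = (x**2 + 4*x + 1) / (x - 1)**4
print(partial, closed)  # both about 9.5377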

Neighborhoods in real analysis problem



Show that if $a,b \in \mathbb R$ and $a \neq b$, then there exist $\varepsilon$-neighborhoods $U$ of $a$ and $V$ of $b$ such that $U \cap V = \varnothing$.



I have already defined the sets $U_{\varepsilon}(a):= \{x\in \mathbb R: |x-a| < \varepsilon\}$ and $V_{\varepsilon}(b):= \{y\in \mathbb R: |y-b| < \varepsilon\}$ but I don't know how to proceed further. Any help would be appreciated.


Answer



Draw a picture. If $\epsilon=\frac{|b-a|}{3}$, it is clear that the intervals $(a-\epsilon, a+\epsilon)$ and $(b-\epsilon, b+\epsilon)$ have no point in common.




If we want to be very formal, suppose to the contrary that $|b-x|\lt \epsilon$ and $|x-a|\lt \epsilon$. Then by the Triangle Inequality
$$|b-a|\le |b-x|+|x-a|\lt 2\epsilon \lt |b-a|,$$
which is impossible.


Sequences (Mathematical Induction)

Can anyone help me with this?

We have
$$
U_n=\frac{n^3}{n^4+1}+\frac{n^3}{n^4+2}+\cdots+\frac{n^3}{n^4+n}
$$
How to prove that, for all $n$:
$$
\frac{n^4}{n^4+n}\le U_n\le \frac{n^4}{n^4+1}
$$



and what is the limit of the sequence?



I've proved the inequality for $U_0$, but I couldn't prove it for $n+1$.

Thanks so much

elementary number theory - Prove $\gcd(k, l) = d \Rightarrow \gcd(2^k - 1, 2^l - 1) = 2^d - 1$

This is a problem for a graduate level discrete math class that I'm hoping to take next year (as a senior undergrad). The problem is as stated in the title:



Given that $\gcd(k, l) = d$, prove that $\gcd(2^k - 1, 2^l - 1) = 2^d - 1$.



The problem also says "hint: use Euclid's lemma," although I'm not sure which part of the problem it should be applied to. I've been thinking about it for a few days and I'm completely stumped.


I'm not really even sure how to show that it divides either $2^k - 1$ or $2^l - 1$. From the given, we know that $\exists c: dc = k$. We need to show that $\exists c': (2^d - 1)c' = 2^k - 1$. Obviously $c' = 2^c$ gives you $(2^d - 1)c' = 2^k - 2^c$, but I can't figure out how to get rid of the extra terms that the $- 1$ brings in in various places.



From Euclid's lemma on the left side, you know $\exists i: di = k - l$, and applying it on the right side, you know it suffices to show that $\gcd(2^k - 2^l, 2^l - 1) = 2^d - 1$. And by Bezout's identity, it's enough to show that $2^d - 1$ can be written as a linear combination of either of those things.


Can anyone give me a hint?

limits - How to determine if $\lim\limits_{n \rightarrow \infty}{(1+{ix\over n})^n}$ would be complex




Question



Recently, I have been looking at complex limits, the most famous being $e^{ix}=\lim\limits_{n \rightarrow \infty}{(1+{ix\over n})^n}$. An example would be that when $x = \pi$ we know that the answer will be $-1$. But how do you determine this from the limit itself, given that the $1+$ inside the parentheses always seems to affect the outcome?




I am fully aware that you are able to do this via $i\sin(a \ln b) +\cos(a\ln b)$; however, how can you prove this via a limit? Because if you test it on a calculator, most of the time you'll end up with some imaginary part.



Specifically, I have been looking at the representation $\sin x={ie^{-ix}\over 2}-{ie^{ix}\over 2}$. Everyone would be safe to assume that $\sin x$ is always real, but when you apply a limit, how can you determine whether it is purely real, or has an imaginary part as well?


Answer



Using the polar form, you can rewrite the expression as $$\left(\sqrt{1+\frac{x^2}{n^2}}\right)^n\text{cis}\left(n\arctan\frac xn\right).$$



It tends to $1\cdot\text{cis }x$.
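The convergence is easy to see numerically (a Python sketch with $x=\pi$):

import cmath

x = cmath.pi
for n in (10, 1000, 100000):
    print(n, (1 + 1j*x/n)**n)  # approaches exp(i*pi) = -1, the imaginary part shrinking to 0
print(cmath.exp(1j*x))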


Saturday, September 21, 2019

algebra precalculus - Easy way to solve $w^2 = -15 + 8i$



Solve $w^2=−15+8i$, where $w$ is complex.



Normally, I would convert this into polar coordinates, but the problem is that that is too slow.


What is another alternative?

calculus - Study of the convergence of a sequence with repeated radicals


Consider the sequence $$ a_n = \sqrt {1!\sqrt {2!\cdots\sqrt {n!} } }, \quad n\in\mathbb N. $$ Does this sequence converge?


Clearly, $\{a_n\}_{n\in\mathbb N}$ is monotonically increasing.


Therefore, there are two possibilities:


Either the sequence goes to infinity or it is bounded and therefore, converges to a finite limit.


Which of the two holds?


Answer



Note that \begin{align} \log a_n&=\log \sqrt{1!\sqrt{2!\cdots\sqrt{n!}}}=\frac{1}{2}\log 1! +\frac{1}{4}\log 2!+\cdots+\frac{1}{2^n}\log n! \\ &=\sum_{k=1}^n \frac{\log (k!)}{2^k}=\sum_{k=1}^n\frac{1}{2^k}\sum_{j=1}^k\log j= \sum_{k=1}^n \log k \Big(\sum_{j=k}^n \frac{1}{2^j}\Big). \end{align} Therefore, the sequence $\log a_n$, which is increasing, converges to $$ \log a_n=\sum_{k=1}^n \log k \Big(\sum_{j=k}^n \frac{1}{2^j}\Big)\longrightarrow\sum_{k=1}^\infty\frac{\log k}{2^{k-1}}=b<\infty. $$ Convergence can be established using for example the ratio test.


Thus $$ a_n\to \mathrm{e}^b=\exp\left(\sum_{k=1}^\infty\frac{\log k}{2^{k-1}}\right)=\prod_{k=1}^\infty k^{2^{-k+1}}. $$



Note. I am wondering whether $\sum_{k=1}^\infty\frac{\log k}{2^{k-1}}$ can be expressed in terms of some known constants.
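Numerically (a Python sketch), both the series and the nested radical give the same constant:

import math

b = sum(math.log(k)/2**(k - 1) for k in range(2, 60))
print(math.exp(b))  # the limit, about 2.76

a = 1.0
for k in range(40, 0, -1):  # evaluate sqrt(1! sqrt(2! ... sqrt(40!))) from the inside out
    a = math.sqrt(math.factorial(k) * a)
print(a)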


linear algebra - Efficient method for computing the properties of a block anti-diagonal matrix




EDIT: the title and content of this question were reformulated for the sake of clarity and to avoid diverting attention from the main issue.



I was trying to answer a question about a system of linear differential equations and it was necessary to find the eigenvalues, eigenvectors and generalized eigenvectors for the following matrix:
$$
M=\begin{bmatrix}0&E_{\times}\\B_{\times}&0\end{bmatrix},
$$
where $E=[E_1,E_2,E_3]^T$, $B=[B_1,B_2,B_3]^T$, $E^TB=0$ and $E_\times,B_\times$ are the representation of vectors through antisymmetric matrices:
$$
E_{\times}= \begin{pmatrix} 0 & -E_{3} & E_{2}\\ E_{3}& 0 & -E_{1}\\ -E_{2} & E_{1} & 0 \\ \end{pmatrix},\qquad B_{\times}= \begin{pmatrix} 0 & -B_{3} & B_{2}\\ B_{3}& 0 & -B_{1}\\ -B_{2} & B_{1} & 0 \\ \end{pmatrix}.
$$

With WolframAlpha it is possible to see that all eigenvalues are zero and the kernel has dimension 2. I have tried to find some method to obtain these results without resort to software or brute force (solve a linear system step by step), but I could not.



Then my question is the following: is it possible to determine the null space, eigenvalues and generalized eigenvectors for matrix $M$ efficiently without resort to software or brute force?



It is always possible to compute them the hard way: computing the characteristic polynomial $\det(M-\lambda 1)$, finding its roots, substituting in $(M-\lambda 1)v=0$ to find eigenvectors $v$, and using them to determine the generalized eigenvectors. What I am looking for is an answer that computes the null space, eigenvalues and (generalized) eigenvectors making use of the properties of the matrix $M$, not brute force.


Answer



We assume that the system $\{E,B\}$ is linearly independent.



Note that (double cross product) $E_\times B_\times X=B(E^TX)-(E^TB)X=BE^TX$, that is, $E_\times B_\times=BE^T$ and, in the same way, $B_\times E_\times=EB^T$.




$M[X,Y]^T=[E\times Y,B\times X]^T$ and a basis of $\ker(M)$ is $\{[B,0]^T,[0,E]^T\}$.



Moreover $\det(M-\lambda I_6)=\det(\lambda^2I-BE^T)$; $BE^T$ has rank $1$ and trace $0$; then it is nilpotent and $M$ also.



About the generalized eigenvectors, $M^2=diag(BE^T,EB^T)$ and $M^2[X,Y]^T=0$ can be written $BE^TX=0,EB^TY=0$, that is, $X\in E^{\perp},Y\in B^{\perp}$, that implies that $dim(\ker(M^2))=4$.



It is not difficult to see that $M^3=0$ and we are done.
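These facts are easy to confirm numerically for an arbitrary admissible pair $E\perp B$ (a Python sketch):

import numpy as np

def cross_mat(v):
    # the antisymmetric matrix representing v x (.)
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0.0]])

E, B = np.array([1.0, 2.0, 0.0]), np.array([-2.0, 1.0, 3.0])  # E.B = 0
M = np.block([[np.zeros((3, 3)), cross_mat(E)], [cross_mat(B), np.zeros((3, 3))]])
print(np.linalg.matrix_rank(M))      # 4, so dim ker M = 2
print(np.linalg.matrix_rank(M @ M))  # 2, so dim ker M^2 = 4
print(np.abs(M @ M @ M).max())       # 0: M^3 = 0, M is nilpotent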


calculus - How to calculate $\int \frac{\sin^{6}(x)}{\sin^{6}(x) + \cos^{6}(x)} dx?$



How to calculate
$$ \int \frac{\sin^{6}(x)}{\sin^{6}(x) + \cos^{6}(x)} dx? $$






I already know one possible way, that is by :
$$ \int \frac{\sin^{6}(x)}{\sin^{6}(x) + \cos^{6}(x)} dx = \int 1 - \frac{\cos^{6}(x)}{\sin^{6}(x) + \cos^{6}(x)} dx $$
$$= x- \int \frac{1}{1+\tan^{6}(x)} dx $$
Then letting $u=\tan(x)$, we must solve
$$\int \frac{1}{(1+u^{6})(1+u^{2})} du $$
We can reduce the denominator and solve it using the partial fraction technique. This is quite tedious; I wonder if there is a better approach.






Using same approach, for simpler problem, I get
$$\int \frac{\sin^{3}(x)}{\sin^{3}(x)+\cos^{3}(x)} dx = \frac{x}{2} - \frac{\ln(1+\tan(x))}{6} + \frac{\ln(\tan^{2}(x)- \tan(x)+1)}{3} - \frac{\ln(\sec(x))}{2} + C$$


Answer




Let us take:
$$I=\int \frac{\sin^{6}(x)}{\sin^{6}(x) + \cos^{6}(x)} dx$$
then
$$I=\int \frac{-\cos^{6}(x)}{\sin^{6}(x) + \cos^{6}(x)} dx+x$$
giving$$2I=\int \frac{\sin^{6}(x)-\cos^{6}(x)}{\sin^{6}(x) + \cos^{6}(x)} dx+x$$
This can be written as (using the factorizations of $a^3-b^3$ and $a^3+b^3$)
$$2I= \int\frac{(\sin^2(x)-\cos^2(x))(1-\sin^2(x)\cos^2(x))}{(1-\sqrt3\sin(x)\cos(x))(1+\sqrt3\sin(x)\cos(x))}dx+x$$



$$
2I=\frac{1}{2}\left(\int\frac{(\sin^2(x)-\cos^2(x))(1-\sin^2(x)\cos^2(x))}{1+\sqrt3\sin(x)\cos(x)}dx
+\int\frac{(\sin^2(x)-\cos^2(x))(1-\sin^2(x)\cos^2(x))}{1-\sqrt3\sin(x)\cos(x)}dx\right)+x
$$
Evaluating the integrals separately, using $u=1+\sqrt3\sin(x)\cos(x)$
for the first one, gives



$$\int\frac{(\sin^2(x)-\cos^2(x))(1-\sin^2(x)\cos^2(x))}{1+\sqrt3\sin(x)\cos(x)}dx=\frac{1}{\sqrt3}\int\frac{(\sin(x)\cos(x)-1)(\sin(x)\cos(x)+1)}{u}du$$
Now use $\sin(x)\cos(x)=\frac{u-1}{\sqrt3}$
which will evaluate the integral as $\frac{u^2}{6\sqrt3}-\frac{2u}{3\sqrt3}-\frac{2\ln(u)}{3\sqrt3}$. A similar approach works for the other integral, with $v=1-\sqrt3\sin(x)\cos(x)$.



The final value is

$$I=\frac{x}{2}-\frac{\sin(x)\cos(x)}{6}+\frac{\ln(1-\sqrt3\sin(x)\cos(x))}{6\sqrt3}-\frac{\ln(1+\sqrt3\sin(x)\cos(x))}{6\sqrt3}+C$$
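The closed form can be validated by comparing a definite integral against the difference of the antiderivative (a sketch assuming SciPy; note that $1\pm\sqrt3\sin x\cos x>0$ always, since $|\sin x\cos x|\le 1/2$):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.sin(x)**6 / (np.sin(x)**6 + np.cos(x)**6)
def F(x):
    s = np.sin(x)*np.cos(x)
    return x/2 - s/6 + (np.log(1 - np.sqrt(3)*s) - np.log(1 + np.sqrt(3)*s))/(6*np.sqrt(3))

val, _ = quad(f, 0, 1.2)
print(val, F(1.2) - F(0))  # the two values agree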


Friday, September 20, 2019

algebra precalculus - Why $\sqrt{-1 \times {-1}} \neq \sqrt{-1}^2$?



I know there must be something unmathematical in the following but I don't know where it is:



\begin{align}

\sqrt{-1} &= i \\ \\
\frac1{\sqrt{-1}} &= \frac1i \\ \\
\frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\ \\
\sqrt{\frac1{-1}} &= \frac1i \\ \\
\sqrt{\frac{-1}1} &= \frac1i \\ \\
\sqrt{-1} &= \frac1i \\ \\
i &= \frac1i \\ \\
i^2 &= 1 \\ \\
-1 &= 1 \quad !!!
\end{align}



Answer



Between your third and fourth lines, you use $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$. This is only (guaranteed to be) true when $a\ge 0$ and $b>0$.



edit: As pointed out in the comments, what I meant was that the identity $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$ has domain $a\ge 0$ and $b>0$. Outside that domain, applying the identity is inappropriate, whether or not it "works."



In general (and this is the crux of most "fake" proofs involving square roots of negative numbers), $\sqrt{x}$ where $x$ is a negative real number ($x<0$) must first be rewritten as $i\sqrt{|x|}$ before any other algebraic manipulations can be applied (because the identities relating to manipulation of square roots [perhaps exponentiation with non-integer exponents in general] require nonnegative numbers).



This similar question, focused on $-1=i^2=(\sqrt{-1})^2=\sqrt{-1}\sqrt{-1}\overset{!}{=}\sqrt{-1\cdot-1}=\sqrt{1}=1$, is using the similar identity $\sqrt{a}\sqrt{b}=\sqrt{ab}$, which has domain $a\ge 0$ and $b\ge 0$, so applying it when $a=b=-1$ is invalid.


algebra precalculus - An incorrect method to sum the first $n$ squares which nevertheless works


Start with the identity


$\sum_{i=1}^n i^3 = \left( \sum_{i = 1}^n i \right)^2 = \left(\frac{n(n+1)}{2}\right)^2$.


Differentiate the left-most term with respect to $i$ to get


$\frac{d}{di} \sum_{i=1}^n i^3 = 3 \sum_{i = 1}^n i^2$.


Differentiate the right-most term with respect to $n$ to get


$\frac{d}{dn} \left(\frac{n(n+1)}{2}\right)^2 = \frac{1}{2}n(n+1)(2n+1)$.



Equate the derivatives, obtaining


$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$,


which is known to be correct.


Is there any neat reason why this method happens to get lucky and work for this case?


Answer



Let $f_k(n)=\sum_{i=1}^n i^k$. We all know that $f_k$ is actually a polynomial of degree $k+1$. Also $f_k$ can be characterised by the two conditions: $$f_k(x)-f_k(x-1)=x^k$$ and $$f_k(0)=0.$$ Differentiating the first condition gives $$f_k'(x)-f_k'(x-1)=k x^{k-1}.$$ Therefore the polynomial $(1/k)f_k'$ satisfies the first of the two conditions that $f_{k-1}$ does. But it may not satisfy the second. But then $(1/k)(f_k'(x)-f_k'(0))$ does. So $$f_{k-1}(x)=\frac{f_k'(x)-f_k'(0)}k.$$


The mysterious numbers $f_k'(0)$ are related to the Bernoulli numbers, and when $k\ge3$ is odd they obligingly vanish...
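For $k=3$ this is exactly the observed coincidence, since $f_3'(0)=0$ (a sketch assuming SymPy):

from sympy import symbols, diff, expand

n = symbols('n')
f3 = (n*(n + 1)/2)**2           # sum of the first n cubes
f2 = n*(n + 1)*(2*n + 1)/6      # sum of the first n squares
print(expand(diff(f3, n)/3 - f2))  # 0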


infinity - Indeterminate form as a series

We know that $0 \times \infty$ is an indeterminate form. However, is it equivalent to $0 + 0 + 0 + \cdots$? If yes, why do we not consider $\displaystyle \sum_{n = 0}^\infty 0$ an indeterminate form?



--EDIT--



We can also write the sum for any constant $k$ from $0$ to $n$ as $k(n+1)$




So, $\displaystyle \sum_{n=0}^\infty 0 = 0 \times (\infty + 1)$ which is an IF.



Why does WolframAlpha say that it is convergent?



Thank you,

algebra precalculus - What is $\sqrt{-4}\sqrt{-9}$?




I assumed that since $a^c \cdot b^c = (ab)^{c}$, then something like $\sqrt{-4} \cdot \sqrt{-9}$ would be $\sqrt{-4 \cdot -9} = \sqrt{36} = \pm 6$ but according to Wolfram Alpha, it's $-6$?


Answer



The property $a^c \cdot b^c = (ab)^{c}$ that you mention only holds for integer exponents and nonzero bases. Since $\sqrt{-4} = (-4)^{1/2}$, you cannot use this property here.



Instead, use imaginary numbers to evaluate your expression:



$$

\begin{align*}
\sqrt{-4} \cdot \sqrt{-9} &= (2i)(3i) \\
&= 6i^2 \\
&= \boxed{-6}
\end{align*}
$$
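Python's cmath follows the same principal-branch convention (a quick demo):

import cmath
print(cmath.sqrt(-4), cmath.sqrt(-9), cmath.sqrt(-4)*cmath.sqrt(-9))  # 2j 3j (-6+0j)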


Radius of convergence of the series $\sum\limits_{n=1}^\infty (-1)^n\frac{1\cdot3\cdot5\cdots(2n-1)}{3\cdot6\cdot9\cdots(3n)}x^n$



How do I find the radius of convergence for this series:



$$ \sum_{n=1}^\infty (-1)^n\dfrac{1\cdot3\cdot5\cdot\cdot\cdot(2n-1)}{3\cdot6\cdot9\cdot\cdot\cdot(3n)}x^n $$




Treating it as an alternating series, I got
$$x< \dfrac{n+1}{2n+1}$$



And absolute convergence tests yield
$$-\dfrac{1}{2}<x<\dfrac{1}{2}$$

I feel like it's simpler than I expect but I just can't get it. How do I do this?



Answer in book: $\dfrac{3}{2}$


Answer




The ratio test allows to determine the radius of convergence.



For $n \in \mathbb{N}^{\ast}$, let :



$$ a_{n} = (-1)^{n}\frac{1 \times 3 \times \ldots \times (2n-1)}{3 \times 6 \times \ldots \times (3n)}. $$



Then,



$$ \begin{align*}
\frac{\vert a_{n+1} \vert}{\vert a_{n} \vert} &= {} \frac{1 \times 3 \times \ldots \times (2n-1) \times (2n+1)}{3 \times 6 \times \ldots \times (3n) \times (3n+3)} \times \frac{3 \times 6 \times \ldots \times (3n)}{1 \times 3 \times \ldots \times (2n-1)} \\[2mm]

&= \frac{2n+1}{3n+3} \; \mathop{\longrightarrow} \limits_{n \to +\infty} \; \frac{2}{3}.
\end{align*}
$$



Since the ratio $\displaystyle \frac{\vert a_{n+1} \vert}{\vert a_{n} \vert}$ converges to $\displaystyle \frac{2}{3}$, we can conclude that $R = \displaystyle \frac{3}{2}$.


multivariable calculus - Deriving a higher-order Euler-Lagrange equation.


I've been able to derive the Euler-Lagrange equation for $$\int_a^b F(x,y,y')dx$$ relatively easily by using the total derivative and integration by parts. However, I was unable to apply the same methods while trying to derive the Euler-Lagrange equation for $$\int_a^b F(x,y,y',y'')dx$$ A proof would be great, but a proper name for this type of equation would also be appreciated (without a name, scouring the internet for an Euler-Lagrange equation of this form has been quite difficult).


Answer



Start by setting up a variation $y_\epsilon (x) = y(x) + \epsilon h(x)$ where $\epsilon$ is some small real number and $h(x)$, together with $h'(x)$, is zero on the boundaries (the second condition is needed for the second-order term below). You now have a functional of the form


$$\int_a^b f(x,y_\epsilon,y'_\epsilon,y''_\epsilon)dx.$$


Differentiate with respect to $\epsilon$ and use the Leibniz rule.



$$\frac{d}{d \epsilon} \int_a^b f(x,y_\epsilon,y'_\epsilon,y''_\epsilon)dx = \int_a^b \frac{\partial}{\partial \epsilon} f(x,y_\epsilon,y'_\epsilon,y''_\epsilon)dx .$$


Note that


$$ \frac{\partial}{\partial \epsilon} f(x,y_\epsilon,y'_\epsilon,y''_\epsilon) = \frac{\partial f}{\partial x}\frac{\partial x}{\partial \epsilon} + \frac{\partial f}{\partial y_\epsilon}\frac{\partial y_\epsilon}{\partial \epsilon} + \frac{\partial f}{\partial y'_\epsilon}\frac{\partial y'_\epsilon}{\partial \epsilon} + \frac{\partial f}{\partial y''_\epsilon}\frac{\partial y''_\epsilon}{\partial \epsilon}$$


Since $x$ does not depend on $\epsilon$ (so the first term vanishes), while $\frac{\partial y_\epsilon}{\partial \epsilon} = h$, $\frac{\partial y'_\epsilon}{\partial \epsilon} = h'$ and $\frac{\partial y''_\epsilon}{\partial \epsilon} = h''$, it follows that


$$ \frac{\partial}{\partial \epsilon} f(x,y_\epsilon,y'_\epsilon,y''_\epsilon) = \frac{\partial f}{\partial y_\epsilon}h(x) + \frac{\partial f}{\partial y'_\epsilon} h'(x) + \frac{\partial f}{\partial y''_\epsilon} h''(x)$$


Your functional now looks like


$$\int_a^b \left( \frac{\partial f}{\partial y_\epsilon}h(x) + \frac{\partial f}{\partial y'_\epsilon} h'(x) + \frac{\partial f}{\partial y''_\epsilon} h''(x) \right) dx.$$


Since you are looking for a stationary point, set it equal to zero and now evaluate it at $\epsilon = 0$.


$$0 = \int_a^b \left( \frac{\partial f}{\partial y}h(x) + \frac{\partial f}{\partial y'} h'(x) + \frac{\partial f}{\partial y''} h''(x) \right) dx.$$


Use integration by parts on the second and third terms. I will only perform it on the third term since you state you've derived the simpler case before. This term must be integrated by parts twice.



$$ \int_a^b \frac{\partial f}{\partial y''} h''(x) dx.$$


Choose $u= \frac{\partial f}{\partial y''}$ and $v'= h''(x) $ so $u' = \frac{d}{dx} \left( \frac{\partial f}{\partial y''} \right)$ and $v = h'(x)$.


The resulting integration by parts is


$$ \frac{\partial f}{\partial y''} h'(x) \bigg|_a^b - \int_a^b \frac{d}{dx} \left( \frac{\partial f}{\partial y''}\right) h'(x) dx$$


The first term is zero since $h'(x)$ is zero on the boundaries. The second term must be integrated by parts again, but now with


$$ u = \frac{d}{dx}\left( \frac{\partial f}{\partial y''} \right)$$


$$u' = \frac{d^2}{dx^2}\left( \frac{\partial f}{\partial y''} \right)$$


$$v = h(x)$$


$$v' = h'(x)$$


I will leave the substitutions to you. The functional now reads:



$$ 0= \int_a^b \left[ \frac{\partial f}{\partial y} - \frac{d}{dx} \left( \frac{\partial f}{\partial y'} \right) + \frac{d^2}{dx^2} \left( \frac{\partial f}{\partial y''} \right) \right] h(x) dx $$


This must hold for every admissible $h(x)$, so by the fundamental lemma of the calculus of variations your Euler-Lagrange equation is


$$ 0= \frac{\partial f}{\partial y} - \frac{d}{dx} \left( \frac{\partial f}{\partial y'} \right) + \frac{d^2}{dx^2} \left( \frac{\partial f}{\partial y''} \right) $$
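For what it's worth, this higher-order equation is sometimes called the Euler-Poisson equation, and SymPy's euler_equations handles higher derivatives too (a sketch):

from sympy import Function, symbols, Derivative
from sympy.calculus.euler import euler_equations

x = symbols('x')
y = Function('y')
F = Derivative(y(x), x, 2)**2 / 2   # a Lagrangian depending only on y''
print(euler_equations(F, y(x), x))  # [Eq(Derivative(y(x), (x, 4)), 0)], i.e. y'''' = 0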


Thursday, September 19, 2019

elementary number theory - Prove: $\gcd(n^a-1,n^b-1)=n^{\gcd(a,b)}-1$

I'm trying to prove the following statement:
$$\forall_{a,b\in\Bbb{N^{+}}}\gcd(n^a-1,n^b-1)=n^{\gcd(a,b)}-1$$



As for now I managed to prove that $n^{\gcd(a,b)}-1$ divdes $n^a-1$ and $n^b-1$:



Without loss of generality let $a>b$ (if $a=b$, the result is obvious), $n\neq1$ ($\gcd(0,0)$ makes no sense). Then if we let $a=kd, b=jd, d=\gcd(a,b)$, we see that for $n^{kd}-1$ we have $\frac{(n^d)^k-1}{n^d-1}=1+n^d+...+n^{d(k-1)}=C$ (it's a finite geometric series), so $n^a-1=(n^d-1)C$. Same works for $n^b-1$, so $n^d-1$ divides both of these numbers.




How can I prove that $n^d-1$ not only divides both numbers, but is the greatest common divisor?
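The identity is easy to spot-check numerically before proving it (a Python sketch):

from math import gcd

for n in (2, 3, 5, 10):
    for a in range(1, 8):
        for b in range(1, 8):
            assert gcd(n**a - 1, n**b - 1) == n**gcd(a, b) - 1
print("checked")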

sequences and series - Show that $S$ has the same cardinality with the set of real numbers.



Suppose that $a_n$ is a sequence with values in $\mathbb R$ which is not ultimately constant.



Let $S$ be the set of the subsequences of $a_n$.







Question: Show that $S$ has the same cardinality with the set of real numbers.






Attempt: First, I tried to write $S$ as $S=\{(a_{f(n)})_{n\in\mathbb N}\;|\; f:\mathbb N\to\mathbb N \text{ is strictly increasing}\}$.



If two sets have the same cardinality, there must be a bijective mapping between them. If I am able to show there are uncountably many such increasing $f$, then I am done, but I could not. Besides, how can one prove this using my approach or some other method?







Comment:
Moreover, I can understand intuitively that there cannot be countably many such subsequences, since $a_n$ is not convergent; that is, it does not settle on any one value but keeps changing.


Answer



All but countably many of the $2^{\aleph_0}$ zero-one sequences are not ultimately constant (we discard only those that are constant from some term on), and these give rise to that many distinct subsequences, hence
$\mathfrak{c}=2^{\aleph_0}\leq|{S}|\leq\mathfrak{c}^{\aleph_0}=\mathfrak{c}$


integration - Difficult trigonometric integral. [Solved]



It was originally asked here. This was also asked here.




I am having big difficulties solving the following integral:




$$ I=\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta~\sin\theta\int_{0}^{\infty}dr~r^2\frac{3x^2y^2\cos(u r \sin\theta \cos\phi)\cos^2\theta}{(y^2\cos^2\phi+x^2\sin^2\phi)\sin^2\theta+x^2y^2\cos^2\theta}\mathrm e^{-\frac{r^2}{2}} \tag{1}, $$




where $x$, $y$, and $u$ are real positive constants. I tried at least two ways to solve this integral:





  • First attempt:



I began by solving the $r$ integral first, using Mathematica:



$$ I=\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta~\sin\theta\frac{3x^2y^2(1-u^2\sin^2\theta\cos^2\phi)\cos^2\theta}{(y^2\cos^2\phi+x^2\sin^2\phi)\sin^2\theta+x^2y^2\cos^2\theta}\mathrm e^{-\frac{u^2}{2}\sin^2\theta\cos^2\phi} \tag{2}. $$



After that, I looked for a solution of the $\phi$ integral. My best attempt was:



$$ I_\phi(x,y,u,\theta)=\frac{2}{B}\left[B\left(\frac{1}{2}\right)F_1\left(\frac{1}{2},1,-;1;\nu,-\frac{a}{2}\right)-aB\left(\frac{1}{2}\right)F_1\left(\frac{3}{2},1,-;2;\nu,-\frac{a}{2}\right)\right], $$




where $B=x^2\sin^2\theta+x^2y^2\cos^2\theta$, $a=u^2\sin^2\theta$, and $\nu=\frac{x^2-y^2}{x^2+x^2y^2\cot^2\theta}$. In this way, the final result is something like this:



$$ I= \int_{0}^{\pi} \mathrm d \theta~3x^2y^2\sin\theta \cos^2\theta~ I_\phi(x,y,u,\theta). \tag{3}. $$



Eq. $(3)$ cannot be further simplified in general and is the final result.








  • Second attempt:



To avoid the hypergeometric function $F_1$, I tried to start with the $\phi$ integral. In this case, my initial problem is an integral of the following form:



$$ \int_{0}^{2\pi} \mathrm d \phi \frac{\cos(A \cos\phi)}{a^2\cos^2\phi+b^2\sin^2\phi}. \tag{4} $$



This integral $(4)$ can be solved by series (see Vincent's answer and Jack's answer). However those solutions, at least for me, do not have a closed form. This is where my second attempt ends :(







What is the point? It turns out that someone has managed to solve the integral $(1)$, at least the integrals in $r$ and $\phi$. The final result found by this person was:




$$ I_G=\frac{12 \pi x~y}{(1-x^2)^{3/2}}\int_{0}^{\sqrt{1-x^2}} \mathrm dk \frac{k^2 \exp\left(-\frac{u^2}{2}\frac{x^2k^2}{(1-x^2)(1-k^2)}\right)}{\sqrt{1-k^2}\sqrt{1-k^2\frac{1-y^2}{1-x^2}}}, $$




where, I believe, $k=\sqrt{1-x^2}\cos\theta$. As you can see from the following Mathematica code,



IG[x_, y_, u_] :=
  Sqrt[Pi/2] NIntegrate[
    (12 Pi x y)/(1 - x^2)^(3/2) *
     (v^2 Exp[-(u^2 x^2 v^2)/(2 (1 - x^2) (1 - v^2))])/
     (Sqrt[1 - v^2] Sqrt[1 - v^2 (1 - y^2)/(1 - x^2)]),
    {v, 0, Sqrt[1 - x^2]}]

IG[.3, .4, 1]
(* 4.53251 *)

(* The symbol I is Protected in Mathematica (the imaginary unit),
   so the original integral is defined here as Iorig instead. *)
Iorig[x_, y_, u_] :=
  NIntegrate[
    (r^2 Sin[a] Cos[u r Sin[a] Cos[b]] 3 x^2 y^2 Cos[a]^2 Exp[-r^2/2])/
     ((y^2 Cos[b]^2 + x^2 Sin[b]^2) Sin[a]^2 + x^2 y^2 Cos[a]^2),
    {r, 0, Infinity}, {a, 0, Pi}, {b, 0, 2 Pi}]

Iorig[.3, .4, 1]
(* 4.53251 *)


the integrals $I$ and $I_G$ are equal. This is expected, since they emerge from the same physical problem.



So, my question is: what are the steps that take the integral $I$ to the integral $I_G$?







Edit



Since my question has not been solved yet (I think because it is a tough one), I will show a particular case of the integral $I$, letting $u=0$. I hope this helps you help me.



In this case, the $r$ integral in $(1)$ is trivial and the integral takes the form:



$$ I_P=\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta~\sin\theta\frac{3x^2y^2\cos^2\theta}{(y^2\cos^2\phi+x^2\sin^2\phi)\sin^2\theta+x^2y^2\cos^2\theta}. \tag{5} $$



The $\phi$ integral can be evaluated with the help of Eq. 3.642.1 in Gradshteyn and Ryzhik's tables of integrals. Thereby, $I_P$ takes the form:




$$ I_P=3xy\int_{0}^{\pi}d\theta\frac{\sin\theta\cos^2\theta}{\sqrt{1+(x^2-1)\cos^2\theta}\sqrt{1+(y^2-1)\cos^2\theta}}. \tag{6}$$



Now the change of variable $k=\sqrt{1-x^2}\cos\theta$ brings expression $(6)$ to the form



$$ I_P= \frac{(const) x~y}{(1-x^2)^{3/2}}\int_{0}^{\sqrt{1-x^2}} \mathrm dk \frac{k^2}{\sqrt{1-k^2}\sqrt{1-k^2\frac{1-y^2}{1-x^2}}}. $$



Did you notice how $I_G$ and $I_P$ are similar? Do you think a similar approach can be applied to my original problem? Please, let me know.



Edit 2




The integral $(1)$ is also evaluated in Appendix A.4 of this thesis. However, there he used cylindrical symmetry.



Edit: ended



My bounty has ended, and unfortunately I don't have enough reputation to offer another one. My question was not solved; perhaps solving it requires some physical considerations. Anyway, I thank all who helped me. If I manage to solve it, I will post the solution here.











I've solved this problem by applying the Schwinger proper-time substitution: $$\frac{1}{q^2}=\int_{0}^{\infty}\mathrm{d\xi}~\mathrm{e^{-q^2\xi}} $$



Answer



Here is an outline of the approach I have taken to solve this integral.



First rewrite the integral $(1)$ in Cartesian variables:



$$I=\int_{-\infty}^{\infty} \mathrm{d}^3v~ \frac{3x^2y^2v_z^2}{y^2v_x^2+x^2v_y^2+x^2y^2v_z^2}\cos(uv_x)\exp\left(-\frac{v_x^2}{2}-\frac{v_y^2}{2}-\frac{v_z^2}{2}\right). $$




Now use the following substitution



$$ \frac{1}{y^2v_x^2+x^2v_y^2+x^2y^2v_z^2}=\int_{0}^{\infty}d\tau~\mathrm{ e^{-(y^2v_x^2+x^2v_y^2+x^2y^2v_z^2)\tau}},$$



such that



$$ I=\int_{0}^{\infty}d\tau\int_{-\infty}^{\infty} \mathrm{d}^3v~3x^2y^2v_z^2\cos(uv_x) \mathrm{e^{-v_x^2(\tau y^2+1/2)-v_y^2(\tau x^2+1/2)-v_z^2(\tau x^2y^2+1/2)}}. $$



The $(v_x,v_y,v_z)$ integrals can be evaluated with the help of Mathematica. The result is




$$ \int_{-\infty}^{\infty} \mathrm{d}^3v~3x^2y^2v_z^2\cos(uv_x) \mathrm{e^{-\alpha v_x^2-\beta v_y^2-\gamma v_z^2}}=\frac{3\pi^{3/2}}{2x^2y^2}\frac{\exp\left(-\frac{u^2}{4y^2}\frac{1}{\tau+1/2y^2}\right)}{\left(\tau+1/2x^2y^2\right)^{3/2}\left(\tau+1/2x^2\right)^{1/2}\left(\tau+1/2y^2\right)^{1/2}}.$$



Thereby,



$$ I= \frac{3\pi^{3/2}}{2x^2y^2}\int_{0}^{\infty}\mathrm{d}\tau\,\frac{\exp\left(-\frac{u^2}{4y^2}\frac{1}{\tau+1/2y^2}\right)}{\left(\tau+1/2x^2y^2\right)^{3/2}\left(\tau+1/2x^2\right)^{1/2}\left(\tau+1/2y^2\right)^{1/2}}. $$



Now, performing the substitution $\tau=\frac{1-x^2}{2x^2y^2k^2}-\frac{1}{2x^2y^2}$ gives us




$$ I=(\mathrm{const})~\frac{3x~y}{(1-x^2)^{3/2}}\int_{0}^{\sqrt{1-x^2}}\mathrm{dk}\frac{k^2\exp\left(-\frac{u^2}{2}\frac{x^2k^2}{\left(1-x^2\right)\left(1-k^2\right)}\right)}{\sqrt{1-k^2}\sqrt{1-k^2\frac{1-y^2}{1-x^2}}}, $$





which is the desired integral up to a constant. ;)
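For what it's worth, the two forms can also be compared numerically in Python/SciPy, mirroring the Mathematica check above. A sketch: the prefactor $\sqrt{\pi/2}\cdot 12\pi xy/(1-x^2)^{3/2}$ is taken from that snippet, and the infinite $r$-range is truncated at $r=10$, where $e^{-r^2/2}$ is negligible (the triple integral may take a little while):

import numpy as np
from scipy import integrate

x, y, u = 0.3, 0.4, 1.0

# original triple integral (1); tplquad's argument order is (inner, mid, outer)
def f(r, th, ph):
    num = 3 * x**2 * y**2 * r**2 * np.sin(th) * np.cos(th)**2 \
          * np.cos(u * r * np.sin(th) * np.cos(ph)) * np.exp(-r**2 / 2)
    den = (y**2 * np.cos(ph)**2 + x**2 * np.sin(ph)**2) * np.sin(th)**2 \
          + x**2 * y**2 * np.cos(th)**2
    return num / den

I_orig, _ = integrate.tplquad(f, 0, 2 * np.pi, 0, np.pi, 0, 10)

# final one-dimensional k-integral
def g(k):
    e = np.exp(-u**2 / 2 * x**2 * k**2 / ((1 - x**2) * (1 - k**2)))
    return k**2 * e / (np.sqrt(1 - k**2) * np.sqrt(1 - k**2 * (1 - y**2) / (1 - x**2)))

I_final, _ = integrate.quad(g, 0, np.sqrt(1 - x**2))
I_final *= np.sqrt(np.pi / 2) * 12 * np.pi * x * y / (1 - x**2)**1.5

print(I_orig, I_final)  # both should come out close to 4.53251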


Wednesday, September 18, 2019

elementary number theory - If $\gcd(a, b) = 1$, then $\gcd(ab, c) = \gcd(a, c) \cdot \gcd(b, c)$



How can I prove that if $\gcd(a, b) = 1$, then
$\gcd(ab, c) = \gcd(a, c) \times \gcd(b, c)$?



By the extended Euclidean algorithm, there exist $x,y$ with $ax+by=1$ since $\gcd(a,b)=1$, so $a$ and $b$ are coprime. There also exist $k,j$ with $dk=a$ and $dj=b$, where $d=\gcd(a,b)=1$. This is all the information I have gathered from the question, but I don't know how to approach and solve it. Can anyone help explain to me how to arrive at the answer? Thanks!


Answer



Without using primes.
We show that $(ab,c) \mid (a,c)(b,c)$ and that $(a,c)(b,c)\mid (ab,c) $.




We have $ax+by=1$; multiplying by $c$ we get
$acx+bcy=c$.



Now $$(a,c)(b,c)\left[\frac{a}{(a,c)}\frac{c}{(b,c)}x+\frac{b}{(b,c)}\frac{c}{(a,c)}y\right]=c$$
where of course $\frac{a}{(a,c)}$ etc are integers. So $(a,c)(b,c)\mid c$.
It is clear that $(a,c)(b,c)\mid ab$ since $(a,c)\mid a$ and $(b,c)\mid b$. And therefore we have
$(a,c)(b,c)\mid (ab,c) $.



To show the other direction note that there are $p,q,r,s$ such that




$$ap+qc=(a,c)$$
and $$br+cs=(b,c)$$
thus
$$(a,c)(b,c)=abpr +(aps+brq+qsc)c,$$ and the right-hand side is divisible by $(ab,c)$.
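A quick brute-force check of the statement in Python (a sketch over small, arbitrary ranges):

from itertools import product
from math import gcd

# check gcd(a*b, c) == gcd(a, c) * gcd(b, c) whenever gcd(a, b) == 1
for a, b, c in product(range(1, 40), repeat=3):
    if gcd(a, b) == 1:
        assert gcd(a * b, c) == gcd(a, c) * gcd(b, c)
print("all checks passed")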


algorithms - How to find the inverse of $n\log n$?

So I'm on chapter $1$ of Introduction to Algorithms, and at the end the book proposes a problem: here



The answers are there, and I was able to work through most of them myself, despite my lack of math skills, by finding the inverse of each $f(n)$ in the leftmost column. For instance, $n^3$ has an inverse of $\sqrt[3]{n}$, but what is the inverse of $n \log_2 n$?



However, there is the entry $n\log_2 n = 1000000$ microseconds, for which I cannot figure out how to solve for $n$ (the given answer is $n = 62746$). I'm writing Python code so that I don't screw up my calculations as I move through the table, but I have no idea how to compute this one.



I'm not very math savvy so if you use notation, can you please name it so I can google it?



If it's not possible to invert analytically, can someone please explain how $62746$ was calculated for that row/column combination?




Thank you for your help.
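Since $n\log_2 n = 10^6$ has no inverse in elementary functions, one standard approach is a numerical search. A minimal Python sketch (binary search for the largest $n$ with $n\log_2 n \le 10^6$, which reproduces the table entry):

import math

def max_n(limit):
    # largest integer n with n * log2(n) <= limit;
    # binary search works because n * log2(n) is increasing for n >= 2
    lo, hi = 2, limit
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * math.log2(mid) <= limit:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(max_n(1_000_000))  # 62746

(Analytically, $n\log_2 n = L$ can be inverted with the Lambert $W$ function as $n = e^{W(L\ln 2)}$, but a numeric search is simpler in code.)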

sequences and series - How to prove $\sum_{k=1}^n \cos(\frac{2 \pi k}{n}) = 0$ for any $n>1$?


I can show for any given value of n that the equation


$$\sum_{k=1}^n \cos(\frac{2 \pi k}{n}) = 0$$


is true, and I can see that it is true geometrically. However, I cannot seem to prove it analytically. I have spent most of my time trying induction and converting the cosine to a sum of complex exponential functions


$$\frac{1}{2}\sum_{k=1}^n [\exp(\frac{i 2 \pi k}{n})+\exp(\frac{-i 2 \pi k}{n})] = 0$$


and using the conversion for finite geometric sequences


$$S_n = \sum_{k=1}^n r^k = \frac{r(1-r^n)}{(1-r)}$$


I have even tried a suggestion I have seen on the net, pulling out a factor of $\exp(i \pi k)$, but I have still not gotten zero.


Please assist.



Answer



We have $$\sum_{k=1}^{n}\cos\left(\frac{2\pi k}{n}\right)=\textrm{Re}\left(\sum_{k=1}^{n}e^{2\pi ik/n}\right) $$ and so $$\sum_{k=1}^{n}e^{2\pi ik/n}=\frac{e^{2\pi i/n}\left(1-e^{2\pi i}\right)}{1-e^{2\pi i/n}} $$ and notice that $$e^{2\pi i}=\cos\left(2\pi\right)+i\sin\left(2\pi\right)=1 $$ so the claim follows.
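A quick numerical illustration of the identity (a sketch in Python/NumPy; the sums come out as zero up to floating-point error):

import numpy as np

for n in range(2, 9):
    k = np.arange(1, n + 1)
    print(n, np.cos(2 * np.pi * k / n).sum())  # ~1e-16, i.e. zero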


linear algebra - Why are elementary row operations useful?

When Elementary Row Operations (EROs) are taught in an introductory setting (for instance, a one-semester multi-department undergrad course), they are often presented as steps in "mental algorithms" to find the inverse of a matrix, or to solve systems of equations. "How to use EROs" is often far more emphasized than "What EROs are". For those who share this experience, row operations remain mysterious and intriguing.



How does one understand what an Elementary Row Operation is? Can EROs be described in such a way that makes their usefulness more apparent?

Tuesday, September 17, 2019

radicals - Prove that the square root of 3 is irrational

I'm trying to do this proof by contradiction. I know I have to use a lemma to establish that if $x^2$ is divisible by $3$, then $x$ is divisible by $3$. The lemma is the easy part. Any thoughts? How should I extend the proof for this to the square root of $6$?

Monday, September 16, 2019

sequences and series - If roots of $x^6=p(x)$ are given then choose the correct option


If $x_1,x_2,x_3,x_4,x_5,x_6$ are real positive roots of the equation $x^6=p(x)$, where $p(x)$ is a degree-$5$ polynomial, $\frac{x_1}{2}+\frac{x_2}{3}+\frac{x_3}{4}+\frac{x_4}{9}+\frac{x_5}{8}+\frac{x_6}{27}=1$, and $p(0)=-1$, then choose the correct option(s):


$(A)$ $x_5-x_1=x_3x_4$


$(B)$ Product of roots of $P(x)=0$ is $\frac{6}{53}$



$(C)$ $x_2,x_4,x_6$ are in Geometric Progeression


$(D)$ $x_1,x_2,x_3$ are in Arithmetic Progression


Now $x_1,x_2,x_3,x_4,x_5,x_6$ are real positive roots of the equation $x^6=p(x)$, so I wrote it as


$x^6-p(x)=(x-x_1)(x-x_2)(x-x_3)(x-x_4)(x-x_5)(x-x_6)$, but I do not see how to use the condition $\frac{x_1}{2}+\frac{x_2}{3}+\frac{x_3}{4}+\frac{x_4}{9}+\frac{x_5}{8}+\frac{x_6}{27}=1$ to get the answer. Could someone please help me with this?


Answer



$P(0)=-1$ gives $x_1 x_2 x_3 x_4 x_5 x_6 =1$. Now apply AM-GM to $\frac{x_1}{2}+\frac{x_2}{3}+\frac{x_3}{4}+\frac{x_4}{9}+\frac{x_5}{8}+\frac{x_6}{27}=1$: \begin{eqnarray*} 1 = \frac{x_1}{2 }+ \frac{x_2}{3 }+ \frac{x_3}{ 4}+ \frac{x_4}{9 }+ \frac{x_5}{8 } + \frac{x_6}{27 } \geq 6\sqrt[6]{\frac{x_1 x_2 x_3 x_4 x_5 x_6}{2\cdot3\cdot4\cdot9\cdot8\cdot27}} = 6\sqrt[6]{\frac{1}{6^6}} = 1. \end{eqnarray*} For this bound to be attained, each of the terms in the sum must equal $1/6$, so we have \begin{eqnarray*} x_1= \frac{1}{3} ,x_2= \frac{1}{2} ,x_3= \frac{2}{3} ,x_4= \frac{3}{2} ,x_5= \frac{4}{3} ,x_6= \frac{9}{2} . \end{eqnarray*} Quick check: $x_5-x_1=1=x_3 x_4$.


$\sum x_i =53/6$, so (B) is also true: since $p(x)=(x_1+ \cdots+x_6)x^5-\cdots-1$, the product of the roots of $p$ is $(-1)^5\cdot\frac{-1}{53/6}=\frac{6}{53}$.


$x_2,x_4,x_6 = 1/2,3/2,9/2$ are in geometric progression (common ratio $3$).


$x_1,x_2,x_3 = 1/3,1/2,2/3$ are in arithmetic progression (common difference $1/6$).


Thus all $\color{red}{4}$ statements are true.
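The claimed values are easy to verify with exact rational arithmetic (a sketch in Python):

from fractions import Fraction as F

xs = [F(1, 3), F(1, 2), F(2, 3), F(3, 2), F(4, 3), F(9, 2)]
ws = [F(1, 2), F(1, 3), F(1, 4), F(1, 9), F(1, 8), F(1, 27)]

prod = F(1)
for xi in xs:
    prod *= xi
print(prod)                                  # 1, consistent with p(0) = -1
print(sum(w * xi for w, xi in zip(ws, xs)))  # 1, the given constraint
print(sum(xs))                               # 53/6, so the product of the roots of p is 6/53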



Demonstrating $u_0$ and common difference before calculating them in arithmetic progression




I'm getting stuck on a type of exercise on arithmetic progressions that I have never done before.



$\{u_n\}$ is an arithmetic progression:




  • $u_1+ u_2 + u_3 = 9$


  • $u_{10}+ u_{11} = 40$





I then have to prove that $u_0$ and the common difference $r$ satisfy:




  • $u_0+2r = 3$


  • $2u_0 + 21r = 40$




Finally, I can calculate $u_0$ and $r$.



I first thought to calculate $u_0$ and $r$ and then verify the equalities above, but that's not what is asked in the exercise. I don't know how to approach it.




What shall I do?



Thanks for your answers


Answer



Since $u_1=u_0+r,u_2=u_0+2r,u_3=u_0+3r$, having $u_1+u_2+u_3=9$ gives you
$$(u_0+r)+(u_0+2r)+(u_0+3r)=9\Rightarrow 3u_0+6r=9\Rightarrow u_0+2r=3.$$Also, since $u_{10}=u_0+10r,u_{11}=u_0+11r$, having $u_{10}+u_{11}=40$ gives you
$$(u_0+10r)+(u_0+11r)=40\Rightarrow 2u_0+21r=40.$$
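For completeness, once the two equations are established they determine $u_0$ and $r$ by elimination:
$$u_0+2r=3\;\Rightarrow\; u_0=3-2r,\qquad 2(3-2r)+21r=40\;\Rightarrow\; 6+17r=40\;\Rightarrow\; r=2,\; u_0=-1.$$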


Sunday, September 15, 2019

integration - References on Breaking Integrals into Logarithms

I've seen that (tough) integrals may be broken into answers in logarithmic form. In other words, many integrals have an alternate answer that is in the form of a function involving logarithms. An example is this question, which gives an alternate answer in terms of logarithms.



I'd like to know much more about breaking integrals into logarithms. Is there a method that can accomplish this without luck? I've read a reference (actually pictures of a book, I believe) that stated something like any integral can be broken into this logarithmic form. I'd like to know what is known about this, and I'd be delighted if someone could reference this research.



I'm looking into an algorithm to do very tough integration, and wonder if this technique is anywhere close to feasible.

Real valued function which is continuous only on transcendental numbers

First of all, I am sorry for asking this question.



We know that $\mathbb R$ is uncountable, and the set of all transcendental numbers is also uncountable.
How can I construct a function $f(x)$ on $\mathbb R$ which is continuous only at the transcendental numbers? Is it possible?



Thanks in advance.

calculus - Fresnel integral $\int\limits_0^\infty\sin(x^2)\, dx$ calculation



I'm trying to follow a calculation of the improper Fresnel integral $\int\limits_0^\infty\sin(x^2)\,dx$.
It uses several substitutions, and there is one substitution that is not clear to me.



I could not understand how to get the right side from the left one. What substitution is done here?



$$\int\limits_0^\infty\frac{v^2}{1+v^4} dv = \frac{1}{2}\int\limits_0^\infty\frac{1+u^2}{1+u^4} du.$$







Fresnel integral calculation:



In the beginning put $x^2=t$ and then: $$\int\limits_0^\infty\sin(x^2) dx = \frac{1}{2}\int\limits_0^\infty\frac{\sin t}{\sqrt{t}}dt$$



Then changing variable in Euler-Poisson integral we have: $$\frac{2}{\sqrt\pi}\int_0^\infty e^{-tu^2}du =\frac{1}{\sqrt{t} }$$



The next step is to put this integral instead of $\frac{1}{\sqrt{t}}$.

$$\int\limits_0^\infty\sin(x^2)dx = \frac{1}{\sqrt\pi}\int\limits_0^\infty\sin(t)\int_0^\infty\ e^{-tu^2}dudt = \frac{1}{\sqrt\pi}\int\limits_0^\infty\int\limits_0^\infty \sin (t) e^{-tu^2}dtdu$$
And the inner integral $\int\limits_0^\infty \sin (t) e^{-tu^2}dt$ is equal to $\frac{1}{1+u^4}$.



The next calculation: $$\int\limits_0^\infty \frac{du}{1+u^4} = \int\limits_0^\infty \frac{v^2dv}{1+v^4} = \frac{1}{2}\int\limits_0^\infty\frac{1+u^2}{1+u^4} du = \frac{1}{2} \int\limits_0^\infty\frac{d(u-\frac{1}{u})}{u^2+\frac{1}{u^2}} $$
$$= \frac{1}{2} \int\limits_{-\infty}^{\infty}\frac{ds}{2+s^2}=\frac{1}{\sqrt2}\arctan\frac{s}{\sqrt2} \Big|_{-\infty}^\infty = \frac{\pi}{2\sqrt2} $$



In this calculation, Dirichlet's test is needed to check the convergence of the integral $\int_0^\infty\frac{\sin t}{\sqrt{t}}dt$. One also needs to justify reversing the order of integration ($du\,dt = dt\,du$): the relevant integrals exist in a Lebesgue sense, and Tonelli's theorem justifies the interchange.



The final result is $$\frac{1}{\sqrt\pi}\frac{\pi}{2\sqrt2}=\frac{1}{2}\sqrt\frac{\pi}{2}$$


Answer




Well, if one puts $v=\frac{1}{u}$ then:
$$I=\int_0^\infty\frac{v^2}{1+v^4} dv =\int_0^\infty\frac{1}{1+u^4} du$$
So summing up the two integrals from above gives:
$$2I=\int_0^\infty\frac{1+u^2}{1+u^4} du$$
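Both sides of this substitution step are easy to confirm numerically (a sketch in Python/SciPy):

import numpy as np
from scipy import integrate

I1, _ = integrate.quad(lambda v: v**2 / (1 + v**4), 0, np.inf)
I2, _ = integrate.quad(lambda u: 1 / (1 + u**4), 0, np.inf)
print(I1, I2, np.pi / (2 * np.sqrt(2)))  # all three ≈ 1.1107207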


divisibility - Show that the number $n$ is divisible by $7$

How can I prove that $n = 8709120$ is divisible by $7$?
I have tried a couple of methods, but I can't show it. Can somebody please help me?

calculus - Evaluating $\int_0^\infty \frac {\cos {\pi x}} {e^{2\pi \sqrt x} - 1} \mathrm{d} x$



I am trying to show that $$\displaystyle \int_0^\infty \frac {\cos {\pi x}} {e^{2\pi \sqrt x} - 1} \mathrm d x = \dfrac {2 - \sqrt 2} {8}$$



I have verified this numerically on Mathematica.




I have tried substituting $u=2\pi\sqrt x$ then using the cosine Maclaurin series and then the $\zeta \left({s}\right) \Gamma \left({s}\right)$ integral formula but this doesn't work because interchanging the sum and the integral isn't valid, and results in a divergent series.



I am guessing it is easy with complex analysis, but I am looking for an elementary way if possible.


Answer



This integral is one of Ramanujan's, from his Collected Papers, where he also shows the connection with the sine case.



Consider $$\int_{0}^{\infty}\frac{\cos(\frac{a\pi x}{b})}{e^{2\pi \sqrt{x}}-1}dx,$$ where $a$ and $b$ are both odd; in this case, $a=b=1$.




Then, $$\displaystyle \frac{1}{4}\sum_{k=1}^{b}(b-2k)\cos\left(\frac{k^{2}\pi a}{b}\right)-\frac{b}{4a}\sqrt{b/a}\sum_{k=1}^{a}(a-2k)\sin\left(\frac{\pi}{4}+\frac{k^{2}\pi b}{a}\right)$$



Letting $a=b=1$ recovers your posted result, as the quick check below shows.
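Indeed, with $a=b=1$ both sums reduce to the single term $k=1$:
$$\frac{1}{4}(1-2)\cos\pi-\frac{1}{4}(1-2)\sin\!\left(\frac{\pi}{4}+\pi\right)=\frac{1}{4}-\frac{1}{4}\cdot\frac{\sqrt{2}}{2}=\frac{2-\sqrt{2}}{8},$$
matching the value in the question.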


calculus - Are some indefinite integrals impossible to compute or just don't exist?





I've just started working with integrals relatively recently, and I am surprised how much harder they are to compute than derivatives. For example, something as seemingly simple as $\int e^{ \cos x} dx $ is impossible, right? I can't use $u$-substitution since there is no $-\sin(x)$ factor multiplying the function, and integration by parts does not seem to work either, correct? So does this mean this integral is impossible to compute?


Answer



The indefinite integral of a continuous function always exists. It might not exist in "closed form", i.e. it might not be possible to write it as a finite expression using "well-known" functions. The concept of "closed form" is
somewhat vague, since there's no definite list of which functions are "well-known". A more precise statement is that there are elementary functions whose indefinite integrals are not elementary. For example, the indefinite integral $\int e^{x^2}\; dx$ is not an elementary function, although it can be expressed in terms of a non-elementary special function as $\frac{\sqrt{\pi}}{2} \text{erfi}(x)$.



Your example $\int e^{\cos(x)}\; dx$ is also non-elementary; this can be proven using the Risch algorithm. This one does not seem to have a closed form even in terms of standard non-elementary special functions.
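A minimal SymPy illustration of both situations (SymPy implements parts of the Risch approach internally):

import sympy as sp

x = sp.symbols('x')

# elementary integrand, non-elementary antiderivative (erfi is a special function)
print(sp.integrate(sp.exp(x**2), x))       # sqrt(pi)*erfi(x)/2

# exp(cos x): no elementary antiderivative is found,
# so the integral is returned unevaluated
print(sp.integrate(sp.exp(sp.cos(x)), x))  # Integral(exp(cos(x)), x)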


real analysis - Prove that the function $f(x)=\frac{\sin(x^3)}{x}$ is uniformly continuous.


Consider the continuous function $f(x)=\frac{\sin(x^3)}{x}$ on the interval $(0,\infty)$; the task is to prove that it is uniformly continuous. The function is clearly continuous. Now $|f(x)-f(y)|=\left|\frac{\sin(x^3)}{x}-\frac{\sin(y^3)}{y}\right|\leq \left|\frac{1}{x}\right|+\left|\frac{1}{y}\right|$, but I am not sure this will work.


I was also trying another way, using the Lagrange mean value theorem to check whether a Lipschitz condition applies, but $f'(x)=3x^2\frac{\cos(x^3)}x-\frac{\sin(x^3)}{x^2}$.


Any hint...


Answer



Hint:


Any bounded, continuous function $f:(0,\infty) \to \mathbb{R}$ where $f(x) \to 0$ as $x \to 0,\infty$ is uniformly continuous. The derivative if it exists does not have to be bounded.



Note that $\sin(x^3)/x = x^2 \sin(x^3)/x^3 \to 0\cdot 1 = 0$ as $x \to 0$, and $|\sin(x^3)/x| \le 1/x \to 0$ as $x \to \infty$.


This is also a great example of a uniformly continuous function with an unbounded derivative.


Saturday, September 14, 2019

real analysis - Recursive sequence with square root

I came across this (cool) question this weekend: find the limit of the following sequence as $n$ approaches infinity, where
$x_1 = 1$ and $x_{n+1} = \sqrt{x_n^2+\left(\frac{1}{2}\right)^n}$.




I had two questions about it. I approximated it using Excel to be about 1.224745. Does anyone know if there is an exact expression for what this converges to? Also, on problems like this, I normally use the fact that if a sequence converges, all of its subsequences converge and have the same limit. Plugging in $L$ for all of the $x_n$'s and solving didn't get me anywhere. Why does this "trick" not work here, and what other sorts of recursive sequences does it fail for? What are some other methods for finding the limit in a situation like this? Thanks in advance.

calculus - Integration by Euler's formula




How do you integrate the following by using Euler's formula, without using integration by parts? $$I=\displaystyle\int \dfrac{3+4\cos {\theta}}{(3\cos {\theta}+4)^2}d\theta$$



I did integrate it by parts, by writing the $3$ in the numerator as $3\sin^2 {\theta}+3\cos^2{\theta}$, and then splitting the numerator.



But can it be solved by using complex numbers and Euler's formula?


Answer



Hint



When you have an expression with a squared denominator, you could think that the solution is of the form $$I=\displaystyle\int \dfrac{3+4\cos {\theta}}{(3\cos {\theta}+4)^2}~d\theta=\frac{a+b\sin \theta+c\cos \theta}{3\cos {\theta}+4}$$ Differentiate the rhs and identify terms. You will get very simple results.
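Carrying out the identification (a sketch): differentiating the ansatz gives the numerator $3b+4b\cos\theta+(3a-4c)\sin\theta$, so matching against $3+4\cos\theta$ forces $b=1$ and $3a-4c=0$; taking $a=c=0$,
$$I=\frac{\sin\theta}{3\cos\theta+4}+C,\qquad \frac{d}{d\theta}\,\frac{\sin\theta}{3\cos\theta+4}=\frac{\cos\theta\,(3\cos\theta+4)+3\sin^2\theta}{(3\cos\theta+4)^2}=\frac{3+4\cos\theta}{(3\cos\theta+4)^2}.$$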



abstract algebra - Why is $n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7} $ never zero?

Here the $n_i$ are integers, and not all of them are zero.



It is natural to conjecture that similar statement holds for even more prime numbers. Namely,



$$ n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7} + n_5 \sqrt{11} +n_6 \sqrt{13} $$ is never zero too.



I am asking because this is used in some numerical algorithms in physics.

calculus - prove that if $\lim \limits_{n \to \infty}F( a_n)=\ell$, then $\lim \limits_{x \to \infty}F( x)=\ell$



Let $a_n$ be a strictly increasing sequence of positive real numbers ($a_n>0$) with $\lim \limits_{n \to \infty} a_n=\infty.$ Suppose that the sequence $(a_{n+1}-a_n)_n$ is bounded. Let $F:\mathbb{R}^+\rightarrow \mathbb R$ be a differentiable function. Suppose further that $\lim \limits_{x \to \infty}F'(x)=0$ and $\lim \limits_{n \to \infty}F( a_n)=\ell$. (Note that $\ell$ is a real number.) Prove that: $$\lim \limits_{x \to \infty}F( x)=\ell$$ I'm stuck, I've tried to use the MVT but it gets me nowhere! Can anyone help me solve this problem?


Answer



Let $B$ be an upper bound for $|a_{n+1}-a_n|$.



Let $\epsilon>0$.

Pick $M\in \Bbb R$ large enough so that $M>a_1$, so that $|F'(x)|<\frac{\epsilon}{2B}$ for all $x>M$ (possible because $F'(x)\to 0$), and so that $|F(a_n)-l|<\frac\epsilon2$ for all $n$ with $a_n>M$ (possible because $F(a_n)\to l$).
Let $x>M+B$. Then there exists $n$ with $a_n\le x<a_{n+1}$, and since $a_{n+1}-a_n\le B$ we get $a_n\ge x-B>M$. By the mean value theorem there is $\xi\in(a_n,x)\subset(M,\infty)$ with $|F(x)-F(a_n)|=|F'(\xi)|\,(x-a_n)\le\frac{\epsilon}{2B}\cdot B=\frac\epsilon2$, hence $|F(x)-l|\le|F(x)-F(a_n)|+|F(a_n)-l|<\epsilon$.
We conclude that for all $x>M+B$ we have $|F(x)-l|<\epsilon$. In summary, $F(x)\to l$.






Some of the given conditions are indeed necessary:




  • Consider $F(x)=x$ and $a_n=2-\frac1n$ and $l=2$. Here $a_n\not\to \infty$ and the conclusion fails.


  • Consider $F(x)=\sin x$ and $a_n=n\pi$ and $l=0$. Here $F'(x)\not\to 0$ and the conclusion does not hold.

  • Consider $F(x)=\sin \sqrt x$ and $a_n=n^2\pi$ and $l=0$. Here $|a_n-a_{n+1}|$ is not bounded and the conclusion fails.



However, the claim remains valid also if some (necessarily only finitely many) of the $a_n$ are $\le 0$, or if $a_n$ fails to be strictly increasing. (A closer look at the proof above reveals that we did not really use these properties, though they may simplify the argumentation.)


Friday, September 13, 2019

$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$ (prove by induction)

I'm having some difficulty proving by induction the following statement.



$$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$$



I have shown that $\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$ holds for $n=1$ (both sides equal $\frac{1}{20}$), but I am getting stuck on the induction step.



As far as I know I have to show that $$\sum_{i=1}^n \frac{1}{(i+3)(i+4)} = \frac{n}{4(n+4)}$$
implies
$$\sum_{i=1}^{n+1} \frac{1}{(i+3)(i+4)} = \frac{n+1}{4(n+5)}$$




To do this I think I should add the number $\frac{1}{(n+4)(n+5)}$ to $\frac{n}{4(n+4)}$ and see if it gives $\frac{n+1}{4(n+5)}$ , if I am not mistaken.



When trying to do that however I get stuck. I have:



$$\frac{n}{4(n+4)} +\frac{1}{(n+4)(n+5)} = \frac{n(n+4)(n+5)}{4(n+4)^2(n+5)} + \frac{4(n+4)}{4(n+4)^2(n+5)} = \frac{n(n+4)(n+5)+4(n+4)}{4(n+4)^2(n+5)} = \frac{n(n+5)+4}{4(n+4)(n+5)}$$



However, beyond this point I don't know how to reach $\frac{n+1}{4(n+5)}$; I always just end up back at the starting point of that calculation.



So I think that either my approach must be wrong or I am missing some trick to simplify $$\frac{n(n+5)+4}{4(n+4)(n+5)}$$




I would be very grateful for any help, as this is a task on a preparation sheet for the next exam and I don't know anyone who has a correct solution.

real analysis - Problem with finding more

I am sorry if this question sounds silly, but I really need help with this.



Question:



It takes 4 reps to complete 32 transactions. Each transaction takes 3 minutes (180 seconds).



How many reps will it take to process these 32 transactions within 0.30 seconds?

calculus - Bernoulli numbers in the general series-expansion formula for sums of powers?

We know the series-expansion for these "sum of powers":



$$\sum_{i=0}^n i = \frac{n(n+1)}{2} = \frac{1}{2}n^2+\frac{1}{2}n$$
$$\sum_{i=0}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{1}{3}n^3+\frac{1}{2}n^2+\frac{1}{6}n$$
$$\sum_{i=0}^n i^3 = \frac{n^2(n+1)^2}{4} = \frac{1}{4}n^4+\frac{1}{2}n^3+\frac{1}{4}n^2$$
$$...$$
But can we come up with a general formula?
$\sum_{i=0}^n i^P =$ "Compact form" = "Series form"
Using Mathematica, I generated sums of higher powers to look for overall patterns in the coefficients and the polynomials. I could not find anything significant in the "compact form" but I found several in the "series form." Take a look:
[Image: series-expansion formulas for sums of powers, $P=1$ to $P=15$; coefficients only]
So what I found was that the "sum of powers" formula has this general equation $S(n)=An^{P+1}+Bn^P+Cn^{P-1}+Dn^{P-2}+...+En $. If we include the fact that some of the coefficients are $0$, then there are $P+1$ terms in this polynomial equation. I was tackling this problem with the assumption that $P$ was a natural number but it turns out that this general equation is valid for any real number $P$. Also, the general series expansion formula is equivalent to the series expansion for the generalized harmonic number (https://en.wikipedia.org/wiki/Harmonic_number).
Anyway, here are the patterns I found in the coefficients:

$$A = \frac{1}{P+1}, B=\frac{1}{2}, C=\frac{P}{12} $$
$D=0$ and every second letter after D is $0$.
$$E=\frac{-P(P-1)(P-2)}{720}, G=\frac{P(P-1)(P-2)(P-3)(P-4)}{30240},$$ $$I=\frac{-P(P-1)(P-2)(P-3)(P-4)(P-5)(P-6)}{1209600},...$$
And so on. It appears that for every coefficient letter, this is the numerator: $$\frac{P!(-1)^{0.5k+1.5}}{(P-k)!}$$ where $k$ is the $k^{th}$ letter after B, with $k$ starting at 1 (and $k$ odd in this formula, since every second letter after D is $0$). But where does the denominator come from? If you examine more coefficient letters, the denominator seems to be growing exponentially (factorially?), but that is all I could find.
Update! I later found the generalized formula here: https://en.wikipedia.org/wiki/Faulhaber%27s_formula. This is what the denominator equals:
$$\frac{(k+1)!}{|B(k+1)|}$$ Amazing how the Bernoulli numbers belong in this formula. That explains the alternating sums and the coefficients of $0$. So here is the complete equation:
$$\sum_{i=0}^n i^P =\frac{1}{P+1}n^{P+1}+\frac{1}{2}n^P+\sum_{k=1}^{P-1} \frac{B(k+1)P!}{(P-k)!(k+1)!}n^{P-k}$$
This can be further condensed, assuming $B(1)=+\frac{1}{2}$:
$$\sum_{i=0}^n i^P =\sum_{k=-1}^{P-1} \frac{B(k+1)P!}{(P-k)!(k+1)!}n^{P-k}$$



Interesting. The equation kind of resembles the binomial expansion formula (https://en.wikipedia.org/wiki/Binomial_theorem). But why do the Bernoulli numbers appear in this formula? I looked at several sites (https://en.wikipedia.org/wiki/Bernoulli_number, http://math.ucr.edu/~res/math153/s12/bernoulli-numbers.pdf, https://ncatlab.org/nlab/show/Bernoulli+number), but the explanations are not intuitive. From what I can see, "they just do", but I do not understand why. Let me know if there are any mistakes in my equations or if there is anything I am missing.
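For what it's worth, the condensed formula (with the $B(1)=+\frac12$ convention) can be verified in exact arithmetic. A minimal Python sketch that computes the Bernoulli numbers from the standard recurrence rather than a library, to pin down the sign convention:

from fractions import Fraction
from math import comb, factorial

def bernoulli_plus(m):
    # Bernoulli numbers with the B(1) = +1/2 convention, via the
    # recurrence sum_{j=0}^{n} C(n+1, j) B(j) = n + 1
    B = [Fraction(1)] + [Fraction(0)] * m
    for n in range(1, m + 1):
        B[n] = (Fraction(n + 1) - sum(comb(n + 1, j) * B[j] for j in range(n))) / (n + 1)
    return B

def faulhaber(P, n):
    # sum_{i=0}^{n} i^P via the condensed formula above
    B = bernoulli_plus(P)
    return sum(B[k + 1] * factorial(P) * n**(P - k)
               / (factorial(P - k) * factorial(k + 1))
               for k in range(-1, P))

for P in range(1, 7):
    assert all(faulhaber(P, n) == sum(i**P for i in range(n + 1)) for n in range(10))
print("formula verified for P = 1..6")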

analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...