Monday, July 31, 2017

proof verification - Elementary demonstration; $p$ prime, $1 \lt a \lt p$, $\;1 \lt b \lt p \quad$ Then $p \nmid ab$

Update: If this question is of interest, you can also click here.







Update: Using the lemma and logic Bill Dubuque used in proving Euclid's lemma, we can supply an elementary proof.



To get a contradiction, assume that $p \mid a b$.



Let $S = \{n \in \Bbb N \, | \, p \mid nb \}$. Then $p \in S$ and $a \in S$. Moreover, $S$ is closed under subtraction.



Let $d = \min(S)$. By the lemma, $d \mid p$, so $d = 1$ or $d = p$.



If $d = 1$, since $d \in S$, it must follow that $p \mid (1 \times b)$, which is absurd since $b \lt p$.




By the lemma, $d \mid a$, so if $d = p$ then $p \mid a$, which is absurd since $a \lt p$.






I've been motivated (see this) to prove the following result using only elementary techniques.



Let $p$ be a prime greater than $2$.



Let $1 \lt a \lt p$




Let $1 \lt b \lt p$



Then



$$\tag 1 p\nmid a b$$



I think that this is as simple as first showing that



$$ \text{For every integer } n \ge 1 \text{ such that } p\nmid n, \; \; p\nmid na$$




and ironing out some details.




Using only the 'first page' of elementary theory of the natural numbers/integers (for example Euclidean division, the construction of $\Bbb Z$, the existence of prime factorizations and that modular arithmetic is well-defined), can this approach work for proving $\text{(1)}$?




Besides answering yes in the comments, a proof would be appreciated (this elementary approach can be exhausting).

soft question - Paradox: increasing sequence that goes to $0$?


It is $10$ o'clock, and I have a box.


Inside the box is a ball marked $1$.



At $10$:$30$, I will remove the ball marked $1$, and add two balls, labeled $2$ and $3$.
At $10$:$45$, I will remove the balls labeled $2$ and $3$, and add $4$ balls, marked $4$, $5$, $6$, and $7$.
$7.5$ minutes before $11$, I will remove the balls labeled $4$, $5$, and $6$, and add $8$ balls, labeled $8$, $9$, $10$, $11$, $12$, $13$, $14$, and $15$.
This pattern continues.


Each time I reach the halfway point between my previous action and $11$ o'clock, I add some balls, and remove some other balls. Each time I remove one more ball than I removed last time, but add twice as many balls as I added last time.


The result is that as it gets closer and closer to $11$, the number of balls in the box continues to increase. Yet every ball that I put in was eventually removed.


So just how many balls will be in the box when the clock strikes $11$?
$0$, or infinitely many?
What's going on here?


Answer



[Edit.] I am editing my answer to try to give some more insight, prompted by comments on my original response. I hope to give a deeper insight into what's going on here.


The short answer is that what is important is not the size of the set of balls at any particular time, but rather how the set of balls changes; and ultimately what the limit of those sets is. The key is to determine what that limit is, and then determine how many balls are in that limiting set. The answer is that the limit is the empty set, which has size 0. The rest of this answer is devoted to describing this in some detail.


Part of what I have added to my answer is to point out that although there are multiple ways of measuring convergence — in terms of various norms on characteristic functions — only one of these actually defines the limit of the sequence, and in this case the limit is well-defined.


What matters is not the number of balls, but the set of balls


In this problem, we have more than just a number of balls which changes with time. What's different is that each of these balls has a unique identity.



This might not seem like it should matter, but it means that the state of the "system" is not a quantity of balls but a set of balls. That set has a certain size, but the size is a derived feature of the system; it follows from which particular set of balls is present. So it is important to determine what the limit of the sequence of sets is.


Description of the problem in terms of sets


Let's consider how the set of balls in the box changes with time in the game you present. At step $n$, you add $2^n$ balls, and remove the $n$ lowest-numbered balls. The "state of the system" is given by the following sets:


$S(0) = \{1\}$
$S(1) = \{2,3\}$
$S(2) = \{4,5,6,7\}$
$S(3) = \{7,8,9,10,11,12,13,14,15\}$
$S(4) = \{11,12,\dots,31\}$
etc.


Note that after step 1, the first ball is removed, never to be added again; so it is not an element of the final set. Similarly, at step 2, the second and third balls are removed, never to be added again; so they aren't elements of the final set. And so on. So... none of the balls are in the final set. So then it must be empty! It doesn't matter that the number of balls in the sequence is increasing; what matters is that the number of balls which will never again be in the box is also increasing, and in the limit includes all of the balls.


We can make this more striking by considering, for each step $n$, the set of balls $I(n)$ which are in the set $S(t)$ for all $t \geq n$: that is, $I(n) = S(n) \cap S(n+1) \cap S(n+2) \cap \dots $. Because each ball is eventually removed, never to be added again, this means that


$I(0) = I(1) = I(2) = \dots = \varnothing$


So while the original description makes it look like, moment-to-moment, the final state of the box should be to hold infinitely many balls, a more "forward looking" approach makes it clear that the final state of the box is to be empty.
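
To make "every ball is eventually removed" concrete, here is a minimal sketch (my own illustration, not part of the original answer). Since step $n$ removes the $n$ lowest-numbered balls, ball $m$ leaves the box at the first step $n$ with $n(n+1)/2 \geq m$:

#include <cstdint>
#include <iostream>

// Step at which ball m leaves the box: step n removes the n lowest-numbered
// balls still present, so after step n exactly 1 + 2 + ... + n = n(n+1)/2
// balls are gone for good.
std::int64_t removal_step(std::int64_t m) {
    std::int64_t n = 0, removed = 0;
    while (removed < m) { ++n; removed += n; }
    return n;
}

int main() {
    for (std::int64_t m : {1, 2, 3, 7, 100, 1000000}) {
        std::cout << "ball " << m << " is removed at step "
                  << removal_step(m) << '\n';
    }
}

Every ball number you try produces a finite removal step, which is exactly the point: no ball survives to $11$ o'clock.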


An analysis in terms of characteristic functions


We can contrast the "intuitive" answer of infinitely many balls in the box, and the more precise answer that there ultimately are no balls in the box, using characteristic functions: that is, we replace each set $S(n)$ by a function $f_n : \mathbb{N} \to \{ 0, 1\}$ which is $1$ for those integers belonging to $S(n)$, and $0$ otherwise.



Consider the various $p$-norms on these functions. The cardinality of each set $S(n)$ is precisely equal to the $1$-norm of the function $f_n$ , which grows without bound. The fact that the 1-norm grows without bound — and in particular, the distance between the function $f_n$ and the zero function $\mathbf{0}$ in the $1$-norm grows without bound — is essentially the source of most people's intuition about this problem, and exactly the reason why they find it counterintuitive that the final set should be empty.


But just as the size of a set is a derived quantity, the norm of a function — or of a sequence of functions — is also a derived quantity; and the norm of the limit of a sequence of functions is not necessarily the limit of the norms. In fact, the functions $f_n$ don't converge to anything in any of the $p$-norms; the sequence simply diverges to nothing in particular.


But there is at least one notion of convergence which applies to the functions $f_n$, and that is point-wise convergence — the form of convergence which is the broadest, in the sense that it applies to the most cases (and with which all other notions of convergence must agree, if they show that a sequence of functions converges at all). We may simply show that for each $x$, we have $f_n (x) = 0$ for sufficiently large $n$. It then follows that the sequence $f_n$ converges to $\mathbf{0}$.


The fact that the sequence $f_n$ doesn't converge to $\mathbf{0}$ under any of the p-norms doesn't matter; ultimately what matters is that the sequence converges point-wise, because what we're interested in is the cardinality of the limit itself, which is defined in terms of point-wise convergence. At worst, from a certain aesthetic point of view, one might say that it doesn't converge particularly gracefully (informally speaking) to $\mathbf{0}$; but it does indeed converge, and that is all that matters.


So: using characteristic functions, which are ultimately equivalent to the sets described in the first place, one can show that the sequence of sets does converge, and that what they converge to is the empty set. But take comfort: your intuition that they should not converge reflects a certain awareness of the concept of $p$-norms. :-)


Sunday, July 30, 2017

real analysis - bijective measurable map existence



Do there exist bijective measurable maps between $\mathbb{R}$ and $\mathbb{R}^n$?



If so, could you give me an example of that?



Thank you.


Answer




A Polish space is a topological space that is homeomorphic to a complete separable metric space, for example $\Bbb R^n$ for any $n\in \Bbb N$. For the proof of the following fact, see e.g. here.




Any uncountable Polish space is Borel isomorphic (there exists a bimeasurable bijection) to the space of real numbers $\Bbb R$ with standard topology.



the eigenvectors of two different square matrices that have the same eigenvalue

I have two square matrices $Y$ and $Z$ of size $n$, and the matrix $M = Z^{-1}YZ$ has the same eigenvalues as the matrix $Y$. I have been able to prove that the eigenvalues are the same, and thus that the characteristic polynomial of $Z^{-1}YZ$ equals that of $Y$, since $|Y| = |Z^{-1}YZ|$: the determinant is multiplicative and the determinant of an inverse matrix is $1/|\text{Matrix}|$. However, the eigenvectors will be different, and I am stuck here.



To put it more clearly:




What are the eigenvectors of the matrices $Y$ and $Z^{-1}YZ$, given that both are square matrices of size $n$ and the eigenvalues of $Y$ are the same as those of $Z^{-1}YZ$?

Dividing versus multiplying by inverse in modular arithmetic



I was working on a problem that resulted in the calculation:

$20 \equiv 10x \pmod{11}$.



I got the answer $x \equiv 2 \pmod{11}$ with the thought process: Since $10$ is a factor of $20$ I can rewrite $20 \pmod{11}$ as $10x \pmod{11}$ with $x=2$. But isn't this basically the same as using division, which we aren't supposed to do in modular spaces?



If I solve by multiplying $20$ by the multiplicative inverse of $10 \pmod{11}$ which is $10$, I get $200 \pmod{11}$ which simplifies to $2 \pmod{11}$ anyway. Did these numbers just happen to be the same or is my original method valid?


Answer



Your original method works only because $10$ has an inverse modulo $11$. You wrote $20 \equiv 10x \pmod{11}$ as
$$ 10(2) \equiv 10x \pmod{11}. $$
Now, using the fact that $10^{-1}$ exists, we can multiply by it to conclude $2 \equiv x \pmod{11}$. If $10$ wasn't invertible, we wouldn't be able to conclude this. For instance, $2(2) \equiv 2(3) \pmod{2}$, but $2\not\equiv 3 \pmod{2}$.
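
As a quick sanity check of both methods (my own sketch, not part of the original answer), one can find the inverse by brute force and redo the computation:

#include <iostream>

// Brute-force modular inverse: the unique y in 1..m-1 with x*y = 1 (mod m),
// or 0 if none exists (i.e. gcd(x, m) > 1).
int inverse_mod(int x, int m) {
    for (int y = 1; y < m; ++y)
        if (x % m * y % m == 1) return y;
    return 0;
}

int main() {
    int inv = inverse_mod(10, 11);                // prints 10: 10 is its own inverse mod 11
    std::cout << "10^{-1} mod 11 = " << inv << '\n';
    std::cout << "x = " << 20 * inv % 11 << '\n'; // prints 2, solving 10x = 20 (mod 11)
}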


Saturday, July 29, 2017

calculus - Find $\lim\limits_{n \to \infty} \frac{x_n}{n}$ when $\lim\limits_{n \to \infty} x_{n+k}-x_{n}$ exists





Let $(x_n)_{n \geq 1}$ be a sequence with real numbers and $k$ a fixed natural number such that $$\lim_{n \to \infty}(x_{n+k}-x_n)=l$$



Find
$$\lim_{n \to \infty} \frac{x_n}{n}$$




I have a strong guess that the limit is $\frac{l}{k}$ and I tried to prove it using the sequence $y_n=x_{n+1}-x_n$. We know that $\lim_{n \to \infty}(y_n+y_{n+1}+\dots+y_{n+k-1})=l$ and if we found $\lim_{n \to \infty}y_n$ we would have from the Cesaro Stolz lemma that $$\lim_{n \to \infty}\frac{x_n}{n}=\lim_{n \to \infty}y_n$$


Answer



For fixed $m \in \{ 1, \ldots, k \}$ the sequence $(y_n)$
defined by $y_n = x_{m+kn}$ satisfies

$$
y_{n+1} - y_n = x_{(m+kn) + k} - x_{m+kn} \to l \, ,
$$
so that Cesaro Stolz can be applied to $(y_n)$. It follows that $\frac{y_n}{n} \to l$ and
$$
\frac{x_{m+kn}}{m+kn} = \frac{y_n}{n} \cdot \frac{n}{m+kn} \ \to \frac{l}{k} \text{ for } n \to \infty \, .
$$
This holds for each $m \in \{ 1, \ldots, k \}$, and therefore
$$
\lim_{n \to \infty} \frac{x_n}{n} = \frac lk \, .
$$


algebra precalculus - If $0^\circ\leqslant x

I tried solving the question, but I kept getting $5$ solutions. My book only has $4$ choices: $0$, $1$, $2$, or $3$ solutions. My solutions were $0^\circ$, $90^\circ$, $150^\circ$, $180^\circ$, and $270^\circ$. What did I do wrong? Why is $2$ solutions the correct answer?

Friday, July 28, 2017

number theory - Least quadratic non residue algorithm




I am trying to implement the Tonelli-Shanks algorithm, and at one of the steps I have to find the least quadratic non-residue. I've searched the web for a while for some kind of algorithm, but so far I've seen only papers with lots of math behind them.
Are there any algorithms for finding least quadratic non-residues out there? Or could someone point me to a paper with good explanations and a couple of examples so I can formalize the algorithm myself?


Answer



I would not worry about it. The least quadratic non-residue modulo a fixed prime $q$ is also a prime, so just check the primes $2,3,5,7,11,13,17, \ldots$ in order until the Legendre symbol says you have a non-residue. The important thing is that the first non-residue is really, really small compared to the prime itself. See OEIS.
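
A direct implementation of this search might look like the following sketch (mine, not from the answer). It tests candidates with Euler's criterion, $n^{(q-1)/2} \equiv -1 \pmod q$ for a non-residue; scanning all $n \geq 2$ finds the same answer as scanning primes only, since every composite below the least prime non-residue is a residue:

#include <cstdint>
#include <iostream>

// base^exp mod m by binary exponentiation (assumes m < 2^32 so that the
// 64-bit products below cannot overflow).
std::uint64_t pow_mod(std::uint64_t base, std::uint64_t exp, std::uint64_t m) {
    std::uint64_t r = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) r = r * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return r;
}

// Least quadratic non-residue modulo an odd prime q: n is a non-residue
// exactly when n^((q-1)/2) == q - 1 (mod q) (Euler's criterion).
std::uint64_t least_nonresidue(std::uint64_t q) {
    for (std::uint64_t n = 2; n < q; ++n)
        if (pow_mod(n, (q - 1) / 2, q) == q - 1) return n;
    return 0; // not reached for an odd prime q > 2
}

int main() {
    std::cout << least_nonresidue(1000003) << '\n';
}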



Oh, why is the first nonresidue prime? Call the prime $q$ and the first nonresidue $N.$ Since $1$ is always a residue, we have $N > 1.$ Since exactly half the numbers from $1$ to $q-1$ are residues, half nonresidues, we know $N < q.$ If $N$ were also composite, we would have $N = ab$ with $1 < a,b < N.$ Since $N$ is the smallest nonresidue, this means $a,b$ would be residues. But the product of quadratic residues is another quadratic residue, which would mean $N=ab$ would need to be a quadratic residue. This is a contradiction, so $N$ is prime.


sequences and series - Peculiar Sum regarding the Reciprocal Binomial Coefficients



Whilst playing around on Wolfram Alpha, I typed in the sum

$$\sum_{x=0}^\infty \frac{1}{\binom{2x}{x}}=\frac{2}{27}(18+\pi\sqrt 3)$$
I'm not sure how to derive the answer. My first instinct was to expand the binomial coefficient to get
$$\sum_{x=0}^\infty \frac{x!^2}{(2x)!}$$
and then to try using a Taylor Series to get the answer. I thought that if I could find a function $f(n)$ with
$$f(n)=\sum_{x=0}^\infty \frac{x!^2n^x}{(2x)!}$$
Then my sum would be equal to $f(1)$. How do I find such a function?



EDIT: I continued on this path and realized that I can use this to set up a recurrence relation for $f^{(x)}(0)$:



$$f^{(0)}(0)=1$$

$$f^{(x)}(0)=\frac{x^2}{2x(2x-1)}f^{(x-1)}(0)$$



However, I'm not sure how this helps me find $f(1)$...



Am I on the right track? Can somebody help me finish what I started, or point me towards a better method of calculating this sum?



Thanks!


Answer



Hint. One may observe that
$$
\frac{1}{\binom{2n}{n}}=n\int_0^1 t^{n-1}(1-t)^n\,dt,\qquad n\ge1,
$$ giving
$$
\sum_{n=0}^\infty\frac{1}{\binom{2n}{n}}=1+\int_0^1 \sum_{n=1}^\infty nt^{n-1}(1-t)^n\:dt=1+\int_0^1\frac{1-t}{\left(t^2-t+1\right)^2}dt=\frac{2}{27} \left(18+\sqrt{3} \pi \right)
$$ the latter integral is classically evaluated by partial fraction decomposition.
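
As a numerical sanity check (my own sketch, not part of the answer), the partial sums converge quickly to the closed form:

#include <cmath>
#include <iostream>

int main() {
    // Partial sums of sum_{n>=0} 1/C(2n,n); consecutive terms satisfy
    // 1/C(2n,n) = [n/(4n-2)] * 1/C(2n-2,n-1).
    double term = 1.0, sum = 1.0;  // the n = 0 term
    for (int n = 1; n <= 60; ++n) {
        term *= n / (4.0 * n - 2.0);
        sum += term;
    }
    const double pi = std::acos(-1.0);
    std::cout << sum << " vs " << 2.0 / 27.0 * (18.0 + std::sqrt(3.0) * pi) << '\n';
    // both print approximately 1.7364
}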


integration - Show that $\int\limits_0^{\infty}\frac {dt}{t}e^{\cos t}\sin\sin t=\frac {\pi}2(e-1)$



How do you show that




$$\int_{0}^{\infty}\frac{\mathrm{d}t}{t}\,\mathrm{e}^{\cos\left(t\right)}\,\sin\left(\sin\left(t\right)\right) = \frac{\pi}{2}\,\left(\,\mathrm{e} - 1\right)$$




I managed to get the left-hand side to equal the imaginary part of $$I=\int\limits_0^{\infty}\frac{dt}{t}e^{e^{it}}$$ But I’m not very sure what to do next. I’m thinking of a substitution $t\mapsto e^{it}$, but I’m not very sure how to evaluate the limit as $t\to\infty$. I also tried contour integration, but I’m not exactly sure what contour to draw.


Answer



$$e^{\cos t}\sin\sin t=\text{Im}\exp\left(e^{it}\right)=\text{Im}\sum_{n\geq 0}\frac{e^{nit}}{n!}=\sum_{n\geq 1}\frac{\sin(nt)}{n!} $$
and since for any $a>0$ we have $\int_{0}^{+\infty}\frac{\sin(at)}{t}\,dt=\frac{\pi}{2}$ it follows that
$$\int_{0}^{+\infty}e^{\cos t}\sin\sin t\frac{dt}{t} = \frac{\pi}{2}\sum_{n\geq 1}\frac{1}{n!}=\frac{\pi}{2}(e-1), $$
pretty simple.







I have a counter-proposal:
$$\begin{eqnarray*} \int_{0}^{+\infty}\left(e^{\cos t}\sin\sin t\right)^2\frac{dt}{t^2} &=&\frac{\pi}{2}\sum_{m,n\geq 1}\frac{\min(m,n)}{m!n!}\\&=&-\frac{\pi}{2}I_1(2)+\pi e(e-1)-2\pi e\int_{0}^{1}I_1(2x)e^{-x^2}\,dx. \end{eqnarray*}$$


Thursday, July 27, 2017

calculus - How does $a^2 + b^2 = c^2$ work with ‘steps’?




We all know that $a^2+b^2=c^2$ in a right-angled triangle, and therefore, that $c \lt a + b$, so that walking along the red line would be shorter than using the two black lines to get from top left to bottom right in the following graphic:






Now, let's assume that the direct way using the red line is blocked, but instead, we can use the green way in the following picture:





Obviously, the green way isn't any shorter than the black one, it's just $a/2+b/2+a/2+b/2 = a+b$. Now, we can divide the green path again, just like the black path, and get to the purple path. Dividing this one in two halves again, we get the yellow path:






Now obviously, the yellow path is still as long as the black path from the beginning, it's just $8 \cdot a/8 + 8 \cdot b/8 = a+b$. But if we do this segmentation again and again, we approximate the red line - without making the way any shorter. Why is this so?


Answer



Essentially, it is because the distance of the stepped curve from the line does not get small compared to the length of the steps.



An example where the limit is properly found is dividing a circle into $n$ equal parts and computing the sum of the line segments connecting the endpoints of the arcs. This does converge to the length of the circle because the height of each arc gets arbitrarily small compared to the length of each arc as $n$ gets large.


logarithms - If ln is given particular times can we find least value for which it is defined?




I was just doing some time-pass with my calculator, but then I observed something. I don't know whether it is sensible to ask, so here's my question. $\ln \ln (1)$ is not defined, but for all values greater than $1$ it is defined. So then I tried to find the values for which $\ln \ln \ln(x)$ is defined, and I got to know that it becomes defined from about $2.72$. If $\ln$ is taken $4$ times, it starts giving values from about $15.2$. So my question is: if $\ln$ is applied a particular number of times, how can I come to know the infimum of the values for which it is defined?


Answer



$\ln (x)$ is defined for $x>0$



$\ln (\color{blue}{\ln (x)})$ will be defined for $\color{blue}{\ln (x)}>0 \implies x >1$



$\ln (\color{blue}{\ln ( \ln (x))})$ is defined for $\color{blue}{\ln ( \ln (x))}>0 \implies \color{blue}{ \ln (x)}>1 \implies x>e$



You see the pattern now?




$$0, e^0, e^1, e^e, e^{e^e} \ldots$$
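
The thresholds can be generated mechanically. A tiny sketch (mine, not part of the answer) that reproduces the asker's observed values $2.72 \approx e$ and $15.2 \approx e^e$:

#include <cmath>
#include <iostream>

int main() {
    // Infimum of the domain of the k-fold iterated logarithm:
    // threshold(1) = 0, threshold(k+1) = exp(threshold(k)).
    double threshold = 0.0;
    for (int k = 1; k <= 5; ++k) {
        std::cout << "ln applied " << k << " times: defined for x > "
                  << threshold << '\n';
        threshold = std::exp(threshold);
    }
}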


Circular logic in evaluation of elementary limits




Our calculus book states elementary limits like $\lim_{x\to0} \frac{\sin x}{x}=1$, or $\lim_{x\to0} \frac{\ln (1+x)}{x}=1$, $\lim_{x\to0} \frac{e^x-1}{x}=1$ without proof.



At the end of the chapter of limits, it shows that these limits can be evaluated by using series expansion (which is not in our high school calculus course).



However, series expansion of a function can only be evaluated by repeatedly differentiating it.



And, to calculate derivative of $\sin x$, one must use the $\lim_{x\to0} \frac{\sin x}{x}=1$.



So this seems to end up in circular logic. The same issue arises for the other such limits.




I found that $\lim_{x\to0} \frac{e^x-1}{x}=1$ can be proved using the binomial theorem.



How to evaluate other elementary limits without series expansion or L'Hôpital Rule?



This answer does not explain how that limit can be evaluated.


Answer



Hint to prove it yourself:



(figure: a circle diagram comparing the triangle $ABC$, the circular sector $ABC$, and the larger triangle $ABD$)




Let $A_1$ be the area of triangle $ABC$, $A_2$ be the area of the circular sector $ABC$, and $A_3$ be the area of triangle $ABD$. Then we have:



$$A_1 \lt A_2 \lt A_3$$

Try to find expressions for $A_1, A_2,$ and $A_3$, substitute them into the inequality, and finally use the squeeze theorem.


Solving the Functional Equation $ f \big( x + y f ( x ) \big) = f ( x ) f ( y ) $



I want to find all functions $ f : \mathbb R \to \mathbb R $ satisfying the functional equation $ f \big( x + y f ( x ) \big) = f ( x ) f ( y ) $, for all real numbers $ x $ and $ y $.



An interesting fact about the functional equation is a symmetry that is not at first sight visible. If one substitutes $ x + z f ( x ) $ for $ x $ in the functional equation, the equation $ f \big( x + z f ( x ) + y f ( x ) f ( z ) \big) = f ( x ) f ( y ) f ( z ) $ follows, which is also the result of substituting $ z + y f ( z ) $ for $ y $ in the original equation.




This is my attempt:



Clearly, for every constant real number $ a $, $ f ( x ) = a x + 1 $ is a solution. The constant zero function is another solution. I conjecture that these are the only solutions.



It's easy to see that $ f \big( x + y f ( x ) \big) = f \big( y + x f ( y ) \big) $ and thus if $ f $ is injective, one must have $ x + y f ( x ) = y + x f ( y ) $, which letting $ y = 1 $ yields $ f ( x ) = \big( f ( 1 ) - 1 \big) x + 1 $.



Letting $ x = y = 0 $ in the functional equation, one gets $ f ( 0 ) = 0 $ or $ f ( 0 ) = 1 $. If $ f ( 0 ) = 0 $ then letting $ y = 0 $ in the functional equation one can find out that $ f $ is the constant zero function. So from now on it's assumed that $ f ( 0 ) = 1 $.



If $ f $ is differentiable, differentiating the functional equation with respect to $ y $, one gets $ f ( x ) f ' \big( x + y f ( x ) \big) = f ( x ) f ' ( y ) $. Letting $ y = 0 $ in the last equation, one gets $ f ( x ) \big( f ' ( x ) - f ' ( 0 ) \big) = 0 $. Now since $ f $ is differentiable at $ 0 $, it is also continuous at $ 0 $ and thus there is a positive real number $ \delta $ such that if $ - \delta < x < \delta $ then $ f ( x ) > 0 $. Hence if $ - \delta < x < \delta $ then $ f ' ( x ) = f ' ( 0 ) $ which shows that on this interval $ f ( x ) = x f ' ( 0 ) + 1 $. For any $ x $ such that $ f ( x ) \ne 0 $, one can substitute $ \frac y { f ( x ) } $ for $ y $ in the functional equation and get $ f ( x + y ) = f ( x ) f \Big( \frac y { f ( x ) } \Big) $. Therefore if $ - \delta f ( x ) < y < \delta f ( x ) $ then $ f ( x + y ) = f ( x ) + y f ' ( 0 ) $.




I couldn't go further and I also couldn't avoid the differentiability condition.


Answer



Here is another solution that classifies all continuous solutions. During the proof we exclude the two trivial solutions $f \equiv 0$ and $f \equiv 1$ to avoid unnecessary case-division.



(Disclaimer: I heavily borrowed Tob Ernack's injectivity argument but tried a simpler proof. Although I have an independent proof, it is neither elegant nor shorter.)




Step 1. $Z: = f^{-1}(\{0\})$ is a non-empty closed interval.





$Z$ is obviously a closed set. Next, whenever $f(x) \neq 1$ we have



$$ f\left(\tfrac{x}{1 - f(x)}\right)
= f\left(x + \tfrac{x}{1 - f(x)}f(x)\right)
= f(x)f\left(\tfrac{x}{1 - f(x)}\right). $$



and hence $\frac{x}{1 - f(x)} \in Z$. Then the assumption $f \not\equiv 1$ shows that $Z \neq \varnothing$. Finally, if $a, b \in Z$, then



$$f(x+af(x)) = f(x)f(a) = 0 \qquad \forall x \in \Bbb{R}$$




and the function $g(x) = x+af(x)$ takes values only in $Z$. Since $g(a) = a$ and $g(b) = b$, it follows from the intermediate value theorem that $[a, b] \subseteq Z$.




Step 2. $f(x) = f(y) \neq 0$ implies $x = y$.




Let $p = (y-x)/f(x)$. From



$$ f(x) = f(y) = f(x+pf(x)) = f(x)f(p),$$




we have $f(p) = 1$. Now assume that $p \neq 0$. Then



$$f(t+p) = f(p+tf(p)) = f(t)f(p) = f(t) \qquad \forall t \in \Bbb{R}$$



and hence $f$ is periodic. By Step 1, this implies that $Z$ is an interval which is unbounded in both directions, so $Z = \Bbb{R}$, contradicting the assumption $f \not\equiv 0$.




Step 3. There exists $a \in \Bbb{R}$ such that $f(x) = 1 + ax$ for $x \in \Bbb{R}\setminus Z$.





Use the assumption $f \not\equiv 0$ to choose $c \neq 0$ such that $f(c) \neq 0$ and let $a = \frac{f(c) - 1}{c}$. Then for any $x \notin Z$,



$$ f(x + cf(x)) = f(x)f(c) = f(c+xf(c)) \neq 0 $$



and so $x + cf(x) = c + xf(c)$. Solving this equation gives $f(x) = 1 + ax$.




Step 4. Except for the trivial solution $f \equiv 0$, only 2 types of solutions are possible:





  • Case 1. $f(x) = 1+ax$ for some $a \in \Bbb{R}$

  • Case 2. $f(x) = \max\{1+ax, 0\} $ for some $a \in \Bbb{R}$




Assume that $f \not\equiv 0$. Then it follows from $f(x) = f(x+0\cdot f(x)) = f(x)f(0)$ that $f(0) = 1$. This together with Step 3 shows that only Case 1 and 2 are possible solutions to the functional equation. We check that both cases are indeed solutions.



Since Case 1 is easy to verify, we focus on Case 2. Also, in Case 2, notice that $x \notin Z$ if and only if $1+ax > 0$. Then





  • If $x \in Z$, then $f(x+yf(x)) = f(x) = 0 = f(x)f(y)$.


  • If $x \notin Z$, then $x+yf(x) = x+y(1+ax) = x+y+axy$. So



    \begin{align*}
    x+yf(x) \notin Z
    &\quad \Leftrightarrow \quad 1+a(x+y+axy) = (1+ax)(1+ay) > 0 \\
    &\quad \Leftrightarrow \quad 1+ay > 0.
    \end{align*}




    From this it is easy to check that $f(x+yf(x)) = f(x)f(y)$ holds for all $y$.




Therefore $f(x) = \max\{0,1+ax\}$ is also a solution of the functional equation.


integration - calculus essay assistance


I am writing an essay for my calculus class and one of the requirements to meet within the essay is to demonstrate an understanding of integration by explaining a metaphor that uses integration.


This is the passage that I think meets that requirement but I am not sure if I should expand more on integration just to be sure:




To a person familiar with integration attempting to relate the metaphor back to math, this statement likely brings to mind images of their first calculus instructor drawing rectangles below a function when showing the class how to calculate the area under a curve. The reason Tolstoy’s statement conjures this reminiscent math memory is because the two concepts being discussed are abstractly identical. Just as the wills of man that direct the compass of history are innumerable, so are the number of rectangles that are required to be summed to get an exact measurement of area under a curve. Despite the impossibility of calculating an infinite amount of something we must still calculate some amount of it if we wish to obtain the valuable information an approximation can provide.



For reference, here is the metaphor I am writing about:



"The movement of humanity, arising as it does from innumerable arbitrary human wills, is continuous. To understand the laws of this continuous movement is the aim of history. . . . Only by taking infinitesimally small units for observation (the differential of history, that is, the individual tendencies of men) and attaining to the art of integrating them (that is, finding the sum of these infinitesimals) can we hope to arrive at the laws of history"



Could anyone provide some feedback? thanks!


Answer



In my opinion, if this is a serious assignment, then it would be a very difficult one for most students. In order to write something really solid, one needs to (i) have strong general essay-writing skills (this is an unusually difficult topic), (ii) have a very solid theoretical grasp of calculus in order to be able to compare metaphors with theorems and (iii) be able to merge the humanities stuff in (i) with the math stuff in (ii) in a coherent and plausible way. It's a lot to ask!


Since you have found Tolstoy's integration metaphor, I should probably mention that Stephen T. Ahearn wrote a 2005 article in the American Mathematical Monthly on this topic. (His article is freely available here.) Ahearn's article is quite thorough: I for instance would have a tough time trying to write a piece on this topic going beyond what he has already written. (And the fact that I've never read War and Peace is not exactly helping either...) If the assignment is "sufficiently serious", I would recommend that you pick some other integration metaphor to explain. (Exactly how one comes across "integration metaphors" is already not so clear to me, but the internet can do many magical things, probably including this...)



I should say though that in the United States at least it would be a very unusual calculus class that would require a student to complete such an assignment and be really serious about it, as above. (A part of me would really like to assign such an essay in my calculus class, but I think the results would be...disappointing.) If as you say the goal is to demonstrate knowledge of integration, then you should indeed concentrate on that. As ever, it couldn't hurt to talk to your instructor and get more specific information about this assignment: e.g. what is the suggested length of the essay? What sort of places does s/he have in mind for finding such a metaphor? Could you create your own metaphor? And so on.


In summary, if you put this question to us (at present the majority of the "answerers" are advanced mathematics students or math researchers) I fear you're setting yourself up to get picked on. It's probably best to clarify exactly what you need to do: it may not be so much, and it might just be worth taking a crack at it (as you've done) and seeing if that will be sufficient for the instructor.


P.S.: I have read some of Tolstoy's other works (especially Anna Karenina) and nothing math-related springs to mind. However, Dostoyevsky's Notes from Underground has some fun mathy material, although maybe not integration per se. I could imagine writing an ironic piece on whether integration (specifically, explicitly finding anti-derivatives) is as hard-scientific and deterministic as Dostoyevsky's view of mathematics is in this book, or whether the "art of finding antiderivatives" is messy and uncertain like the human condition. But, you know, this could be a failing essay!


Wednesday, July 26, 2017

linear algebra - Characteristic polynomial of a matrix 7x7?


Avoiding too many steps, what is the characteristic polynomial of this $7\times 7$ matrix? And why?


\begin{pmatrix} 5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\\5&5&5&5&5&5&5\end{pmatrix}


Answer



As it was stated in the commentaries, the rank of this matrix is $1$; so it will have $6$ null eigenvalues, which means the characteristic polynomial will be in the form:



$p(\lambda)=\alpha\,\lambda^6(\lambda-\beta) = \gamma_6\,\lambda^6 +\gamma_7\,\lambda^7$


Using Cayley-Hamilton:


$p(A)=\gamma_6\,A^6+\gamma_7\,A^7 =0$


Any power of this matrix will have the same format, a positive value for all elements.


$B=\begin{bmatrix}1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\\1&1&1&1&1&1&1\end{bmatrix}$


$A = 5\,B$


$A^2 = 5^2\,7\,B$


$...$


$A^6 = 5^6\,7^5\,B$


$A^7=5^7\,7^6\,B$



$p(A) = 5^6\,7^5\,(\gamma_6+35\,\gamma_7)\,B=0\Rightarrow\gamma_6=-35\gamma_7$


So we have: $\alpha=\gamma_7$ and $\beta = 35$


$p(\lambda)=\alpha\,\lambda^6(\lambda-35)$
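
One way to see the nonzero eigenvalue directly: every row of $A$ sums to $5 \cdot 7 = 35$, so the all-ones vector is an eigenvector with eigenvalue $35$. A minimal numerical check (my own sketch, not part of the answer):

#include <iostream>

int main() {
    // Multiply the 7x7 all-fives matrix by the all-ones vector:
    // each entry of A*v is a row sum, 5 * 7 = 35, so A*v = 35*v.
    const int n = 7;
    double Av[n];
    for (int i = 0; i < n; ++i) {
        Av[i] = 0.0;
        for (int j = 0; j < n; ++j) Av[i] += 5.0 * 1.0;
        std::cout << Av[i] << ' ';  // prints 35 seven times
    }
    std::cout << '\n';
}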


linear algebra - connection between determinants



Suppose $A$ is an $n\times n$ matrix with real entries and $R$ is its row reduced echelon form. Using information on elementary matrices, explain the connection between $\det(A)$ and $\det(R)$. Note: you may use the fact that if $M,N$ are two square matrices of the same size then $\det(MN)= \det(M)\det(N)$.



The only thing that is coming to my mind is that $A\cdot R=A^{-1}$, but that doesn't have anything to do with the determinant. Or that the sum of the diagonal within the row reduced form is the determinant of $A$, and if any elementary operation happens within $A$ it is also done in $R$, which would change the sum of the diagonal. Can someone point me in the right direction?


Answer



Just use the fact that when you row reduce a matrix $A$ you can write $R = E_k E_{k-1} \cdots E_{2}E_{1}A$ where the $E_i$ are the elementary matrices. Then you have; $$Det(R) = Det(E_k) \cdots Det(E_1) \cdot Det(A)$$
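
To spell the connection out (a standard fact, added for completeness): a row swap has determinant $-1$, scaling a row by $c \neq 0$ has determinant $c$, and adding a multiple of one row to another has determinant $1$. Each $\det(E_i)$ is therefore nonzero, so $\det(A)$ and $\det(R)$ differ only by a nonzero factor; in particular $\det(A) = 0$ if and only if $\det(R) = 0$.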



abstract algebra - Greatest common divisor in the Gaussian Integers



Let $a$ and $b$ be integers. Prove that their greatest common divisor in the ring of integers is the same as their greatest common divisor in the ring of Gaussian integers.




Ring of Gaussian Integers is:



$\mathbb{Z}[i]=\{a+ib: a,b\in \mathbb{Z}\} $



Attempt at proof:



Suppose that the GCD (greatest common divisor) of $a$ and $b$ is $d$. Then for any other common divisor of $a$ and $b$, say $e$, we must have that $e$ divides $d$. This is the definition of greatest common divisor.



Extend to Gaussian Integers. Suppose that $x+iy$ divides $a$ and $b$. That is:




$a=(x+iy)(n_0+in_1)$



$b=(x+iy)(m_0+im_1)$



Then I need to show that $x+iy$ divides $d$. From here I don't know where to go, I could write:
$a=Nd=(x+iy)(n_0+in_1)$ for some integer $N$.
But then I don't know that $N$ will divide $n_0+in_1$



Thanks for any help!



Answer



Remember that $d$ is a greatest common divisor of $a$ and $b$ if and only if:




  1. $d|a$ and $d|b$; and

  2. If $c|a$ and $c|b$, then $c|d$.



Let $a$ and $b$ be integers, and let $d$ be their greatest common divisor as integers. Then we know that $d$ satisfies the two properties above, where "divides" means "divides in $\mathbb{Z}$"; and $c$ in point 2 is an arbitrary integer.




The thing that makes this very easy is that in the integers, we can express a gcd as a linear combination: we know that there exist integers $\alpha$ and $\beta$ such that $d=\alpha a + \beta b$.



Thus, if $\mathbf{x}$ is a Gaussian integer that divides $a$ and divides $b$ (in the Gaussian integers), then it also divides $\alpha a$, it also divides $\beta b$, and hence also divides $\alpha a + \beta b$. Thus, if $\mathbf{x}$ divides $a$ and divides $b$ in the Gaussian integers, then it divides $d$ in the Gaussian integers.
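
For a concrete instance (my own, not in the original answer): take $a = 4$ and $b = 6$, whose greatest common divisor in $\mathbb{Z}$ is $d = 2 = (-1)\cdot 4 + 1\cdot 6$. Any Gaussian integer dividing both $4$ and $6$ therefore divides $(-1)\cdot 4 + 1\cdot 6 = 2$, so $2$ is a greatest common divisor of $4$ and $6$ in $\mathbb{Z}[i]$ as well.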



Thus, $d$ is a greatest common divisor for $a$ and $b$ in the Gaussian integers, since it satisfies the two requirements to be a greatest common divisor:




  1. $d|a$ and $d|b$ (in the Gaussian integers); and

  2. If $c$ is a Gaussian integer that divides both $a$ and $b$, then it divides $d$ (in the Gaussian integers).




Thus, $d$ is a greatest common divisor for $a$ and $b$ in $\mathbb{Z}[i]$.



No need to muck about with norms or with products.



P.S. Note that in the integers we can talk about "the" greatest common divisor because, even though every pair of numbers (except for $0$ and $0$) has two greatest common divisors ($d$ and $-d$), there is a natural way to "choose" one of them. In the Gaussian integers each pair of numbers (again, except $0$ and $0$) has four greatest common divisors (if $d$ is one, then so are $-d$, $id$, and $-id$); there is no universally accepted way of deciding which one would be "the" greatest common divisor, so generally you should talk about a greatest common divisor rather than the greatest common divisor.


nonstandard analysis - When "magnifying infinitesimals" why don't they have curvature? (Non-standard infinitesimal calculus)



I'm reading https://www.math.wisc.edu/~keisler/calc.html.




If you open up the chapter $2$ PDF, the two diagrams (the first on page $14$ of the PDF (not the text book), the second on page $15$) have me confused. Keisler says the accurate picture is the second, but to me this doesn't make sense (i.e. I think the first one makes sense).



Infinitesimals (though not commensurable with reals) have the same behavior and properties as reals... So why wouldn't the magnified infinitesimal also have curvature?



Infinitesimals are just really tiny fractions... and (e.g.) for $y=x^{1\over 2}$ ($0 < x < 1$; where $x$ is real) the fractions have curvature. So then for $y=x^{1\over 2}$ ($0 < x <$ smallest finite real; i.e. where $x$ is hyper-real) why wouldn't this have curvature ?



When I start putting in values of ${1 \over H}$ (where $H$ is infinite), ${2 \over H}$, ${3 \over H}$ into the function above why would this not have curvature?



Answer



Look at what he says immediately after giving the Increment Theorem in the form $\Delta y=dy+\epsilon\,dx$:




The Increment Theorem can be explained graphically using an infinitesimal microscope. Under an infinitesimal microscope, a line of length $\Delta x$ is magnified to a line of unit length, but a line of length $\epsilon\,\Delta x$ is only magnified to an infinitesimal length $\epsilon$.




He goes on to point out that when $f'(x)$ exists, the following two things are true:






  • The differential $dy$ and the increment $\Delta y=dy+\epsilon\,dx$ are so close to each other that they cannot be distinguished under an infinitesimal microscope.

  • The curve $y=f(x)$ and the tangent line at $(x,y)$ are so close to each other that they cannot be distinguished under an infinitesimal microscope; both look like a straight line of slope $f'(x)$.




Figure $2.2.3$ shows the operation of a single infinitesimal microscope, one that expands the infinitesimal $dx=\Delta x$ to unit length. In that picture you can clearly see $\Delta y-dy$: it’s roughly $40$% of $dx$. But in fact $\Delta y-dy=\epsilon\,dx$, where $\epsilon$ is an infinitesimal, so if the expanded $\Delta x(=dx)$ in the diagram has unit length, we shouldn’t be able to see $\Delta y-dy$ at all: it’s infinitesimal. Moreover, that means that we should see no separation between the tangent line and the curve; and since the tangent line is unquestionably straight, that means that what little of the curve appears in the infinitesimal microscope should also appear to be straight. In short, Figure $2.2.3$ grossly exaggerates the difference between $dy$ and $\Delta y$ and the visual curvature of the part of $y=f(x)$ that appears in the microscope.



In Figure $2.2.4$, on the other hand, the first infinitesimal microscope expands $dx=\Delta x$ to unit length and labels only $\Delta y$, not $dy$. If $dy$ were labelled in that picture, it would be visually identical to $\Delta y$, because at the scale of that microscope the difference $\Delta y-dy$ is infinitesimal (and hence invisible): $\Delta x$ appears in that microscope to be of unit length, so $\epsilon\,\Delta x$ appears to be of length $\epsilon$, too small to be seen. And the tangent line and the curve are shown as being visually indistinguishable at that scale, as we just saw that they should be.




The second infinitesimal microscope then expands $\epsilon\,\Delta x$ to unit length, so that we can see it. That means that we can also see the curve and the tangent line as distinct lines. Moreover, at that magnification the separation between them doesn’t change visibly over the field of view of the microscope, so both look straight.



We’re actually dealing with three size levels here: ordinary finite real numbers (e.g., $1$), infinitesimals comparable in size to $dx$, and infinitesimals comparable in size to $\epsilon\,dx$. Where $dx$ is infinitesimal compared with $1$, say, $\epsilon\,dx$ is infinitesimal not just in comparison with $1$, but in comparison with the infinitesimal $dx$.


Tuesday, July 25, 2017

combinatorics - solving power series expression?



Can anyone show how to simplify $$\frac{\sum_{n=1}^\infty n\frac{n^{n-1}}{n!}x^{n-1}}{(-1+\sum_{n=1}^\infty \frac{n^{n-1}}{n!}x^n)^2}$$


Answer



small hint




the derivative of $$\frac{1}{1-f}$$ is



$$\frac{f'}{(-1+f)^2}.$$


Monday, July 24, 2017

algebra precalculus - Total time spent travelling, given distance and speed functions

I have the following situation:



An object is traveling a certain (known) distance in a straight line. The object starts at rest, accelerates to its preset maximum speed then spends some time cruising at that speed, and finally decelerates to a halt.




The object does not follow Newton's second law of motion, but rather obeys the following equations:



While cruising, it's moving normally, so $d = v t$; while accelerating, $d = e^{k t}$ and $v = k e^{k t}$, where $e$ is the natural log base, $t$ is time, $k$ is a constant, $d$ is distance traveled, and $v$ is speed. When decelerating, the distance and speed equations are the same, but with a negative time component and are offset by the time and distance spent accelerating and cruising. The value of $k$ for deceleration is also different from the value for acceleration (it's the acceleration $k$ divided by $3$, or just $2$ if $k/3$ is greater than $2$, if that's important).



I am trying to calculate the total travel time given the distance to travel, the maximum speed, and the acceleration and deceleration constants. The calculation is simple in the case when the distance is great enough that the object reaches its maximum speed - I can calculate t for how long it will take it to reach max speed and then back to 0 from the speed formula; then I get the distance traveled while accelerating and decelerating from the distance equation, with the remaining distance traveled at cruising speed.



I am having difficulties figuring out when the object will begin decelerating if the distance is smaller than the distance required to accelerate to max speed and decelerate back to zero.



So, how would one go about intersecting the speed graphs for acceleration and deceleration to figure out the highest speed it will achieve over such a run? (I think I can figure out the total time from there)

calculus - Limit $\lim_{x \to \infty}x^e/e^x$



I had this problem on my math test, and was stuck on it for quite some time.




$\lim_{x \to \infty}x^e/e^x$



I knew that the bottom grew faster than the top, but I didn't know how to prove it. I wrote that the limit approaches 0, but I am not sure how to prove it mathematically.


Answer



Show first that it is an indeterminate form.



Then apply L'Hôpital's rule three times, differentiating the top and bottom each time.



$$\lim_{x \to \infty} \frac{x^e}{e^x}= \lim_{x \to \infty} \frac{ex^{e-1}}{e^x}=e(e-1)(e-2)\lim_{x \to \infty} \frac{1}{x^{3-e}e^x}=0$$


real analysis - Can someone help me with proving the convergence of a sequence written in this form?





From this exercise:




If $s_1 = \sqrt{2}$, and $$s_{n+1} = \sqrt{2+\sqrt{s_n}} \quad (n = 1,2,3,\dots),$$ prove that $\{s_n\}$ converges, and that $s_n< 2$ for $n = 1,2,3,\dots$.





in the book Principles of Mathematical Analysis. I find this book very obscure about many concepts, so I really need some help.
Can someone help me with proving the convergence of a sequence written in this form?


Answer



If such a limit $l$ exists we must have $$l=\sqrt{2+\sqrt l}\,,$$ or $$l^2=2+\sqrt l\,.$$ Define $e_n=s_n-l$. We show that $e_n\to 0$. We have $$s_{n+1}=\sqrt{2+\sqrt{s_n}}\,,$$ therefore $$e_{n+1}=\sqrt{2+\sqrt{s_n}}-l=\dfrac{2+\sqrt{s_n}-l^2}{\sqrt{2+\sqrt{s_n}}+l}=\dfrac{\sqrt{s_n}-\sqrt l}{\sqrt{2+\sqrt{s_n}}+l}=\dfrac{e_n}{(\sqrt{2+\sqrt{s_n}}+l)(\sqrt{s_n}+\sqrt l)}\,.$$ Hence $$|e_{n+1}|=\dfrac{|e_n|}{(\sqrt{2+\sqrt{s_n}}+l)(\sqrt{s_n}+\sqrt l)}\le \dfrac{|e_n|}{l\sqrt l}\le\dfrac{|e_1|}{(l\sqrt l)^{n}}\,,$$ since both $l$ and $\sqrt l$ are non-negative and $$\sqrt{2+\sqrt{s_n}}+l>l\,,\qquad \sqrt{s_n}+\sqrt l>\sqrt l\,;$$ the last inequality is obtained by iterating the one before it, i.e. $$|e_{n+1}|<\dfrac{|e_n|}{l\sqrt l}<\dfrac{|e_{n-1}|}{(l\sqrt l)^2}<\cdots<\dfrac{|e_1|}{(l\sqrt l)^n}\,,$$ which means that $|e_{n}|\to 0$, i.e. $e_n\to 0$.
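
Numerically the convergence is easy to watch. Here is a small sketch (mine, not part of the answer) iterating the recursion from $s_1 = \sqrt 2$:

#include <cmath>
#include <iostream>

int main() {
    // Iterate s_{n+1} = sqrt(2 + sqrt(s_n)) from s_1 = sqrt(2); the values
    // stay below 2 and settle quickly at the root l of l^2 = 2 + sqrt(l).
    double s = std::sqrt(2.0);
    for (int n = 1; n <= 10; ++n) {
        std::cout << "s_" << n << " = " << s << '\n';
        s = std::sqrt(2.0 + std::sqrt(s));
    }
}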


real analysis - Classifying Functions of the form $f(x+y)=f(x)f(y)$











The question is: is there a nice characterization of all nonnegative functions $f:\mathbb{R}\rightarrow \mathbb{R}$ such that $f(x+y)=f(x)f(y)$.



If $f$ is continuously differentiable, then I can prove that $f$ is exponential, but I don't know this in general. For what it's worth, in my particular case, I can assume that $f$ is right continuous, i.e. $\lim _{x\to c^+}f(x)=f(c)$, but that is all. From this, what can I deduce about the form of $f$?



Thanks much!


Answer



Given any function $g\colon \mathbb{R}\to\mathbb{R}$ such that $g(x+y)=g(x)+g(y)$, you obtain a function of the type you want by composing with an exponential function, $z\mapsto e^{az}$, since $f(x) = e^{ag(x)}$ satisfies
$$f(x+y) = e^{ag(x+y)} = e^{ag(x)+ag(y)} = e^{ag(x)}e^{ag(y)} = f(x)f(y).$$




Conversely, any function $f\colon\mathbb{R}\to\mathbb{R}$ such that $f(x+y) = f(x)f(y)$ must be nonnegative, since $f(x) = f(\frac{1}{2}x+\frac{1}{2}x) = \left(f(\frac{1}{2}x)\right)^2$. If $f(x)=0$ for some $x$, then $f(y) = 0$ for all $y$, since $f(y) = f(x+(y-x)) = f(x)f(y-x) = 0$. So one solution is $f(x)=0$ for all $x$. If $f(x)\gt 0$ for all $x$, then composing $f(x)$ with a logarithm function gives a function $g\colon\mathbb{R}\to\mathbb{R}$ such that $g(x+y) = g(x)+g(y)$.



So your question reduces to determining the functions $g\colon\mathbb{R}\to\mathbb{R}$ that are additive. Such functions satisfy $g(q) = g(1)q$ for all $q\in\mathbb{Q}$. Under very mild conditions one can conclude that the function is of the form $g(x) = ax$ with $a=g(1)$ for all $x\in\mathbb{R}$, but if you assume the axiom of choice, there are functions that are not of this form: pick any Hamel basis $\beta$ for $\mathbb{R}$ as a vector space over $\mathbb{Q}$, and fix $\alpha\in\beta$, $\alpha\notin\mathbb{Q}$. Define $g\colon\mathbb{R}\to\mathbb{R}$ by mapping $\alpha$ to $1$ and all other basis elements to $0$, and extend $\mathbb{Q}$-linearly. This map is additive, but not of the form $g(x)=g(1)x$.



(Any additive map from $\mathbb{R}$ to $\mathbb{R}$ must be $\mathbb{Q}$-linear, of course).


elementary number theory - Proving $\sqrt 3$ is irrational.



There is a very simple proof by means of divisibility that $\sqrt 2$ is irrational. I have to prove that $\sqrt 3$ is irrational too, as homework. I have done it as follows, ad absurdum:



Suppose

$$\sqrt 3= \frac p q$$



with $p/q$ irreducible, then



$$\begin{align}
& 3q^2=p^2 \\
& 2q^2=p^2-q^2 \\
&2q^2=(p+q)(p-q) \\
\end{align}$$




Now I exploit the fact that $p$ and $q$ can't be both even, so it is the case that they are either both odd, or have different parity. Suppose then that $p=2n+1$ and $q=2m+1$



Then it is the case that



$$\begin{align}
&p-q=2(n-m) \\
&p+q=2(m+n+1) \\
\end{align}$$



Which means that




$$\begin{align}
&2q^2=4(n-m)(m+n+1) \\
&q^2=2(n-m)(m+n+1) \\
\end{align}$$



Then $q^2$ is even, and so is $q$, which is absurd. Similarly, suppose
$q=2n$ and $p=2m+1$.



Then $p+q=2(m+n)+1$ and $p-q=2(m-n)+1$. So it is the case that




$$\begin{align}
&2q^2=(2(m-n)+1)(2(m+n)+1)\\
&2q^2=4(m^2+m-n^2)+1 \\
\end{align}$$



So $2q^2$ is odd, which is then absurd.







Is this valid?


Answer



It works, but can be simplified: $\rm\:mod\ 2\!: p\equiv p^2 = 3q^2 \equiv q,\:$ so $\rm\:p,q,\:$ being coprime, are odd. $\rm\:mod\ 4\!:\ odd\equiv \pm 1,\:$ so $\rm\:odd^2\equiv 1,\:$ so $\rm\: 1\equiv p^2 = 3q^2\equiv 3\ \Rightarrow\ 4\:|\:3-1\:\Rightarrow\Leftarrow$


Sunday, July 23, 2017

calculus - Limits with trigonometric functions without using L'Hospital Rule.

I want to find the limits $$\lim_{x\to \pi/2} \frac{\cos x}{x-\pi/2} $$
and
$$\lim_{x\to\pi/4} \frac{\cot x - 1}{x-\pi/4} $$



and
$$\lim_{h\to0} \frac{\sin^2(\pi/4+h)-\frac{1}{2}}{h}$$
without L'Hospital's Rule.



I know the fundamental limits $$\lim_{x\to 0} \frac{\sin x}{x} = 1,\quad \lim_{x\to 0} \frac{\cos x - 1}{x} = 0 $$




Progress



Using $\cos x=\sin\bigg(\dfrac\pi2-x\bigg)$ I got $-1$ for the first limit.

calculus - How do I derive $1 + 4 + 9 + \cdots + n^2 = \frac{n (n + 1) (2n + 1)}{6}$





I am introducing my daughter to calculus/integration by approximating the area under $y = x^2$ by calculating small rectangles below the curve.


This is very intuitive and I think she understands the concept however what I need now is an intuitive way to arrive at $\frac{n (n + 1) (2n + 1)} 6$ when I start from $1 + 4 + 9 + \cdots + n^2$.


In other words, just how did the first ancient mathematician come up with this formula - what were the first steps leading to this equation? That is what I am interested in, not the actual proof (that would be the second step).


Answer



Just as you can prove $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$ by



*oooo
**ooo
***oo
****o

you can prove $\frac{n (n + 1) (2n + 1)} 6$ by building a box out of 6 pyramids:


(figures: the six step-pyramids and the $n \times (n+1) \times (2n+1)$ box assembled from them)


Sorry the diagram is not great (someone can edit if they know how to make a nicer one). If you just build 6 pyramids you can easily make the n x n+1 x 2n+1 box out of it.


  • make 6 pyramids (1 pyramid = $1 + 2^2 + 3^2 + 4^2 + ...$ blocks)

  • try to build a box out of them


  • measure the lengths and count how many you used; that gives you the formula

Using these (glued) blocks: (figure: the unit blocks glued into step-pyramids)
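
If she wants to double-check the counting with a computer, a brute-force comparison of the running sum against the formula is short (my own sketch, not part of the answer):

#include <cstdint>
#include <iostream>

int main() {
    // Compare 1^2 + 2^2 + ... + n^2 with n(n+1)(2n+1)/6 for small n.
    std::uint64_t sum = 0;
    for (std::uint64_t n = 1; n <= 20; ++n) {
        sum += n * n;
        std::cout << "n = " << n << ": " << sum << " == "
                  << n * (n + 1) * (2 * n + 1) / 6 << '\n';
    }
}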


derivatives - How to prove this theorem about differentiability of a multivariable function?

The theorem says that:





A function $f: \mathbb{R}^2 \to \mathbb{R}$ is differentiable at $(x_0, y_0)$ if its partial derivatives $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are continuous at $(x_0, y_0)$.




How do I prove this? I know that for a function to be differentiable, the condition is:



$$ \displaystyle \lim_{\lVert h,k \rVert \to 0} \dfrac{f(x_0+h, y_0+k) - f(x_0, y_0) - h\,\frac{\partial f}{\partial x}\rvert_{(x_0, y_0)} - k\,\frac{\partial f}{\partial y}\rvert_{(x_0, y_0)}}{\lVert h,k \rVert} = \lim_{\lVert h,k \rVert \to 0} \mathcal O(h,k) = 0 $$



The problem is that the definition of differentiability uses the values of the partial derivatives at the point itself, and not the functions. I do not understand how I can "link" that statement to the definition of continuity of the partial derivatives, which are functions themselves. I am not even sure where I could start!

Saturday, July 22, 2017

calculus - Dirichlet integral.




I want to prove $\displaystyle\int_0^{\infty} \frac{\sin x}x \,\mathrm{d}x = \frac \pi 2$, and $\displaystyle\int_0^{\infty} \frac{|\sin x|}x \,\mathrm{d}x \to \infty$.




I found a proof on Wikipedia, but I can't understand it. I haven't learned differential equations, the Laplace transform, or even inverse trigonometric functions.



So please explain it simply.


Answer



About the second integral: Set $x_n = 2\pi n + \pi / 2$. Since $\sin(x_n) = 1$ and
$\sin$ is continuous in the vicinity of $x_n$, there exists $\epsilon, \delta > 0$ so that $\sin(x) \ge 1 - \epsilon$ for $|x-x_n| \le \delta$. Thus we have:
$$\int_0^{+\infty} \frac{|\sin x|}{x} dx \ge 2\delta\sum_{n = 0}^{+\infty} \frac{1 - \epsilon}{x_n} = \frac{2\delta(1-\epsilon)}{2\pi}\sum_{n=0}^{+\infty} \frac{1}{n + 1/4} \rightarrow \infty $$


radicals - How to prove that $\sqrt 3$ is an irrational number?








I know how to prove $\sqrt 2$ is an irrational number. Can someone tell me why $\sqrt 3$ is an irrational number?

real analysis - How to define a bijection between $(0,1)$ and $(0,1]$?




How to define a bijection between $(0,1)$ and $(0,1]$?
Or any other open and closed intervals?





If the intervals are both open like $(-1,2)\text{ and }(-5,4)$ I do a cheap trick (don't know if that's how you're supposed to do it):
I make a function $f : (-1, 2)\rightarrow (-5, 4)$ of the form $f(x)=mx+b$ by
\begin{align*}
-5 = f(-1) &= m(-1)+b \\
4 = f(2) &= m(2) + b
\end{align*}
Solving for $m$ and $b$ I find $m=3\text{ and }b=-2$ so then $f(x)=3x-2.$



Then I show that $f$ is a bijection by showing that it is injective and surjective.



Answer



Choose an infinite sequence $(x_n)_{n\geqslant1}$ of distinct elements of $(0,1)$. Let $X=\{x_n\mid n\geqslant1\}$, hence $X\subset(0,1)$. Let $x_0=1$. Define $f(x_n)=x_{n+1}$ for every $n\geqslant0$ and $f(x)=x$ for every $x$ in $(0,1)\setminus X$. Then $f$ is defined on $(0,1]$ and the map $f:(0,1]\to(0,1)$ is bijective.



To sum up, one extracts a copy of $\mathbb N$ from $(0,1)$ and one uses the fact that the map $n\mapsto n+1$ is a bijection between $\mathbb N\cup\{0\}$ and $\mathbb N$.
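
For a concrete instance (my own choice of sequence, not in the original answer), take $x_n = \frac{1}{n+1}$, so that $x_0 = 1$ and

$$ f(x) = \begin{cases} \dfrac{1}{n+2} & \text{if } x = \dfrac{1}{n+1} \text{ for some integer } n \geq 0, \\ x & \text{otherwise.} \end{cases} $$

Then $f(1) = \frac{1}{2}$, $f\left(\frac{1}{2}\right) = \frac{1}{3}$, and so on, and $f : (0,1] \to (0,1)$ is a bijection.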


Friday, July 21, 2017

modular arithmetic - A positive integer (in decimal notation) is divisible by 11 $\iff$ ...



(I am aware there are similar questions on the forum)



What is the Question?



A positive integer (in decimal notation) is divisible by $11$ if and only if the
difference of the sum of the digits in even-numbered positions and the sum of digits in odd-numbered positions is divisible by $11$.





For example consider the integer 7096276.



The sum of the even positioned digits is $0+7+6=13.$ The sum of the odd positioned
digits is $7+9+2+6=24.$ The difference is $24-13=11$, which is divisible by 11.



Hence 7096276 is divisible by 11.





(a)



Check that the numbers 77, 121, 10857 are divisible using this fact, and that 24 and 256 are not divisible by 11.



(b)



Show that the divisibility statement is true for three-digit integers $abc$.
Hint: $100 = 99+1$.



What I've Done?




I've done some research and have found some good explanations of divisibility proofs, whether for $3$, $9$, or even $11$. But... the question lets me take it as fact, so I don't need to prove the odd/even divisibility-by-$11$ rule.



I need some help on the modular arithmetic on these.



For example... Is 77 divisible by 11? $$7+7 = 14 \equiv ...$$



I don't know what to do next.



Thanks very much, and I need help on both (a) and (b).



Answer



In order to apply the divisibility rule, you have to distinguish between odd and even positioned digits. (It doesn't matter how you count.)



Example:
In 77 the first position is odd, the second even, so you would have to calculate $7-7=0$, which is divisible by 11.



Now it should be easy for you to understand what you try to prove in (b): If a,b,c are three digits abc is the number $100a+10b+c$. You know what it means to say that this number is divisible by 11. You have to prove that $$11\vert (a+c)-b \Leftrightarrow 11\vert 100a+10b+c$$ or with modular arithmetic
$$ (a+c)-b \equiv 0 \pmod{11}\Leftrightarrow 100a+10b+c\equiv 0 \pmod {11}\; .$$
I don't want to spoil the fun so I leave it there.
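
For part (a), the arithmetic can be mechanized. Here is a small sketch (mine, not part of the answer) that computes the alternating digit sum for the listed numbers:

#include <iostream>
#include <string>

// Alternating digit sum: digits in odd positions minus digits in even
// positions, counting positions from the left starting at 1.
int alternating_sum(const std::string& digits) {
    int sum = 0;
    for (std::size_t i = 0; i < digits.size(); ++i)
        sum += (i % 2 == 0 ? 1 : -1) * (digits[i] - '0');
    return sum;
}

int main() {
    for (const char* s : {"77", "121", "10857", "24", "256", "7096276"}) {
        int t = alternating_sum(s);
        std::cout << s << ": alternating sum " << t
                  << (t % 11 == 0 ? ", divisible by 11\n" : ", not divisible by 11\n");
    }
}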




P.S. Sorry, I hadn't noticed the answer posted in the meantime.


real analysis - Convergence of $\sum\limits_{n=1}^{\infty} (-1)^n \sin(\frac{x}{n})$




Show that $\displaystyle\sum_{n=1}^{\infty} (-1)^n \sin \left(\frac{x}{n}\right)$ converges uniformly on every finite interval.



At first thought, I try to find a bound, $M_n$, for the sine term such that $\displaystyle\sum_{n=1}^{\infty} M_n < \infty$.



However, the best I can find is that $\sin(\frac x n) \leq \frac R n + \frac{1}{n^3}$ for $x \in [-R,R]$



And now, I totally don't know how to proceed.



Thanks in advance.


Answer




Hint. One may recall that, by a Taylor series expansion, as $u \to 0$, we have
$$
\sin u= u+O(u^3)
$$ giving, as $n \to \infty$,
$$
\sin \frac{x}{n}= \frac{x}{n}+O_x\left(\frac{1}{n^3}\right)
$$ and, for any $N\ge1, M \ge1,$
$$
\sum_{n=N}^M(-1)^n\sin \frac{x}{n}=x \cdot \sum_{n=N}^M\frac{(-1)^n}{n}+\sum_{n=N}^M O_x\left(\frac{1}{n^3}\right)
$$ one may deduce that

$$
\left|\sum_{n=N}^M(-1)^n\sin \frac{x}{n}\right|\le |x| \cdot\left|\sum_{n=N}^M\frac{(-1)^n}{n}\right|+\sum_{n=N}^M\left|O_x\left(\frac{1}{n^3}\right)\right|
$$ which yields the uniform convergence of the given series over each compact set $[-R,R]$, $R>0$.


calculus - Determine the following limit as x approaches 0: $\frac{\ln(1+x)}{x}$


$$\lim_{x\to 0} \frac{\ln(1+x)}x$$


The process I want to take to solve this is by using the definition of the derivative, but I am getting confused (without L'Hôpital's rule).


$$\lim_{h \to 0} \frac{f(x+h) - f(x)}h$$



$$\lim_{h \to 0} \frac{\frac{\ln (1+x+h)}{x+h} - \frac{\ln(1+x)}x}h$$


$$\lim_{h \to 0} \frac{x\ln(1+x+h) - (x+h)\ln (1+x)}{hx(x+h))}$$


At this point I get confused because I know the answer is $1$, but I am not getting this answer through simplification of my formula.


Answer



You are talking about L'Hôpital's rule, so I assume you already know how to differentiate the logarithm. Now note, that


$$\frac{\log(x+1)}x = \frac{\log(x+1)-\log(1)}{(x+1)-1}$$


Thus


$$\lim_{x\to0}\frac{\log(x+1)}x = \lim_{x\to0}\frac{\log(x+1)-\log(1)}{(x+1)-1}=\left(\log(x)\right)^\prime_{x=1}=\left.\frac{1}x\right|_{x=1}=1$$


(This is not by using L'Hôpital's rule but only by using the definition of derivative and knowing the derivative of $\log(x)$)


reference request - how do we know that an integral is non-elementary?











Is there a condition that tells us that an indefinite integral is non-elementary?


Answer



There is a decision procedure called the Risch algorithm that will either tell you that the integral is non-elementary, or produce an elementary anti-derivative. It is not an easy algorithm to execute, or even implement in a computer algebra system (although the latter has been done), so there is no hope of finding an easy condition for the existence of an anti-derivative.


complex numbers - What did I do wrong? 1 = √1 = √(-1)(-1) = √(-1) √(-1) = i.i = i² = -1

I'm a simple man living my life and enjoying mathematics now and then. Today during lunch my friend asked me about complex numbers and $i$. I told him what I knew and we went back to work.


After work I decided to read up on complex numbers and I somehow ended up with this equation:


$$ 1 = \sqrt 1 = \sqrt{(-1)(-1)} = \sqrt{(-1)} \ \sqrt{(-1)} = i \cdot i = i^2 = -1 $$


Somehow I got that $1 = -1.$ I can't see a contradiction. Did I just break math? What happened? Where is my mistake?

Thursday, July 20, 2017

integration - Gaussian integral variant $\int_{-\infty}^\infty \frac{e^{-x^2}}{1+a e^{-x}}\, dx$



I have been trying to compute this integral
$$\int_{-\infty}^{\infty} \frac{e^{-x^2}}{1+a e^{-x}} dx$$
quickly and to a high-degree of accuracy.




I have some partial results, for example for $n \in \mathbb N$
$$ \frac{1}{\sqrt\pi} \int_{-\infty}^{\infty} \frac{e^{-x^2}}{1+e^{-(x+n/2)}}dx = \frac{1}{2}\sum_{i=0}^{2n}(-1)^{i} e^{-i(2n-i)/4},$$
and for $a \in (0,1)$ we have
$$ \frac{1}{\sqrt\pi} \int_{0}^\infty \frac{e^{-x^2}}{1+ae^{-x}}dx = \frac{1}{2}\sum_{n=0}^\infty (-1)^n a^n e^{n^2/4} \mathrm{erfc}(n/2)$$
where $\mathrm{erfc}$ is the complementary error function
$$ \mathrm{erfc}(x) = 1-\mathrm{erf}(x),$$
which we obtain by using the geometric series expansion
$$ \frac{1}{1+ae^{-x}} = \sum_{n=0}^\infty (-1)^n a^n e^{-nx}.$$
This $\mathrm{erfc}$ series does not converge very quickly unless $a$ is very small, and wolframalpha can compute the integral very accurately between say $-100$ and $100$ for various values of $a$, so I must be missing a trick.




Edit
I found an infinite series for the whole integral, but it converges slowly
$$\frac{1}{\sqrt\pi}\int_{-\infty}^\infty \frac{e^{-x^2}}{1+e^{-(x+a)}}dx = \frac{e^{-a^2}}{2}\left[\sum_{n=0}^\infty (-1)^n \mathrm{erfcx}(-a+n/2) + \sum_{n=1}^\infty (-1)^{n+1} \mathrm{erfcx}(a+n/2)\right],$$
where $\mathrm{erfcx}$ is the scaled complement error function
$$ \mathrm{erfcx}(x) = e^{x^2}\, \mathrm{erfc}(x) = e^{x^2}(1-\mathrm{erf}(x)). $$


Answer



Since your integrand is bell-shaped and has no singularities on the real axis for $a>0$, the Euler-Maclaurin expansion shows that simple trapezoidal quadrature converges exponentially fast.



Using this code, I was able to evaluate your integral to float precision in 17 integrand evaluations, to double in 33 evaluations, long double in 65, and 50 binary digits in 260. Doubling of the number of correct digits by halving $h$ is observed.




Assuming that your integrand takes 100ns to evaluate, this implies that you can compute your integral to double precision in ~3.3 microseconds.



Here's the code I used



// Assumes <iostream>, <cmath>, <limits> and boost/math/quadrature/trapezoidal.hpp
// are included, and that Real is a floating-point alias, e.g. using Real = double;
Real a = 2;
auto f = [&](Real t) { return exp(-t*t)/(1 + a*exp(-t)); };
// Outside [-t_max, t_max] the integrand is below machine epsilon.
Real t_max = sqrt(-log(std::numeric_limits<Real>::epsilon()));
Real tol = sqrt(std::numeric_limits<Real>::epsilon());  // target relative error
Real Q = boost::math::quadrature::trapezoidal(f, -t_max, t_max, tol);
std::cout << "Q = " << Q << std::endl;



You presumably are using a different programming environment, but the trapezoidal rule is simple enough to code that this shouldn't be a problem.
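For readers without Boost, here is a minimal self-contained sketch of the same idea (fixed-step composite trapezoidal rule on $[-t_{\max}, t_{\max}]$; the step count $n=64$ is illustrative, not tuned):

#include <cstdio>
#include <cmath>
#include <limits>

int main() {
    double a = 2.0;
    auto f = [&](double t) { return std::exp(-t*t) / (1.0 + a*std::exp(-t)); };
    // Truncate where exp(-t^2) falls below machine epsilon.
    double t_max = std::sqrt(-std::log(std::numeric_limits<double>::epsilon()));
    int n = 64;                        // number of subintervals (illustrative)
    double h = 2.0 * t_max / n;
    double sum = 0.5 * (f(-t_max) + f(t_max));
    for (int k = 1; k < n; ++k)
        sum += f(-t_max + k*h);
    std::printf("Q ~ %.15f\n", h * sum);
    return 0;
}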


Finding Real & Distinct solutions in complex numbers for equation $x^2+4x-1+k(x^2+2x+1)=0$.

I was going through my year 12 text book doing complex numbers when in chapter review I was faced with a question I've got no idea how to answer.





Consider the equation $x^2+4x-1+k(x^2+2x+1)=0$. Find the set of real values for $k$, where $k \neq -1$, for which the two solutions of the equation are:

  • Real and distinct,

  • Real and equal,

  • Complex with positive real part and non-zero imaginary part.




Please help me guys, there is nothing like this in the chapter questions and even my teacher is stumped as the book has the answers but no working out.

Wednesday, July 19, 2017

linear algebra - What are the eigenvalues of a matrix with all elements equal to 1?




As in the subject: given a matrix $A$ of size $n$ with all elements equal to exactly $1$.



What are the eigenvalues of that matrix ?



Answer



Suppose $\,\begin{pmatrix}x_1\\x_2\\...\\x_n\end{pmatrix}\,$ is an eigenvector of such a matrix corresponding to an eigenvalue $\,\lambda\,$, then



$$\begin{pmatrix}1&1&...&1\\1&1&...&1\\...&...&...&...\\1&1&...&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\...\\x_n\end{pmatrix}=\begin{pmatrix}x_1+x_2+...+x_n\\x_1+x_2+...+x_n\\.................\\x_1+x_2+...+x_n\end{pmatrix}=\begin{pmatrix}\lambda x_1\\\lambda x_2\\..\\\lambda x_n\end{pmatrix}$$



One obvious solution to the above is



$$W:=\left\{\begin{pmatrix}x_1\\x_2\\..\\x_n\end{pmatrix}\;;\;x_1+...+x_n=0\right\}\,\,\,,\,\,\lambda=0$$



For sure, $\,\dim W=n-1\,$ (no need to be a wizard to "see" this solution since the matrix is singular and thus one of its eigenvalues must be zero)




Other solution, perhaps not as trivial as the above but also pretty simple, imo, is



$$U:=\left\{\begin{pmatrix}x_1\\x_2\\..\\x_n\end{pmatrix}\;;\;x_1=x_2=...=x_n\right\}\,\,\,,\,\,\lambda=n$$



Again, it's easy to check that $\,\dim U=1\,$ .



Now, just pay attention to the fact that $\,W\cap U=\{0\}\,$ unless the dimension of the vector space $\,V\,$ we're working on is divisible by the characteristic of the ground field (if you're used to real/complex vector spaces and you aren't sure what the characteristic of a field is, disregard this last comment)



Thus, assuming this is the case, we get $\,\dim(W+U)=n=\dim V\Longrightarrow V=W\oplus U\,$ and we've thus found all the possible eigenvalues there are.




BTW, as a side effect of the above, we get that our matrix is diagonalizable.
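As a concrete check for $n=3$: the vectors $(1,-1,0)^T$ and $(1,0,-1)^T$ span $W$ and are annihilated by the matrix, while $(1,1,1)^T$ spans $U$, with
$$\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix}\begin{pmatrix}1\\1\\1\end{pmatrix}=\begin{pmatrix}3\\3\\3\end{pmatrix}=3\begin{pmatrix}1\\1\\1\end{pmatrix},$$
so the eigenvalues are $0$ (with multiplicity $2$) and $n=3$, and the characteristic polynomial is, up to sign, $\lambda^{2}(\lambda-3)$.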


linear algebra - Proof that $(AA^{-1}=I) Rightarrow (AA^{-1} = A^{-1}A)$




I'm trying to prove a pretty simple fact: the commutativity of multiplication of a matrix and its inverse.



But I'm not sure if my proof is correct, because I'm not very experienced. Could you please take a look at it?







My proof:




  • We know that $AA^{-1}=I$, where $I$ is an identity matrix and $A^{-1}$ is an inverse matrix.

  • I want to prove that it implies $AA^{-1}=A^{-1}A$



\begin{align}
AA^{-1}&=I\\
AA^{-1}A&=IA\\
AX&=IA \tag{$X=A^{-1}A$}\\
AX&=A
\end{align}
At this point we can see that $X$ must be a multiplicative identity for matrix $A \Rightarrow X$ must be the identity matrix $I$.



\begin{align}
X = A^{-1}A &= I\\
\underline{\underline{AA^{-1} = I = A^{-1}A}}
\end{align}


Answer



Your claim is not quite true. Consider the example
\begin{align}
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & 1\\
0 & 0
\end{pmatrix} =
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}.
\end{align}
Suppose $A, B$ are square matrices such that $AB = I$. Observe
\begin{align}
BA= BIA= BA\, BA = (BA)^2 \ \ \Rightarrow \ \ BA(I-BA) = 0.
\end{align}
Moreover, using the fact that $AB$ invertible implies $A$ and $B$ are invertible (which is true only in finite-dimensional vector spaces), it follows that
\begin{align}
I-BA=0.
\end{align}



Note: we have used the fact that $A, B$ are square matrices when we inserted $I$ into $BA$ to write $BA = B(AB)A$.
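To see explicitly why squareness matters in the counterexample above, multiply the same two matrices in the other order:
$$\begin{pmatrix}1 & 0\\0 & 1\\0 & 0\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\end{pmatrix}=\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 0\end{pmatrix}\neq I_3,$$
so for non-square matrices a one-sided inverse need not be two-sided.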


algebra precalculus - Summation of natural number set with power of $m$


Does anyone know about the summation of this series: $$\sum\limits_{i=1}^{n}i^m $$ where $m$ is constant and $m\in \mathbb{N}$?


thanks


Answer



Look up Faulhaber's formula. See for example http://en.wikipedia.org/wiki/Faulhaber%27s_formula.
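For small exponents the formula specializes to the familiar closed forms:
$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2},\qquad \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6},\qquad \sum_{i=1}^{n} i^3 = \left(\frac{n(n+1)}{2}\right)^{2}.$$
In general $\sum_{i=1}^{n} i^m$ is a polynomial in $n$ of degree $m+1$ with leading term $\frac{n^{m+1}}{m+1}$.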


summation - Prove by Induction: $\sum n^3=\left(\sum n\right)^2$

I am trying to prove that for any integer $n \ge 1$, this is true:



$$ (1 + 2 + 3 + \cdots + (n-1) + n)^2 = 1^3 + 2^3 + 3^3 + \cdots + (n-1)^3 + n^3$$



I've done the base case and I am having problems in the step where I assume that the above is true and try to prove for $k = n + 1$.



I managed to get,




$$(1 + 2 + 3 + \cdots + (k-1) + k + (k+1))^2 = (1 + 2 + 3 + \cdots + (k-1) + k)^2 + (k + 1)^3$$



but I'm not quite sure what to do next as I haven't dealt with cases where both sides could sum up to an unknown integer.
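One possible way to finish from here (a sketch, using the triangular-number identity $1+2+\cdots+k=\frac{k(k+1)}{2}$): expanding the square on the left,
$$\left(\sum_{j=1}^{k} j + (k+1)\right)^{2} = \left(\sum_{j=1}^{k} j\right)^{2} + 2(k+1)\cdot\frac{k(k+1)}{2} + (k+1)^2 = \left(\sum_{j=1}^{k} j\right)^{2} + (k+1)^3,$$
since $k(k+1)^2 + (k+1)^2 = (k+1)^3$. This is exactly the displayed equality, so the induction step follows from the hypothesis.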

calculus - $\lim_{n\to \infty} n^3\left(\frac{3}{4}\right)^n$




I have the sequence $a_n=(n^3)\big(\frac{3}{4}\big)^n$
and I need to find whether it converges or not and the limit.



I took the common ratio which is $\frac{3 (n+1)^3}{4 n^3}$ and since $\big|\frac{3}{4}\big|<1$ it converges.
I don't know how to find the limit from here.


Answer



If $\frac {a_{n+1}} {a_n} \to l$ with $|l|<1$ then $a_n \to 0$. (In fact the series $\sum a_n$ converges).



In this case $\frac {a_{n+1}} {a_n} =(1+\frac 1 n)^{3}(\frac 3 4) \to \frac 3 4$.



real analysis - Show that $\{x_n\}$ is convergent and monotone



Question: For $c>0$, consider the quadratic equation
$$
x^2-x-c=0,\qquad x>0.
$$
Define the sequence $\{x_n\}$ recursively by fixing $x_1>0$ and then, if $n$ is an index for which $x_n$ has been defined, defining
$$
x_{n+1}=\sqrt{c+x_n}.
$$

Prove that the sequence $\{x_n\}$ converges monotonically to the solution of the above equation.



My uncompleted solution: Generally speaking, the sequence $\{x_{n+1}\}$ is a subsequence of $\{x_n\}$. Hence, if $\lim_{n \to \infty} x_n = x_s$, then $\lim_{n \to \infty} x_{n+1} = x_s$ as well. So, from the sum and product properties of convergent sequences,
(finally) it follows that $x_s=\sqrt{c+x_s}$, which is equivalent to the mentioned quadratic equation for $x>0$. To show that $\{x_n\}$ is monotone, it is enough to show that it is bounded, since $\{x_n\}$ is convergent. But I don't know how to show that $\{x_n\}$ is bounded.



Thank you in advance for a clear guidance/solution.



EDIT: (in view of the first two comments to this question) The question is: show that $\{x_n\}$ is (1) convergent and (2) monotone.


Answer



Let $f(x)=\sqrt{c+x}$, $x>0$. There is a unique fixed point $x^*$ such that $f(x^*)=x^*$. $x^*$ is in fact the positive solution of $x^2-x-c=0$. The sequence $x_n$ is defined by $x_{n+1}=f(x_n)$.




If $0<x<x^*$, then $x<f(x)<x^*$. From this it is easy to show that if $0<x_1<x^*$ then $x_n$ is increasing and bounded above by $x^*$.

If $x>x^*$, then $x^*<f(x)<x$. From this it is easy to show that if $x_1>x^*$ then $x_n$ is decreasing and bounded below by $x^*$. In either case $x_n$ converges and the limit is $x^*$.



If $x_1=x^*$, then $x_n=x^*$ for all $n$.
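A minimal numerical sketch of this trichotomy (the value $c=2$, for which $x^*=2$, and the seed $x_1=5$ are chosen purely for illustration):

#include <cstdio>
#include <cmath>

int main() {
    // Iterate x_{n+1} = sqrt(c + x_n) with c = 2; the fixed point is the
    // positive root of x^2 - x - 2 = 0, namely x* = 2.
    double c = 2.0, x = 5.0;  // x_1 > x*, so we expect a monotone decrease to 2
    for (int n = 1; n <= 10; ++n) {
        std::printf("x_%d = %.10f\n", n, x);
        x = std::sqrt(c + x);
    }
    return 0;
}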


Tuesday, July 18, 2017

real analysis - Representing an integral as a finite sum



Question: If $a$ is in an arbitrary commensurable ratio to $\pi$, that is $a=\tfrac {m\pi}n$, then if $m+n$ is odd$$\int\limits_0^1\mathrm dy\,\frac {y^x\sin a}{1+2y\cos a+y^2}=\frac 12\sum\limits_{i=1}^{n-1}(-1)^{i-1}\sin ia\left[\frac {\mathrm d\Gamma\left(\frac {x+n+i}{2n}\right)}{\mathrm dx}-\frac {\mathrm d\Gamma\left(\frac {x+i}{2n}\right)}{\mathrm dx}\right]$$and when $m+n$ is even$$\int\limits_0^1\mathrm dy\,\frac {y^x\sin a}{1+2y\cos a+y^2}=\frac 12\sum\limits_{i=1}^{(n-1)/2}(-1)^{i-1}\sin ia\left[\frac {\mathrm d\Gamma\left(\frac {x+n+i}n\right)}{\mathrm dx}-\frac {\mathrm d\Gamma\left(\frac {x+i}n\right)}{\mathrm dx}\right]$$



I’m just having difficulty finding out where to start. Since the integral equals an infinite sum, it might be wise to start off with a taylor expansion of some sort. However, which function to expand I’m not very sure.


If you guys have any ideas, I would be happy to hear them. Thanks!

elementary number theory - Let $a \in \Bbb Z$ such that $\gcd(9a^{25}+10:280)=35$. Find the remainder of $a$ when divided by 70.



I'm stuck with this problem from my algebra class. We've recently been introduced to Fermat's little theorem and the Chinese Remainder Theorem.




Let $a \in \Bbb Z$ such that $\gcd(9a^{25}+10:280)=35$. Find the remainder of $a$ when divided by 70.





So far I've tried to solve the congruence equation $9a^{25} \equiv -10 \pmod {35}$. The result (using inverses and Fermat's little theorem) is $a \equiv 30 \pmod {35}$.



If this is ok, what should I do next? Thanks!


Answer



Yes, $a\equiv 30\pmod{35}$. Since you have a proof of this, I will not write one out. Now you are nearly finished. Note also that $a$ is odd:
this is because if $a$ were even, the gcd of $9a^{25}+10$ and $280$ would be even.



Since $a\equiv 30\pmod{35}$ and $a$ is odd, it follows that $a\equiv 65\pmod{70}$.
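Spelled out via the Chinese Remainder Theorem: modulo $70$, the condition $a \equiv 30 \pmod{35}$ leaves only the candidates $30$ and $65$, and of these only $65$ is odd, so
$$a \equiv 30 \pmod{35} \ \text{ and } \ a \equiv 1 \pmod{2} \iff a \equiv 65 \pmod{70}.$$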







summation - Trouble understanding $\sum$ notation



Question:




What does $$\left(\sum_{a, b, c}a\right)^2$$ mean ?




The answer given is $(a+b+c)^2$. However, I am having trouble understanding this.




I have seen this, this and this. But none helped.



Please help me understand this.



EDIT:



Also, what would be a better way to write:



$$\sum_{a,b,c}(b-c)(b+c)$$




Thanks.


Answer



The notation is very bad, since the summation index is apparently named $a$, while $a$ is also one of the values it takes. Better notations for this would be
$$
\left(\sum_{k=a,b,c} k\right)^2\quad\text{or}\quad\left(\sum_{k\in\{a,b,c\}} k\right)^2.
$$
Here the summation index is explicitly labeled $k$ and it takes values $a$, $b$ and $c$, so the sum is $a+b+c$.



The sum in the edit is even worse; I can't tell you what it means. I'd say it is just wrong. Summations should always come with a summation index.







In the comments you mentioned that they call these summations cyclic expressions, which does indeed help in guessing what it's supposed to be. Consider the summand $(b-c)(b+c)$ as an expression in $a,b,c$ and then look at the same term for the cyclic permutations $b,c,a$ and $c,a,b$. So I guess they want
$$
\sum_{a,b,c}(b-c)(b+c) = (b-c)(b+c) + (c-a)(c+a) + (a-b)(a+b).
$$
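Incidentally, under this cyclic reading the sum collapses completely:
$$(b-c)(b+c) + (c-a)(c+a) + (a-b)(a+b) = (b^2-c^2)+(c^2-a^2)+(a^2-b^2) = 0.$$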


Monday, July 17, 2017

integration - prove that $\int_{-\infty}^{\infty} \frac{x^4}{1+x^8}\, dx= \frac{\pi}{\sqrt 2} \sin \frac{\pi}{8}$



prove that $$\int_{-\infty}^{\infty} \frac{x^4}{1+x^8} dx= \frac{\pi}{\sqrt 2} \sin \frac{\pi}{8}$$



My attempt: let $C$ be the semicircle in the upper half of the complex plane.


The simple poles $e^{i\frac{\pi}{8}}, e^{i\frac{3\pi}{8}}, e^{i\frac{5\pi}{8}}, e^{i\frac{7\pi}{8}}$ lie inside the contour formed by $C$ and the real axis.


Given integral value $= 2\pi i \cdot (\text{sum of residues}) = 2 \pi i \left(\frac{-1}{8}\right) \left[e^{i\frac{5\pi}{8}}+e^{i\frac{15\pi}{8}}+e^{i\frac{25\pi}{8}}+e^{i\frac{35\pi}{8}}\right] = 0.27059 \pi$


This is numerically equal to $\frac{\pi}{\sqrt 2} \sin \frac{\pi}{8}$. But how can I get this expression without using a calculator?


Answer



Just extending what you've got so far. Let's note $z=e^{i\frac{5\pi}{8}}$ and recall that $\cos{z}=\frac{e^{iz}+e^{-iz}}{2}$, then: $$2 \pi i \left(\frac{-1}{8}\right) \left[e^{i\frac{5\pi}{8}}+e^{i\frac{15\pi}{8}}+e^{i\frac{25\pi}{8}}+e^{i\frac{35\pi}{8}}\right]= 2 \pi i \left(\frac{-1}{8}\right) \left[z+z^3+z^5+z^7\right]=\\ 2 \pi i \left(\frac{-1}{8}\right) z \left[1+z^2+z^4+z^6\right] = 2 \pi i \left(\frac{-1}{8}\right) z \left[1+z^2+z^4(1+z^2)\right]=\\ 2 \pi i \left(\frac{-1}{8}\right) z (1+z^2)(1+z^4)= 2 \pi i \left(\frac{-1}{8}\right) z^4 (z^{-1}+z)(z^{-2}+z^2)=\\ 2 \pi i \left(\frac{-1}{8}\right) e^{i\frac{5\pi}{2}} 2\cos\left(\frac{5\pi}{8}\right)2 \cos\left(\frac{5\pi}{4}\right)=\pi i(-1)i \cos\left(\frac{\pi}{2}+\frac{\pi}{8}\right)\left(-\frac{1}{\sqrt{2}}\right)=\\ \frac{\pi}{\sqrt{2}}\sin\left(\frac{\pi}{8}\right)$$
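For the record, the residues used above come from the standard formula at a simple pole: if $p^8 = -1$, then
$$\operatorname*{Res}_{z=p}\frac{z^4}{1+z^8} = \frac{p^4}{8p^7} = \frac{1}{8}\,p^{-3} = -\frac{p^5}{8},$$
since $p^{-3} = p^5/p^8 = -p^5$; this is where the factor $-\frac{1}{8}$ and the exponents $\frac{5\pi}{8}, \frac{15\pi}{8}, \dots$ in the sum originate.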



real analysis - $f$ is linear and continuous at a point $\implies f$ should be $f(x)=ax$, for some $a \in \mathbb R$

Let $f$ be a real-valued function defined on $\mathbb R$ such that $f(x+y)=f(x)+f(y)$.
Suppose there exists at least one element $x_0 \in \mathbb R$ such that $f$ is continuous at $x_0$. Then prove that $f(x)=ax$ for some $a \in \mathbb R$.




Hints will be appreciated.

Sunday, July 16, 2017

probability theory - Possibly broken definition of the strong Markov property

Let





  • $I\subseteq [0,\infty)$ be closed under addition and $0\in I$

  • $E$ be a Polish space and $\mathcal E$ be the Borel $\sigma$-algebra on $E$

  • $X=(X_t)_{t\in I}$ be a Markov process with values in $(E,\mathcal E)$ and distributions $(\operatorname P_x)_{x\in E}$

  • $\mathbb F=(\mathcal F_t)_{t\in I}$ be the filtration generated by $X$



$X$ is said to have the strong Markov property $:\Leftrightarrow$ For all almost surely finite $\mathbb F$-stopping times $\tau$, $x\in E$ and bounded, $\mathcal E^{\otimes I}$-measurable $f:E^I\to\mathbb R$, $$\operatorname E_x\left[f\left(\left(X_{\tau+t}\right)_{t\in I}\right)\mid\mathcal F_\tau\right]=\operatorname E_{X_\tau}\left[f\left(X\right)\right]\;\;\;\operatorname P_x\text{-almost surely}\;,\tag 1$$ where $\mathcal F_\tau:=\left\{A\in\mathcal A:A\cap\left\{\tau\le t\right\}\in\mathcal F_t\;\text{for all }t\in I\right\}$.



I'm curious about two things:





  1. What's the reason to force $\tau$ to be almost surely finite? What's meant by almost surely at all (with respect to which probability measure?), in this context?

  2. Unless $\tau$ is $\operatorname P_x$-almost surely finite, the integrand on the left and the expression on the right side of $(1)$ seem to be undefined on $\left\{\tau=\infty\right\}$



So, is the given definition broken? If that's the case: What do we need to change to fix it?

Saturday, July 15, 2017

trigonometry - Euler formula and $\sin^3$

Using the formula:



$$e^{i\omega t} = \cos {\omega t} + i\sin{\omega t}$$



I would like to prove that:



$$\sin^3\;x = -\frac{\sin{3x} - 3\sin{x}}{4} $$



However, I haven't found any approach to this question. Just converting the first formula to $\sin^3$ doesn't seem to help, as I'm still getting $\cos^3$ on the other side. Can anyone guide me in the right direction?
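A sketch of one standard route (writing $\sin x$ itself via the Euler formula, so no $\cos^3$ term ever appears): since $\sin x = \frac{e^{ix}-e^{-ix}}{2i}$ and $(2i)^3 = -8i$,
$$\sin^3 x = \frac{\left(e^{ix}-e^{-ix}\right)^3}{-8i} = \frac{\left(e^{3ix}-e^{-3ix}\right)-3\left(e^{ix}-e^{-ix}\right)}{-8i} = \frac{2i\sin 3x - 6i\sin x}{-8i} = -\frac{\sin 3x - 3\sin x}{4}.$$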

Friday, July 14, 2017

geometry - Problem with the Pythagorean theorem




The Pythagorean theorem has already been proved and it is a basic fact of math. It always works, and there are proofs of it. But I have found a problem.



Say you want to get from point A to point B.



an image



Here is a way to do it, where red is vertical movement and grey is horizontal movement.




another image



Now say you split the path up like this. Note that it is the same length, as you can see from the color of the lines:



another image again



You can continue to do this... (note that the path still continues to stay the same length):



yet another image




And if you continue forever, the path will become diagonal.



yet another image again



But now there's a problem. This contradicts the Pythagorean theorem:



so many images!



I know the Pythagorean theorem is true and proven, so what is wrong with this series of steps that I went through?



Answer



By splitting the path you have essentially created lots of little triangles. You still need to apply Pythagoras' theorem to each one. If you do, then you will get the correct answer.
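Concretely: if the two legs have lengths $a$ and $b$, then every staircase has total length $a+b$, while applying the Pythagorean theorem to each of the $n$ little triangles gives a total of
$$n \cdot \sqrt{\left(\frac{a}{n}\right)^2+\left(\frac{b}{n}\right)^2} = \sqrt{a^2+b^2} < a+b.$$
The staircases converge to the diagonal pointwise, but their lengths do not converge to its length: arc length is not continuous under this kind of limit.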


sequences and series - Is there any geometry behind the Basel problem?



I could find many beautiful and rigorous proofs for Euler's solution to the Basel problem here Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)



Basel problem solution




But I am curious to know whether there are proofs by using geometry.



If anyone has proofs by geometry, please do share it with us.


Answer



Funny you should ask this today. A great video by the YouTuber 3Blue1Brown was just posted today. (Aside: I recommend all his videos.)



The proof is based on the result mentioned by "3 revs" in the MO thread mentioned by user296602 above.


Thursday, July 13, 2017

real analysis - Proving Uniform Continuity

I'd like to prove that if $f$ is continuous on $[a, \infty)$ and $\lim_{x \to \infty} f(x)$ exists and is finite, then $f$ is uniformly continuous on $[a, \infty)$.


My book contains a lot of theorems that have to do with proving uniform continuity, but all of them require the set to be closed and bounded. Any help would be appreciated!

number theory - Prove that: $2^{2^{n}}+1\mid 2^{x_{n}}-2$ with $n=1,2,3,\dots$

Question :



Let $n>0$ be a natural number. Use the inequality $2^{n}\ge n+1$ to prove that:



$2^{2^{n}}+1\mid 2^{x_{n}}-2$ where :



$x_{n}=2^{2^{n}}+1$




My attempt :



I think I should use induction:



For $n=1$, $x_{n}=5$ and $2^{x_n}-2=30$, so $5\mid 30$: correct.



Now for $n+1$ we will prove that :



$x_{n+1}\mid 2^{x_{n+1}}-2$.




I don't know how to prove it using $2^{n}\ge n+1$.



If anyone knows another method, please share it here.



Thanks!

calculus - Universal Chord Theorem




Let $f \in C[0,1]$ and $f(0)=f(1)$.



How do we prove $\exists a \in [0,1/2]$ such that $f(a)=f(a+1/2)$?



In fact, for every positive integer $n$, there is some $a$, such that $f(a) = f(a+\frac{1}{n})$.



For any other non-zero real $r$ (i.e. not of the form $\frac{1}{n}$), there is a continuous function $f \in C[0,1]$, such that $f(0) = f(1)$ and $f(a) \neq f(a+r)$ for any $a$.



This is called the Universal Chord Theorem and is due to Paul Levy.




Note: the accepted answer answers only the first question, so please read the other answers too, and also this answer by Arturo to a different question: https://math.stackexchange.com/a/113471/1102






This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.



and here: List of abstract duplicates.


Answer



You want to use the intermediate value theorem, but not applied to $f$ directly. Rather, let $g(x)=f(x)-f(x+1/2)$ for $x\in[0,1/2]$. You want to show that $g(a)=0$ for some $a$. But $g(0)=f(0)-f(1/2)=f(1)-f(1/2)=-(f(1/2)-f(1))=-g(1/2)$. This gives us the result: $g$ is continuous and changes sign, so it must have a zero.


real analysis - The $\ell^{\infty}$-norm is equal to the limit of the $\ell^{p}$-norms.




If we are in a sequence space, then the $ l^{p} $-norm of the sequence $ \mathbf{x} = (x_{i})_{i \in \mathbb{N}} $ is $ \displaystyle \left( \sum_{i=1}^{\infty} |x_{i}|^{p} \right)^{1/p} $.



The $ l^{\infty} $-norm of $ \mathbf{x} $ is $ \displaystyle \sup_{i \in \mathbb{N}} |x_{i}| $.




Prove that the limit of the $ l^{p} $-norms is the $ l^{\infty} $-norm.



I saw an answer for $ L^{p} $-spaces, but I need one for $ l^{p} $-spaces. Besides, I didn’t really understand the $ L^{p} $-answer either.



Thanks for your help!


Answer



Let me state the result properly:





Let $x=(x_n)_{n \in \mathbb{N}} \in \ell^q$ for some $q \geq 1$. Then $$\|x\|_{\infty} = \lim_{p \to \infty} \|x\|_p. \tag{1}$$




Note that $(1)$ does, in general, not hold if $x=(x_n)_{n \in \mathbb{N}} \notin \ell^q$ for all $q \geq 1$ (consider for instance $x_n := 1$ for all $n \in \mathbb{N}$).



Proof of the result: Since $$|x_k| \leq \left(\sum_{j=1}^{\infty} |x_j|^p \right)^{\frac{1}{p}}=\|x\|_p$$ for all $k \in \mathbb{N}$, $p \geq 1$, we have $\|x\|_{\infty} \leq \|x\|_p$. Thus, in particular $$\|x\|_{\infty} \leq \liminf_{p \to \infty} \|x\|_p. \tag{2}$$



On the other hand, we know that $$\|x\|_p = \left( \sum_{j=1}^{\infty} |x_j|^{p-q} \cdot |x_j|^q \right)^{\frac{1}{p}} \leq \|x\|_{\infty}^{\frac{p-q}{p}} \cdot \left( \sum_{j=1}^{\infty} |x_j|^q \right)^{\frac{1}{p}} = \|x\|_{\infty}^{1-\frac{q}{p}} \cdot \|x\|_q^{\frac{q}{p}}$$ for all $p>q$. Since $\|x\|_q < \infty$, letting $p \to \infty$ gives

$$ \limsup_{p \to \infty} \|x\|_p \leq \limsup_{p \to \infty} \left( \|x\|_{\infty}^{1-\frac{q}{p}} \cdot \|x\|_q^{\frac{q}{p}}\right) = \|x\|_{\infty} \cdot 1. \tag{3}$$




Hence, $$\limsup_{p \to \infty} \|x\|_p \leq \|x\|_{\infty} \leq \liminf_{p \to \infty} \|x\|_p.$$ This shows that $\lim_{p \to \infty} \|x\|_p$ exists and equals $\|x\|_{\infty}$.
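A quick illustration with $x_n = 2^{-n}$ (so $x \in \ell^1$): here $\|x\|_p = \left(\sum_{n=1}^{\infty} 2^{-np}\right)^{1/p} = (2^p-1)^{-1/p}$, and since $(2^p-1)^{1/p} \to 2$ as $p \to \infty$, indeed $\|x\|_p \to \frac{1}{2} = \|x\|_{\infty}$.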


Wednesday, July 12, 2017

algebra precalculus - To find the total distance between two points A and B with just speed of train and difference in distance between two trains?



Two trains approach each other at 25 km/hr and 30 km/hr respectively from two points A and B. The second train travels 20 km more than the first. What is the distance between A and B?




My approach:



Since $\text{Distance} = \text{Speed} \cdot \text{Time}$
[I just added the time it takes to cover the extra $20$ km travelled by the second train ($30$ km/hr) to the first train ($25$ km/hr), to treat the distance as constant]



(First Train)



For $20$ km at $25$ km/hr, the time taken would be




time $= \frac{20}{25} = \frac45$




So the first train would take $4/5$ hours more to cover the distance $d$




$d = 25 \cdot (t + 4/5 )$ ---> (1)



$d = 30 \cdot t$ ---> (2)




Now since distance is constant and speed is inversely proportional to time,



Ratio of speeds $= \frac{25}{30} = \frac56$



Ratio of times $ = \frac{t+4/5}{t} = \frac{\frac{5t+4}{5}}{t} = \frac{5t+4}{5t}$



So



$\frac56= \frac{5t+4}{5t}$




Since it is inversely proportional



$5(5t+4)=6(5t)$



$25t+20=30t$



$25t-30t=-20$



$-5t=-20$




$t= 4$



So applying $t=4$ in (2) gives
$d=30 \cdot 4 = 120$ km, but it's wrong.



The correct answer is $220$ km



I don't understand! help


Answer



I think the best way to describe all the data is in a table like this




$\begin{array}{cccc}
& \text{V} & \text{T} & \text{D}\\
\text{Train 1} & 25\,\left[\frac{km}{h}\right]\\
\text{Train 2} & 30\,\left[\frac{km}{h}\right]
\end{array}$



Denote the distance the first train traveled until the meeting
by $x$. Therefore the second train traveled $x+20$. Let's add this
to the table:




$\begin{array}{cccc}
& \text{V} & \text{T} & \text{D}\\
\text{Train 1} & 25\,\left[\frac{km}{h}\right] & & x\,[km]\\
\text{Train 2} & 30\,\left[\frac{km}{h}\right] & & x+20\,[km]
\end{array}$



Now we can complete our table using that $T=\frac{D}{V}$



$\begin{array}{cccc}
 & \text{V} & \text{T} & \text{D}\\
\text{Train 1} & 25\,\left[\frac{km}{h}\right] & \frac{x}{25}\,\left[h\right] & x\,[km]\\
\text{Train 2} & 30\,\left[\frac{km}{h}\right] & \frac{x+20}{30}\,\left[h\right] & x+20\,[km]
\end{array}$



Assuming both trains left the two points at the same time, we get
$T_{1}=T_{2}$, where $T_{1},T_{2}$ are the travel times of the trains
until the meeting. So
$$
\frac{x}{25}=\frac{x+20}{30}\qquad\Rightarrow\qquad x=100\,\left[km\right]
$$
Therefore the distance between $A$ and $B$ is $D_{1}+D_{2}$, where $D_{1},D_{2}$
are the distances each train traveled until the meeting. So the distance
is
$$
\left(100\right)+\left(100+20\right)=220\,\left[km\right]
$$
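As a consistency check: $T_{1} = \frac{100}{25} = 4\,[h] = \frac{120}{30} = T_{2}$, and in those $4$ hours the two trains jointly close the gap at $25+30=55$ km/h, giving $55 \cdot 4 = 220\,[km]$ as well.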


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...