Monday, February 29, 2016

calculus - Find limit of $\lim\limits_{x \to\infty}{\left(\frac{(x!)^2}{(2x)!}\right)}$



I'm practising solving some limits and, currently, I'm trying to solve $\lim\limits_{x\to\infty}{\left({{(x!)^2}\over{(2x)!}}\right)}$.



What I have done:




  • I have attempted to simplify the fraction until I reach an easier one to solve; however, I'm currently stuck at the following:




$$
\lim_{x\to\infty}{\left(\frac{(x!)^2}{(2x)!}\right)}=
\lim_{x\to\infty}{\left(\frac{\left(\prod_{i=1}^{x}i\right)^2}{\prod_{i=1}^{2x}i}\right)}=
\lim_{x\to\infty}{\left(\frac{\prod_{i=1}^{x}i\cdot\prod_{i=1}^{x}i}{\prod_{i=1}^{x}i\cdot\prod_{i=x+1}^{2x}i}\right)}=
\lim_{x\to\infty}{\left(\frac{\prod_{i=1}^{x}i}{\prod_{i=x+1}^{2x}i}\right)}.
$$




  • Instinctively, I can see that the limit is equal to $0$, since the numerator is always less than the denominator and thus approaches infinity more slowly as $x\to\infty$.




Question:




  • How can I continue solving the above limit without resorting to instinct to determine that it equals $0$?

  • If the above solution can't go any further, is there a better way to approach this problem?


Answer



Continuing from what you have mentioned,

$$0 \le \lim_{x\to\infty}{\left({
{\prod_{i=1}^{x}i}\over{
{\prod_{i=x+1}^{2x}i}}
}\right)} = \lim_{x\to\infty}\prod_{i=1}^{x}\frac{i}{i+x} \le \lim_{x\to\infty}\prod_{i=1}^{x}\frac{x}{x+x} = \lim_{x\to\infty}\frac{1}{2^x}=0.$$
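The squeeze in the answer is easy to check numerically; here is a quick sketch in Python (the helper name `ratio` is mine, not from the post):

```python
from math import factorial

def ratio(x):
    """(x!)^2 / (2x)! -- the quantity whose limit we want."""
    return factorial(x) ** 2 / factorial(2 * x)

# The answer's bound: (x!)^2/(2x)! <= 1/2^x for every x >= 1,
# so the ratio is squeezed to 0.
for x in range(1, 30):
    assert ratio(x) <= 1 / 2 ** x
assert ratio(20) < 1e-10
```

Note that $(x!)^2/(2x)! = 1/\binom{2x}{x}$, so the decay is in fact much faster than $2^{-x}$.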


Sunday, February 28, 2016

real analysis - Provided $f$ is continuous at $x_0$ and $f(x+y) = f(x) + f(y)$, prove $f$ is continuous everywhere.

My attempt...






By definition, whenever $|x- x_0| < \delta$ we have $|f(x) - f(x_0)| < \epsilon$. Observing that



\begin{align}
|f(x) - f(y)| &= |f(x - x_0 + x_0) - f(y)| = |f(x-x_0) + f(x_0) - f(y)| \\
&\leq \epsilon + |f(y) - f(x_0)| = \epsilon + |f(y-x_0)|\ldots
\end{align}






Here I need to choose a $\delta$ that can depend on $\epsilon$ and $y$ such that whenever $0<|x-y|< \delta$, the above expression is bounded by any given $\epsilon$.



I'm also, in general, having trouble understanding this concept of continuity on an interval. I believe the structure of the definition is: for any $\epsilon> 0$ and any number $y$ in the interval, there exists a $\delta$ that depends on $\epsilon$ and $y$ such that for all $x$ in the interval with $|x - y | < \delta$, we have $|f(x) - f(y)| < \epsilon$.




This definition makes me tempted to just choose $y$ to be in the same $\delta$-neighborhood as $x$ in the given statement, but that restricts continuity to a small interval.






Edit: This question assumes no knowledge of Lebesgue measure theory.

trigonometry - Sine series: angle multipliers add to 1


It is known that in a sine series with angles in arithmetic progression (I refer to this question):



$\sum_{k=0}^{n-1}\sin (a+k \cdot d)=\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \sin\biggl( \frac{2 a + (n-1) \cdot d}{2}\biggr)$


What if $k$ does not go from $0$ to $n-1$, but its elements are strictly positive rational numbers,


with $0 < k_i < 1$ and $\sum_{i=1}^{n} k_i=1$
and $k$ is monotonically increasing


Is there a way to simplify:


$\sum_{i=1}^{n} \sin (a+k_i \cdot d)$


in a way analogous to the first formula? Maybe using the property that k adds to $1$ and using $\sin(\pi/2)=1$ ??


e.g.:
$k_i = (0.067,0.133,0.200,0.267,0.333)$ (5 increasing elements between $0$ & $1$ which sum to 1)
$a=90$
$d=40$


Sum = $\sin(90+0.067\cdot40)+\sin(90+0.133\cdot40)+\sin(90+0.200\cdot40)+\sin(90+0.267\cdot40)+\sin(90+0.333\cdot40)$


Answer



There is no way to give this a general form. The definition of $k_i$ is too general. I have been trying to come up with a function that would give $k_i$ while also fitting your terms, but I am having difficulty. This would need specific values of $k_i$, and each term of the sum would need to be calculated individually. I did try to change the sum into something else (work below), but it seems that this is only more complicated.

$$\sum_{i=1}^n \sin (a + k_i d)$$

We can separate the sum using the trigonometric addition formula
$$\sin (\alpha + \beta) = \sin\alpha \cos\beta + \sin\beta \cos\alpha$$
which gives
$$\sum_{i=1}^n [\sin (a) \cos (k_i d) + \sin (k_i d) \cos (a)] = \sin a \sum_{i=1}^n \cos(k_i d) + \cos a \sum_{i=1}^n\sin(k_i d).$$

Past this point, there is no general form. You can attempt to use the multiple-angle formulas, but this only seems to complicate things. In conclusion, I believe your best option would be to just use the original sum and calculate it normally. Unless you have a more rigorous definition for $k_i$, there is no general form that will apply.



calculus - Spot mistake in finding $\lim\limits_{x\to1}\left(\frac{x}{x-1} - \frac{1}{\log(x)} \right)$



This is the limit I'm trying to solve: $\lim \limits_{x\to1}\left(\frac
x {x-1} - \frac1 {\log(x)}
\right)$




I thought: let's define $x=k+1$, so that $k\to0$ as $x\to1$.



Then it becomes:
$$\lim \limits_{k\to0}\left(\frac
{k+1} {k} - \frac1 {\log(k+1)}
\right)$$
and then,
$$\lim \limits_{k\to0}\left(\frac
{k+1} {k} - \frac1 {\frac {\log(k+1)\times k}k}

\right)=\lim \limits_{k\to0}\left(\frac
{k+1} {k} - \frac1 {k}
\right).$$
Which results in $\frac k k$ , that should be 1, but wolfram says it's $\frac 1 2$...



Did I do something illegal?


Answer



Yes, the illegal part is this step:
$$\lim_{k \to 0}\frac1{\frac{\log(k + 1)}k}\frac1k = \lim_{k \to 0}\frac1k$$




I see that you applied the known limit
$$\lim_{t \to 0}\frac{\log(1 + t)}t = 1$$
but the fact is that
$$\lim_{x \to \alpha}f(x)g(x) = \lim_{x \to \alpha}f(x)\times\lim_{x \to \alpha}g(x)$$
is only valid when both limits are finite. In your case you're left with $\lim\limits_{k \to 0}\frac1k$, which is not only not finite, but does not exist at all.






If you are looking for a way to evaluate your limit, I'd suggest Maclaurin (that is, a Taylor expansion around $x = 0$), which is the simplest and most elegant way. But since you said that you can't use Taylor yet, I fear your only possibility is going with L'Hôpital.
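A short numeric sanity check (my own sketch; `f` is my name for the expression in the variable $k$ from the question):

```python
from math import log1p  # log1p(k) = log(1 + k), accurate for small k

def f(k):
    """The expression (k+1)/k - 1/log(k+1) from the substitution x = k + 1."""
    return (k + 1) / k - 1 / log1p(k)

# Approaching k = 0 from both sides gives values near 1/2, matching Wolfram.
assert abs(f(1e-6) - 0.5) < 1e-3
assert abs(f(-1e-6) - 0.5) < 1e-3
```

This makes it plausible that the limit is $\frac12$, and that the step replacing $\log(k+1)$ by $k$ really did change the answer.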


calculus - Convergence of $\sum_{n=1}^{\infty} \log\left(\frac{(2n)^2}{(2n+1)(2n-1)}\right)$


I have to show that the series $\sum_{n=1}^{\infty} \log\left(\frac{(2n)^2}{(2n+1)(2n-1)}\right)$ converges.


I have tried the Ratio Test and the Cauchy Condensation Test, but they didn't work for me. I tried using the Comparison Test but I couldn't find an appropriate inequality for it. Could you please give me some hints? Any help will be appreciated.



Answer



$0<\log (1+x)<x$ for $x>0 . $ Therefore $0<\log (\frac{4 n^2}{4 n^2-1})=\log (1+\frac{1}{4 n^2-1})< \frac{1}{4 n^2-1} < \frac{1}{2 n^2}. $ The sum $\sum (1/2 n^2)$ converges by Cauchy Condensation. Your sum therefore converges by Comparison.


(Where did that term $\frac{1}{2 n^2}$ come from in the inequality? From the idea, with $4 n^2=x$, that $\frac{1}{x-1}<\frac{2}{x}$ if $x$ is big enough.)
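As a numeric cross-check (my own addition, not part of the original answer): the sum is the logarithm of the Wallis product $\prod_{n\ge1}\frac{(2n)^2}{(2n-1)(2n+1)}=\frac{\pi}{2}$, so the partial sums should approach $\log\frac{\pi}{2}$:

```python
from math import log, pi

# Partial sum of log((2n)^2 / ((2n+1)(2n-1))); the tail is O(1/N),
# so 200000 terms give roughly six correct digits.
s = 0.0
for n in range(1, 200_001):
    s += log((2 * n) ** 2 / ((2 * n + 1) * (2 * n - 1)))

assert abs(s - log(pi / 2)) < 1e-5
```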


Saturday, February 27, 2016

algebra precalculus - Proof for formula for sum of sequence $1+2+3+\ldots+n$?



Apparently $1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2$.



How? What's the proof? Or maybe it is self apparent just looking at the above?



PS: This problem is known as "The sum of the first $n$ positive integers".


Answer




Let $$S = 1 + 2 + \ldots + (n-1) + n.$$ Write it backwards: $$S = n + (n-1) + \ldots + 2 + 1.$$
Add the two equations, term by term; each term is $n+1,$ so
$$2S = (n+1) + (n+1) + \ldots + (n+1) = n(n+1).$$
Divide by 2: $$S = \frac{n(n+1)}{2}.$$
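The pairing argument can be spot-checked in a couple of lines (a sketch of my own, not part of the original answer):

```python
# Gauss's trick: S written forwards plus S written backwards is
# n copies of (n + 1).
for n in range(1, 200):
    S = sum(range(1, n + 1))       # 1 + 2 + ... + n, computed directly
    assert 2 * S == n * (n + 1)    # the "write it backwards and add" identity
    assert S == n * (n + 1) // 2
```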


linear algebra - Let $A,B$ be $m \times n$ and $n \times m$ matrices, respectively. Prove that if $m > n$, $AB$ is not invertible


We haven't done anything about rank or dimensions or linear dependence / basis or determinants. Possible related facts :



  1. A matrix is invertible iff it is bijective as a linear transformation.




  2. An invertible matrix is row-equivalent to the identity matrix.





  3. A matrix has a right inverse iff it has a left inverse.



Also, invertibility is only defined for square matrices.


Answer



Since $A$ is an $m\times n$ matrix and $B$ is an $n\times m$ matrix, the product $AB$ is an $m\times m$ matrix. Suppose $m>n$. Then the operator associated to left multiplication by $B$ is not injective, because $B$ has more columns than rows, so its kernel is always nontrivial (i.e. there are more column vectors than there are entries in those column vectors, so they must be linearly dependent). Said another way: the linear operator $T:\Bbb R^m\to\Bbb R^n$ with $T(v)=Bv$ is not injective because $m>n$, so the domain has higher dimension than the codomain. So there exist vectors $v,w\in\Bbb R^m$ such that $Bv=Bw$ but $v\neq w$.


Spoiler:



Thus, $$Bv=Bw\implies A(Bv)=A(Bw)\implies (AB)v=(AB)w$$ but $v\neq w$. Hence, the operator associated with left multiplication by $AB$ is not injective.
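A concrete instance of this argument, as a sketch (the matrices are my own example; a determinant is used only to confirm non-invertibility):

```python
# m = 3 > n = 2: A is 3x2, B is 2x3, so AB is 3x3.
A = [[1, 0], [0, 1], [1, 1]]
B = [[1, 2, 3], [4, 5, 6]]

# Plain matrix product, no libraries needed.
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(3)]
      for i in range(3)]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

assert det3(AB) == 0  # AB is singular, exactly as the kernel argument predicts
```

Any choice of $A$ and $B$ with $m>n$ works, since the nontrivial kernel of $B$ forces $AB$ to be non-injective.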



How does analytic continuation let us extend functions to the complex plane?



I'm trying to understand analytic continuation and I noticed on wolfram that it





allows the natural extension of the definition trigonometric,
exponential, logarithmic, power, and hyperbolic functions from the
real line $\mathbb{R}$ to the entire complex plane $\mathbb{C}$




So how does it extend, say, $f(x) = \sin(x)$, $x \in \mathbb{R}$ to the complex plane? What are the steps that have to be taken to extend this function (and others) to the complex plane?


Answer



More interesting is the case that Riemann worked out:




$$\Gamma(s)\zeta(s)=\int_0^\infty\frac{x^s}{e^x-1}\frac{dx}{x},$$
where $\Gamma(s)$ is the gamma function, $\zeta(s)$ is the zeta function and $s\in\mathbb{C}.$ To extend this formula to $\mathbb{C},$ Riemann considers the path integral, on the complex plane,
$$\oint_C\frac{(-z)^s}{e^z-1}\frac{dz}{z},$$
where the path $C$ goes from $\infty$ to the origin $O$ above the positive real axis, circles $O$ counterclockwise along a circumference of radius $\delta$, say, and returns to $\infty$ along the bottom of the positive real axis. The important thing, for the evaluation of the above integral, is that we may split it into three integrals, namely



$$\biggl(\int_\infty^\delta+\int_{|z|=\delta}+\int_\delta^\infty\biggr)\frac{(-z)^s}{e^z-1}\frac{dz}{z},$$
recalling that $(-z)^s=e^{s\log(-z)}$, and $\log(-z)=\log|z|+ i\, \text{arg}(-z).$ So, in the first integral, when $-z$ lies on the negative real axis, we take $\text{arg}(-z)=-\pi;$ on the second one $-z=-\delta,$ and as $-z$ progresses counterclockwise about $O$, $\text{arg}(-z)$ goes from $-\pi$ to $\pi.$ Finally, in the last integral $\text{arg}(-z)=\pi,$ therefore the first and third integrals do not cancel. The second integral vanishes as $\delta\to 0.$ The rest is purely technical and leads to the analytic continuation of Riemann's $\zeta(s)$ function everywhere except at $s=1$, where it has a simple pole with residue $1$.


discrete mathematics - Using Direct Proof. $1+2+3+\ldots+n = \frac{n(n + 1)}{2}$





I need help proving this statement. Any help would be great!


Answer



Here is an approach.



$$ s_n =1+2+3+\dots+(n-1)+n \\
s_n =n+(n-1)+(n-2)+\dots+1 . $$



Adding the above gives



$$2s_n = (1+n)+(2+(n-1))+(3+(n-2))+\dots+(1+n) $$




$$ =(1+n)+(1+n)+\dots+(1+n) $$



The above is nothing but adding $(1+n)$ a total of $n$ times, and the result follows:



$$ \implies s_n = \frac{n(n+1)}{2}. $$


summation - Why is $\sum_{j = 1}^{n} \dfrac{1}{n} = 1$




I encountered $\sum_{j = 1}^{n} \dfrac{1}{n} = 1$ in my textbook, but I do not understand why this summation equals $1$. My textbook provides no reasoning as to why this is the case.



My understanding is that, since there is nothing in $\dfrac{1}{n}$ that depends on $j$, it seems that we are just summing $\dfrac{1}{n}$ to itself up to $n$. However, I'm not sure how to interpret this and how it equals a value of $1$.



I apologise if there is already a question on this, but my searches have encountered nothing that addresses this specific summation. If I am mistaken, I would appreciate it if someone could please redirect me.



I would greatly appreciate it if people could please take the time to explain the reasoning behind this.


Answer



Note that
$$

\sum_{j=1}^n \frac 1n = \overbrace{\frac 1n + \frac 1n + \cdots + \frac 1n}^{n\text{ times}} = n \cdot \frac 1n = 1
$$


Extended Euclidean algorithm with negative numbers



Does the Extended Euclidean Algorithm work for negative numbers?
If I strip out the signs perhaps it will return the correct GCD, but what exactly should I do if I also want $ax + by = GCD(|a|,|b|)$ to hold (I guess it won't hold anymore as soon as I've stripped out the signs of $a$ and $b$)?



== UPDATE ==




It couldn't be that simple.



If $a$ was negative, then after stripping out signs EEA returns such $x$ and $y$ that $(-a)*x + b*y = GCD(|a|,|b|)$. In this case $a*(-x) + b*y = GCD(|a|,|b|)$ also holds.



The same for $b$.



If both $a$ and $b$ were negative, then $(-a)*x + (-b)*y = GCD(|a|,|b|)$ holds and, well $a*(-x) + b*(-y) = GCD(|a|,|b|)$ ought to hold.



Am I right?
Should I just negate $x$ if I have negated $a$ and negate $y$ if I have negated $b$?



Answer



Well, if you strip the signs of $a$ and $b$, and instead run the Euclidean algorithm for $|a|$ and $|b|$, then if your result is $|a|x+|b|y=\gcd(|a|,|b|)$, you can still get a solution of what you want, because
$$a(\text{sign}(a)\cdot x)+b(\text{sign}(b)\cdot y)=\gcd(|a|,|b|).$$
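The sign-flipping rule from the update can be sketched as follows (the function names are mine):

```python
from math import gcd

def ext_gcd(a, b):
    """Extended Euclid on nonnegative inputs: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def ext_gcd_signed(a, b):
    """Strip the signs, run the EEA, then negate x and/or y as needed."""
    g, x, y = ext_gcd(abs(a), abs(b))
    if a < 0:
        x = -x
    if b < 0:
        y = -y
    return g, x, y

# All four sign combinations satisfy a*x + b*y = GCD(|a|, |b|).
for a, b in [(240, 46), (-240, 46), (240, -46), (-240, -46)]:
    g, x, y = ext_gcd_signed(a, b)
    assert g == gcd(abs(a), abs(b)) and a * x + b * y == g
```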


modular arithmetic - Solving cubic congruence


I wasn't actually taught about cubic congruence equations and was managing with quadratic congruences until I was hit with this: $$x^3 \equiv 53 \pmod{120}$$ Effort: I tried deconstructing it into a system of congruences based on the prime decomposition $120 = 3\cdot 5\cdot 8$, but ended up using the Chinese remainder theorem just to recombine it into $53$ (which was quite silly).


I also tried putting it into $x^3-53 \equiv 0 \pmod{120}$ and using the special algebraic expansions $a^3-b^3$ or $(a-b)^3$, but got stuck. A few useful hints would be appreciated. Thank you.


Answer



Using CRT:


  • $x^3\equiv_3 53\equiv_3 2\Rightarrow x\equiv_3 2$

  • $x^3\equiv_5 53\equiv_5 3\Rightarrow x\equiv_5 2$

  • $x^3\equiv_8 53\equiv_8 5\Rightarrow x\equiv_8 5$

This implies that $x\equiv_{15}2$ and $x\equiv_8 5$, which is the same as saying $x-2\equiv_{15}0$ and $x-2\equiv_8 3$. The first number $8k+3$ that is divisible by $15$ is $75$, so $$x\equiv_{120}77$$
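Brute force over one full period confirms both the three prime-power steps and the final recombination (my own check, not part of the answer):

```python
# Each small congruence has a unique cube root, and the combined answer is 77.
assert [x for x in range(3) if pow(x, 3, 3) == 53 % 3] == [2]
assert [x for x in range(5) if pow(x, 3, 5) == 53 % 5] == [2]
assert [x for x in range(8) if pow(x, 3, 8) == 53 % 8] == [5]

solutions = [x for x in range(120) if pow(x, 3, 120) == 53]
assert solutions == [77]
```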


Friday, February 26, 2016

elementary set theory - Such thing as inverse/undo "search" or "filter" operator


For common operators like $+$ the inverse is $-$, and for $\times$ it is $\div$. I'm wondering if a "search" or "filter" on a set can have an inverse.


$$B = \{a \in \mathbb{Z} \mid a < 10 \land a > 5\}$$


That's a super simple search but it demonstrates the point. The inverse of a search is something like a "non-search" perhaps, but that doesn't quite make sense. A "forget" maybe. The inverse doesn't seem like search for everything "except", such as:


$$B^{-1} = \{a \in \mathbb{Z} \mid a \geq 10 \lor a \leq 5\}$$


If you are trying to "undo" a search, it seems like you just don't want to perform any search. Wondering what your thoughts are on this type of operation. The operation might look like:


$$\mathbb{Z} \circ \{a \in \mathbb{Z} \mid a < 10 \land a > 5\} = \{6, 7, 8, 9\}$$



I don't know what the inverse would look like, maybe:


$$\{a \in \mathbb{Z} \mid a < 10 \land a > 5\} \circ \{a \in \mathbb{Z} \mid a < 10 \land a > 5\}^{-1} = \mathbb{Z}$$


Or perhaps in this case, there just isn't an inverse :/. If not, wondering why certain things can't have an inverse.


Answer



Your concept of "search" or "filter" is somewhat like intersection of sets. That is, if $\;A\;$ is a search space and $\;B\;$ represents a condition or criterion, then $\;C:=A\cap B\;$ represents the "search" or "filter" result. However, this clearly does not have an inverse. That is, you can't recover $\;A\;$ from $\;C,\;$ the intersection, even if you know $\;B\;.$ It would be a lot easier to "keep a backup" of $\;A\;$ instead.
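A tiny illustration of why the intersection destroys information (my own example):

```python
B = set(range(6, 10))      # the condition 5 < a < 10, i.e. {6, 7, 8, 9}

# Two different search spaces produce the exact same filtered result,
# so the result C = A ∩ B cannot tell you which A you started from.
A1 = set(range(0, 20))
A2 = set(range(6, 15))
assert A1 & B == A2 & B == {6, 7, 8, 9}
assert A1 != A2
```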


elementary set theory - How to Prove $\mathbb R\times \mathbb R \sim \mathbb R$?

How to prove $\mathbb R\times \mathbb R \sim \mathbb R$?




I know you have to split the problem up into two claims, one for each direction, to prove that there is a bijection, but I don't know how to go much further than that...

Thursday, February 25, 2016

real analysis - If $(f_n)\to f$ uniformly and $f_n$ is uniformly continuous for all $n$ then $f$ is uniformly continuous




Show whether it is true or false: if $(f_n)$ converges uniformly to $f$, and $f_n$ is uniformly continuous for all $n$, then $f$ is uniformly continuous



I think it is true. My attempt to prove it: if $(f_n)\to f$ uniformly then we can write


$$(\forall\varepsilon>0)(\exists N\in\Bbb N)(\forall x\in\mathcal D):|f_n(x)-f(x)|<\varepsilon,\quad\forall n>N\tag{1}$$


and because all $f_n$ are uniformly continuous


$$(\forall\varepsilon>0)(\exists\delta>0)(\forall x,y\in\mathcal D):|x-y|<\delta\implies|f_n(x)-f_n(y)|<\varepsilon,\quad\forall n\in\Bbb N\tag{2}$$


and I want to prove that both conditions implies


$$(\forall\varepsilon>0)(\exists\delta>0)(\forall x,y\in\mathcal D):|x-y|<\delta\implies|f(x)-f(y)|<\varepsilon\tag{3}$$


where $\mathcal D$ is the domain of all of them (because I have the previous knowledge that uniform convergence of continuous functions implies that the limit function is continuous).



Then, toward $(3)$, I can write


$$|f(x)-f(y)|=|f(x)-f_m(x)+f_m(x)-f(y)|\le |f(x)-f_m(x)|+|f_m(x)-f(y)|$$


Then I will use some $m$ that satisfies $(1)$ for $\frac{\varepsilon}{3}$, and from $(2)$ I will use the $\delta$ that works for the same $\frac{\varepsilon}{3}$. If $|f(y)-f_m(y)|<\frac{\varepsilon}{3}$ and $|x-y|<\delta$, then

$$\begin{align}|f(x)-f(y)|&\le|f(x)-f_m(x)|+|f_m(x)-f_m(y)|+|f_m(y)-f(y)|\\&<\frac{\varepsilon}{3}+\frac{\varepsilon}{3}+\frac{\varepsilon}{3}=\varepsilon\end{align}$$


This proves that there exists a $\delta$ such that $|f(x)-f(y)|<\varepsilon$ for any $\varepsilon$ under the required conditions. Now, can you check my proof, telling me whether it is right or whether it lacks something? Thank you in advance.


Answer



Choose a $\delta$ which works in equation $(2)$ for $\epsilon_0=\frac{\epsilon}{3}>0$. Then, by $(1)$ (uniform convergence) we have an $N$ such that $|f_m(x)-f(x)|<\epsilon_0$ and $|f_m(y)-f(y)|<\epsilon_0$ hold for all $m>N$. Now we apply the manipulation in clark's comment to obtain:


$|f(x)-f(y)|\leq |f(x)-f_m(x)|+|f_m(x)-f_m(y)|+|f_m(y)-f(y)|$


From here, we have


$|f(x)-f_m(x)|<\epsilon_0=\dfrac{\epsilon}{3}$ by uniform convergence (at $x$)



$|f_m(x)-f_m(y)|<\epsilon_0=\dfrac{\epsilon}{3}$ by uniform continuity (of $f_m$)


$|f_m(y)-f(y)|<\epsilon_0=\dfrac{\epsilon}{3}$ by uniform convergence (at $y$)


Thus we now know that


$|f(x)-f_m(x)|+|f_m(x)-f_m(y)|+|f_m(y)-f(y)|<\dfrac{\epsilon}{3}+\dfrac{\epsilon}{3}+\dfrac{\epsilon}{3}=\epsilon$


as required.


Note how carefully I selected my $\delta$; feel free to ask why I did things this way if any of what I did seems unnecessary to you.


calculus - Evaluating $\int P(\sin x, \cos x) \text{d}x$



Suppose $\displaystyle P(x,y)$ a polynomial in the variables $x,y$.



For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$.




Is there a general method which allows us to evaluate the indefinite integral




$$ \int P(\sin x, \cos x) \text{d} x$$




What about the case when $\displaystyle P(x,y)$ is a rational function (i.e. a ratio of two polynomials)?



Example of a rational function: $\displaystyle \frac{x^2y + y^3}{x+y}$.







This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.



and here: List of Generalizations of Common Questions.


Answer



There are several general approaches to integrals that involve expressions with sines and cosines, polynomial expressions being the simplest ones.



Weierstrass Substitution




A method that always works is Weierstrass substitution, which will turn such an integral into an integral of rational functions, which in turn can always be solved, at least in principle, by the method of partial fractions. This works even for rational functions of sine and cosine, as well as functions that involve the other trigonometric functions.



Weierstrass substitution replaces sines and cosines (and by extension, tangents, cotangents, secants, and cosecants) by rational functions of a new variable. The identities begin with the trigonometric substitution $t = \tan\frac{x}{2}$, with $-\pi\lt x\lt \pi$, which yields
$$\begin{align*}
\sin x &= \frac{2t}{1+t^2}\\
\cos x &= \frac{1-t^2}{1+t^2}\\
dx &= \frac{2\,dt}{1+t^2}.
\end{align*}$$
For example, if we have

$$\int \frac{\sin x-\cos x}{\sin x+\cos x}\,dx$$
using the substitution above we obtain:
$$\begin{align*}
\int\frac{\sin x-\cos x}{\sin x+\cos x}\,dx &= \int\left(\frac{\quad\frac{2t}{1+t^2} - \frac{1-t^2}{1+t^2}\quad}{\frac{2t}{1+t^2} + \frac{1-t^2}{1+t^2}}\right)\left(\frac{2}{1+t^2}\right)\,dt\\
&= \int\left(\frac{\quad\frac{2t-1+t^2}{1+t^2}\quad}{\frac{1+2t-t^2}{1+t^2}}\right)
\left(\frac{2}{1+t^2}\right)\,dt\\
&= \int\left(\frac{2t-1+t^2}{2t+1-t^2}\right)\left(\frac{2}{1+t^2}\right)\,dt\\
&= 2\int\frac{2t-1+t^2}{(1+t^2)(2t+1-t^2)}\,dt
\end{align*}$$
which can then be integrated by the method of partial fractions.




Substitutions and Reduction formulas



However, there are usually faster methods, particularly for polynomial expressions. By breaking up the integral into a sum of integrals corresponding to the monomials, the problem reduces to solving integrals of the form
$$\int \left(\sin x\right)^n \left(\cos x\right)^m\,dx$$
with $n$ and $m$ nonnegative integers. The standard methods then are:




  1. If $n$ is odd, then "reserve" one sine, and transform the others into cosines by using the identity $\sin^2 x = 1-\cos^2x$. Then do the change of variable $u=\cos x$ to transform the integral into the integral of a polynomial. For example,
    $$\int \left(\sin x\right)^5\left(\cos x\right)^2\,dx,$$

    then take $(\sin x)^5$, and write it as
    $$\sin x(\sin x)^4 = \sin x(\sin^2x)^2 = \sin x(1-\cos^2 x)^2.$$
    Then setting $u=\cos x$ and $du = -\sin x\,dx$, we get
    $$\int\left(\sin x\right)^5\left(\cos x\right)^2\,dx = \int \sin x\left(1-\cos^2x\right)^2\left(\cos x\right)^2\,dx = -\int (1-u^2)^2u^2\,du,$$
    which can be solved easily.


  2. If $m$ is odd, then do the same trick by "reserving" one cosine and using the substitution $u=\sin x$. For example,
    $$\int \sin^2x\cos^3x\,dx = \int \sin^2x(\cos^2x)\cos x\,dx = \int(\sin^2x)(1-\sin^2x)\cos x\,dx$$
    and then setting $u=\sin x$, $du = \cos x\,dx$, we get
    $$\int \sin^2x\cos^3x\,dx = \int u^2(1-u^2)\,du,$$
    which can be solved easily again.



  3. If $n$ and $m$ are both even, then either replace all the sines with cosines or vice versa, using $\sin^2 x = 1 - \cos^2x$ or $\cos^2x = 1-\sin^2 x$, and expand. This will leave integrals of the form
    $$\int \sin^n x\,dx\qquad\text{or}\quad \int \cos^m x\,dx$$
    with $n$ and $m$ positive and even. In that situation, one can use the reduction formulas, which can be obtained by using integration by parts:
    $$\begin{align*}
    \int \sin^n x\,dx &= - \frac{1}{n}\sin^{n-1} x\cos x + \frac{n-1}{n}\int \sin^{n-2}x\,dx,\\
    \int \cos^m x\,dx &= \frac{1}{m}\cos^{m-1} x\sin x + \frac{m-1}{m}\int \cos^{m-2}x\,dx.
    \end{align*}$$
    By repeated application of these formulas, one eventually ends up with an integral of the form $\int \,dx$ which can be solved directly.





The process can be shortened if you happen to spot or know some trigonometric identities; for example, the power reduction formulas allow you to replace powers of sines or cosines by expressions of multiple angles, e.g.,
$$\sin^4\theta = \frac{3-4\cos(2\theta)+\cos(4\theta)}{8}$$
could replace a single integral with three integrals that can be done fairly easily via substitution.
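That power-reduction identity is easy to spot-check numerically (a quick sketch of mine, not from the original answer):

```python
from math import sin, cos

# sin^4(t) = (3 - 4*cos(2t) + cos(4t)) / 8, sampled at many points.
for i in range(200):
    t = i * 0.07 - 7.0
    lhs = sin(t) ** 4
    rhs = (3 - 4 * cos(2 * t) + cos(4 * t)) / 8
    assert abs(lhs - rhs) < 1e-12
```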



Other methods



If you are comfortable enough integrating functions with a complex variable in it, then the method described by Qiaochu will transform any integral that involves sines and cosines in any way into an integral that involves exponential functions instead.


probability - Optimal strategy for rolling die consecutively without getting a "1"

Consider rolling a 6-sided die repeatedly and trying to tally as high a score as possible (the sum of all rolls). If you roll a 1, your turn is over and your score is 0. So the expected value of each successful roll is 4. According to Knizia (1999), an approximately optimal strategy is to roll until you reach a score of 20 and then stop. (S)he states:





"...we know the true odds of such a bet are 1 to 5. If you ask yourself how much you should risk, you need to know how much there is to gain.....If you put 20 points at stake, this brings the odds to 4 to 20, that is 1 to 5, and makes a fair game....Whenever your accumulated points are less than 20, you should continue throwing, because the odds are in your favor."




I don't understand this: isn't rolling to 20 essentially rolling about five times without getting a 1? The probability of that is only about $(5/6)^5 \approx 40\%$. So wouldn't the odds be about 60% that you roll a 1 and score 0 if you always try to roll up to 20?
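One way to reconcile the two viewpoints: Knizia's argument is about the *marginal* decision with a stake already banked, not about the chance of surviving five rolls in a row. A small sketch of that computation (the function name is mine):

```python
from fractions import Fraction

def expected_gain(stake):
    """Expected change in banked points from one more roll with `stake` at risk."""
    # With probability 5/6 you survive and add 4 on average (mean of 2..6);
    # with probability 1/6 you roll a 1 and lose the whole stake.
    return Fraction(5, 6) * 4 - Fraction(1, 6) * stake

assert expected_gain(19) > 0   # below 20 points at stake: rolling is favorable
assert expected_gain(20) == 0  # exactly 20: a fair bet, the break-even point
assert expected_gain(21) < 0   # above 20: stopping is better
```

So the roughly 40% chance of reaching 20 without a 1 is consistent with each individual roll along the way being a favorable bet.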

Wednesday, February 24, 2016

sequences and series - Show that $\sum_{n=1}^{\infty}\left[\frac{\beta(2n)}{n}-\ln\left(\frac{n+1}{n}\right)\right] = \ldots$



The Dirichlet beta function, valid for all $s\ge 1$, is
$$\beta(s)=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)^s}$$



The particular value of $\Gamma\left(\frac{1}{4}\right)=3.6256099...$



Euler's constant is defined by $$\lim_{n \to \infty}\left[H_n-\ln(n)\right]=\gamma$$



Euler also showed that $$\gamma=\sum_{n=1}^{\infty}\left(\frac{1}{n}-\ln\left(\frac{n+1}{n}\right)\right)$$




Where $H_n=\sum_{k=1}^{n}\frac{1}{k}$



Show that,



$$\sum_{n=1}^{\infty}\left[\frac{\beta(2n)}{n}-\ln\left(\frac{n+1}{n}\right)\right] =\gamma+\ln\left(\frac{16\pi^2}{\Gamma^4\left(\frac{1}{4}\right)}\right)$$



It would be greatly appreciated if anyone could prove this identity.


Answer



With the same approach to a recent question of yours,




$$ 1-\beta(2n)=\frac{1}{\Gamma(2n)}\int_{0}^{+\infty}\frac{x^{2n-1} e^{-3x}}{1+e^{-2x}}\,dx $$
hence:
$$ \sum_{n\geq 1}\frac{1-\beta(2n)}{n} = 2\int_{0}^{+\infty}\frac{e^{-3x}}{1+e^{-2x}}\cdot\frac{\cosh x-1}{x}\,dx.$$
By Frullani's theorem, or differentiation under the integral sign (Feynman's trick):
$$ 2\int_{0}^{+\infty}\frac{\cosh x-1}{x}e^{-(3+2m)x}\,dx =-\log(m+1)+2\log\left(m+\frac{3}{2}\right)-\log(m+2),$$
hence:
$$\sum_{n\geq 1}\frac{1-\beta(2n)}{n} = \sum_{m\geq 0}(-1)^m\left(-\log(m+1)+2\log\left(m+\frac{3}{2}\right)-\log(m+2)\right)$$
is the logarithm of an infinite product that can be computed from the limit product representation for the $\Gamma$ function. Namely:
$$ \sum_{m\geq 0}\left[\log\left(2m+\frac{3}{2}\right)-\frac{1}{2}\log(2m+1)-\log\left(2m+\frac{5}{2}\right)+\frac{1}{2}\log(2m+3)\right]\\=\frac{\log 2}{2}-\log\Gamma\left(\frac{3}{4}\right)+\log\Gamma\left(\frac{5}{4}\right).$$

Now it is enough to use the $\Gamma$ reflection formula and the identity $\Gamma(z+1)=z\,\Gamma(z)$ to express your sum in terms of $\gamma$ and $\Gamma\left(\frac{1}{4}\right)$. The original identity is, in fact, equivalent to:




$$ \prod_{m\geq 0}\frac{(4m+3)(4m+3)(4m+6)}{(4m+2)(4m+5)(4m+5)}=\color{red}{\frac{1}{16\pi^2}\,\Gamma\left(\frac{1}{4}\right)^4}.$$



functional equations - Real Analysis Proofs: Additive Functions

I'm new here and could really use some help please:


Let $f$ be an additive function. So for all $x,y \in \mathbb{R}$, $f(x+y) = f(x)+f(y)$.



  1. Prove that if there are $M>0$ and $a>0$ such that if $x \in [-a,a]$, then $|f(x)|\leq M$, then $f$ has a limit at every $x\in \mathbb{R}$ and $\lim_{t\rightarrow x} f(t) = f(x)$.




  2. Prove that if $f$ has a limit at each $x\in \mathbb{R}$, then there are $M>0$ and $a>0$ such that if $x\in [-a,a]$, then $|f(x)| \leq M$.



if necessary the proofs should involve the $\delta - \varepsilon$ definition of a limit.



The problem had two previous portions to it that I already know how to do. However, you can reference them to do the posted portions of the problem. Here they are:



(a) Show that for each positive integer $n$ and each real number $x$, $f(nx)=nf(x)$.


(b) Suppose $f$ is such that there are $M>0$ and $a>0$ such that if $x\in [−a,a]$, then $|f(x)|\le M$. Choose $\varepsilon > 0$. There is a positive integer $N$ such that $M/N < \varepsilon$. Show that if $|x-y|

Tuesday, February 23, 2016

elementary number theory - My proof of $m \cdot 0 = 0 = 0 \cdot m$ for all $m \in \mathbb{Z}$




I have the following proposition to prove:




For all $m \in\mathbb\ Z$, $m \cdot 0 = 0 = 0 \cdot m$




I can use the following axioms:





  1. commutativity

  2. associativity

  3. distributivity

  4. identity for addition ($0$)

  5. identity for multiplication ($1$)

  6. additive inverse

  7. cancellation: Let $m,n,p$ be integers. If $m \cdot n = m \cdot p$ and $m \ne 0$, then $n = p$.



Here is my proof:




\begin{align*}
m \cdot 0 &= m \cdot (m + (-m))\\
m \cdot 0 &= (m \cdot m) + (m \cdot (-m))\\
m \cdot 0 &= (m \cdot m) +(m \cdot -1 \cdot m) \\
m \cdot 0 &= (m \cdot m) +-1 \cdot (m \cdot m) \\
m \cdot 0 &= (m \cdot m) - (m \cdot m) \\
m \cdot 0 &= 0
\end{align*}




However, I am not sure, given this simple set of axioms, that this solution is correct. More specifically, is factoring $-m$ as $-1 \cdot m$ acceptable? Or is that another proposition that I should prove beforehand?


Answer



Assume that $m$ is an integer. By the commutative property we know that $m \cdot 0 = 0 \cdot m$.

Now, we only need to prove that $m \cdot 0 = 0$:

$m \cdot 1 = m \cdot 1$, because $1$ is the identity under multiplication.

$m \cdot (1+0) = m \cdot 1$, because $0$ is the identity under addition.

Using the distributive property,

$(m \cdot 1)+(m \cdot 0) = (m \cdot 1)$

$m +(m \cdot 0) = m$

$-m + m +(m \cdot 0) = -m + m$ ($-m$ is the inverse of $m$ under addition.)

$(-m + m) +(m \cdot 0) = (-m + m)$, by the associative property.

$0 +(m \cdot 0) = 0$, by the definition of the identity under addition.

$m \cdot 0 = 0$. Q.E.D.


calculus - How to find $\lim_{x \to 0} \left( \frac{1}{\sin(x)}- \frac{1}{\arcsin(x)}\right)$

I want to do the problem without using L'Hopital's rule. I have
$$\frac{1}{\sin(x)}- \frac{1}{\arcsin(x)} = \frac{x}{\sin(x)}\cdot\frac{x}{\arcsin(x)}\cdot\frac{\arcsin(x)-\sin(x)}{x^2}$$

and I'm not quite sure how to deal with $\dfrac{\arcsin(x)-\sin(x)}{x^2}$; apparently its limit is $0$? In which case the whole limit would be $0$. But how would I show this without using L'Hopital's rule? Thanks for any help.

Monday, February 22, 2016

finite fields - Construction of addition and multiplication table for GF(4)

I am dealing with finite fields and have somehow got stuck. The construction of a prime field $GF(p)$, $p \in \mathbb{P}$, is pretty easy because every operation is modulo $p$. In other words, $GF(p)$ contains all integers ranging from $0$ to $p-1$.



However, non-prime fields are a bit trickier. Given the power $q = p^n$ with $p \in \mathbb{P}$ and $n \in \mathbb{N}$, one has to find an irreducible polynomial $g(x)$ of degree $n$. Then the construction of $GF(p^n)$ is the following:




$GF(p^n) = \frac{GF(p)[x]}{g}$



Theoretically I get along with this definition. Unfortunately, I fail to construct addition and multiplication tables for any $GF(q)$. Though I can easily find the desired table on the internet, I have not yet found an explanation that really made me understand.



I would like to know how to create the addition and multiplication tables for $GF(2^2)$ with the above knowledge. $GF(2^2)$ contains four elements. Let's call them $\{0,1, \alpha, \beta \}$. $g$ must be $x^2 + x + 1$ as there is no other irreducible polynomial of degree 2. So far I am able to construct the addition table partly (question marks indicating despair...):



+        | 0        1        $\alpha$  $\beta$
---------+--------------------------------------
0        | 0        1        $\alpha$  $\beta$
1        | 1        0        ?         ?
$\alpha$ | $\alpha$ ?        ?         ?
$\beta$  | $\beta$  ?        ?         ?



I don't understand how to calculate for example $1+\alpha$. The result is $\beta$, but I don't know why. Concerning the above short explanation, I have to divide $1+\alpha$ by $g$. But how can I do this?



Thanks for your help
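Not part of the original question, but here is one way to make the construction concrete (a sketch in Python; the encoding of $\alpha$ as $x$ and $\beta$ as $x+1$ is an assumption made for illustration). Elements of $GF(4)$ are polynomials of degree $<2$ over $GF(2)$, added coefficient-wise and multiplied modulo $g(x)=x^2+x+1$:

```python
# Elements of GF(4) as polynomials a1*x + a0 over GF(2), encoded as (a1, a0).
# Reduction uses the irreducible polynomial g(x) = x^2 + x + 1, i.e. x^2 = x + 1.

def add(p, q):
    # Addition is coefficient-wise XOR (characteristic 2).
    return (p[0] ^ q[0], p[1] ^ q[1])

def mul(p, q):
    a1, a0 = p
    b1, b0 = q
    # (a1 x + a0)(b1 x + b0) = a1 b1 x^2 + (a1 b0 + a0 b1) x + a0 b0
    c2 = a1 & b1
    c1 = (a1 & b0) ^ (a0 & b1)
    c0 = a0 & b0
    # Reduce modulo g: x^2 = x + 1, so fold c2 into the x and constant terms.
    return (c1 ^ c2, c0 ^ c2)

names = {(0, 0): "0", (0, 1): "1", (1, 0): "a", (1, 1): "b"}
elems = list(names)

# e.g. 1 + a: (0,1) + (1,0) = (1,1), which is b -- no division by g needed,
# since addition never raises the degree.
print("addition:")
for p in elems:
    print([names[add(p, q)] for q in elems])
print("multiplication:")
for p in elems:
    print([names[mul(p, q)] for q in elems])
```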

soft question - Surprising identities / equations

What are some surprising equations/identities that you have seen, which you would not have expected?




This could be complex numbers, trigonometric identities, combinatorial results, algebraic results, etc.



I'd request to avoid 'standard' / well-known results like $ e^{i \pi} + 1 = 0$.



Please write a single identity (or group of identities) in each answer.



I found this list of Funny identities, in which there is some overlap.

modular arithmetic - How to calculate $2^{-1} \pmod{10}$

I want to know how to compute the inverse of a number when the modulus is composite and the number is not coprime to it.



Can anyone give me the options with an example of how to compute with $2 ^ {-1} \pmod{10}$?



Is there a way to do a factorisation or some similar technique that ends with the same result, like $1/9 \pmod{10} = 1/3 \times 1/3 \pmod{10}$, which is possible because $3$ is coprime to $10$?



Thanks
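A quick check of the factorisation idea (a sketch, not part of the original question; `pow(a, -1, m)` computes modular inverses in Python 3.8+):

```python
import math

# 9 is coprime to 10, so 9**-1 mod 10 exists, and since 9 = 3*3
# it should equal (3**-1)**2 mod 10.
inv9 = pow(9, -1, 10)
inv3 = pow(3, -1, 10)
print(inv9, inv3, inv3 * inv3 % 10)  # 9 7 9

# 2 is NOT coprime to 10, so no inverse exists
# (pow(2, -1, 10) would raise ValueError):
print(math.gcd(2, 10))  # 2
```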

summation - How does the sum of the series “$1 + 2 + 3 + 4 + 5 + 6\ldots$” to infinity = “$-1/12$”?

(I was requested to edit the question to explain why it is different from a proposed duplicate question. This seems counterproductive to do here, inside the question itself, but that is what I have been asked by the site and moderators. There is no way for me to vote against their votes. So, here I go: Please stop voting this as a duplicate so quickly, which will eventually lead to this question being closed off. Yes, the other question linked to asks the same math, but any newcomer to the problem who was exposed to it via physics, as I was, will prefer this question instead of the one that is purely mathematical. I beg the moderators to not be pedantic on this one. This question spills into physics, which is why I did the cross post to the physics forum as well.)



How does the sum of the series “1 + 2 + 3 + 4 + 5 + 6…” to infinity = “-1/12”, in the context of physics?




I heard Lawrence Krauss say this once during a debate with Hamza Tzortzis (http://youtu.be/uSwJuOPG4FI). I found a transcript of another debate between Krauss and William Lane Craig which has the same sum. Here is the paragraph in full:




Let’s go to some of the things Dr. Craig talked about. In fact, the
existence of infinity, which he talked about which is
self-contradictory, is not self-contradictory at all. Mathematicians
know precisely how to deal with infinity; so do physicists. We rely on
infinities. In fact, there’s a field of mathematics called “Complex
Variables” which is the basis of much of modern physics, from
electro-magnetism to quantum mechanics and beyond, where in fact we

learn to deal with infinity; without the infinities we couldn’t do the
physics. We know how to sum infinite series because we can do complex
analysis. Mathematicians have taught us how. It’s strange and very
unappetizing, and in fact you can sum things that look ridiculous. For
example, if you sum the series, “1 + 2 + 3 + 4 + 5 + 6…” to infinity,
what’s the answer? “-1/12.” You don’t like it? Too bad! The
mathematics is consistent if we assign that. The world is the way it
is whether we like it or not.





-- Lawrence Krauss, debating William Lane Craig, March 30, 2011



Source: http://www.reasonablefaith.org/the-craig-krauss-debate-at-north-carolina-state-university



CROSS POST: I'm not sure if I should post this in mathematics or physics, so I posted it in both. Cross post: https://physics.stackexchange.com/questions/92739/how-does-the-sum-of-the-series-1-2-3-4-5-6-to-infinity-1-12



EDIT: I did not mean to begin a debate on why Krauss said this. I only wished to understand this interesting math. He was likely trying to showcase Craig's lack of understanding of mathematics or logic or physics or something. Whatever his purpose was can be determined from the context of the full transcript that I linked to above; anyone who is interested, please read it. Please do not judge him out of context. Since I have watched one of these debates, I understand the context and do not hold the lack of a full breakdown as being ignorant. Keep in mind the debate I heard this in was different from the debate above.

Sunday, February 21, 2016

measure theory - Computation of $\lim_{n \rightarrow \infty} n\left(\int_{0}^{\infty} \frac{1}{1+x^4+x^n}\, \mathrm{d}m(x)-C\right)$



Let $m(x)$ be the Lebesgue measure. I want to show that there exists $C\in \mathbb R$ such that
$$\lim_{n \rightarrow \infty} n\left(\int_{0}^{\infty} \frac{1}{1+x^4+x^n} \mathrm{d}m(x)-C\right)$$
exists as a finite number and then compute the limit. Rewriting this gives:




$$\lim_{n \rightarrow \infty}\int_{0}^{\infty}n \left(\frac{1-C(1+x^4+x^n)}{1+x^4+x^n}\right)\mathrm{d}m(x)$$



And I thought I should use the Lebesgue dominated convergence theorem somehow to put the limit inside the integral (because calculating an integral of a rational function like this doesn't seem like a good idea) but I can't find any dominating function. Also I tried splitting into the intervals $(0,1)$ and $(1,\infty)$ but this didn't help me either... Any help is appreciated!


Answer



Write the integral as
$$\int_0^1 \frac{dx}{1+x^4 + x^n} + \int_1^\infty\frac{dx}{1+x^4 + x^n}.$$
The second integral goes to $0,$ while the first goes to $\int_0^1 \frac{dx}{1+x^4},$ so $$C = \int_0^1 \frac{dx}{1+x^4} = \frac{\pi + 2 \mathop{acoth}(\sqrt{2})}{4\sqrt{2}}.$$
Now that we know what $C$ is, we need to estimate the error.




The second integral is comparable to $\int_1^\infty x^{-n}\, dx = \frac{1}{n-1}$: it is smaller than that integral (since $1+x^4+x^n > x^n$), with a matching lower bound of the same order near $x = 1$, so the second integral contributes $1$ to the limit.



The first integral less $C$ equals $$\int_0^1 \frac{x^n d x}{1+x^4 + x^n},$$ which can be explicitly evaluated as
$$
\frac{1}{8} \left(\psi ^{(0)}\left(\frac{n+5}{8}\right)-\psi
^{(0)}\left(\frac{n+1}{8}\right)\right),
$$
where $\psi^{(0)}$ is the digamma function; this expression is asymptotic to $\frac{1}{2n}$, so the limit is $3/2$. If the digamma does not make you happy, it is easy to see directly that the first integral is asymptotic to $\frac{1}{2n}$: for any $\epsilon$, the integral from $0$ to $1-\epsilon$ decreases exponentially in $n$, and near $1$ the estimate is easy to get by approximating $1+x^4$ by $2$.
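As a numerical cross-check of the constant (not part of the original answer), one can compare a Simpson-rule approximation of $\int_0^1 \frac{dx}{1+x^4}$ with the closed form above:

```python
import math

# Simpson's rule on [a, b] with n subintervals (n must be even).
def simpson(f, a, b, n=1000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

C_numeric = simpson(lambda x: 1 / (1 + x**4), 0.0, 1.0)

# Closed form C = (pi + 2*acoth(sqrt(2))) / (4*sqrt(2)).
def acoth(x):
    return 0.5 * math.log((x + 1) / (x - 1))

C_closed = (math.pi + 2 * acoth(math.sqrt(2))) / (4 * math.sqrt(2))
print(C_numeric, C_closed)  # both ≈ 0.86697
```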


field theory - Confusion on Artin's Theorem (Linear Independence of Group Homomorphism)



The Artin's Theorem states as follows:




Let $G$ be a group. and let $f_1,\dots, f_n\colon G\to K^{\times}$ be distinct homomorphisms of $G$ into the multiplicative group of a field. Prove that these functions are linearly independent over $K$ Namely, if $a_1,a_2,\dots\in K^{\times}$, then $\forall g\in G$, $$a_1f_1(g)+\cdots+a_nf_n(g)=0$$ implies $a_1=a_2=\cdots=a_n=0$.





Let $G=\mathbb{Z}_5^{\times}$, the group of units $U(5)$, and let $K=\mathbb{Z}_5$, hence $K^{\times}=\mathbb{Z}_5 \setminus \{0\}$.



Now, consider $f_1(x)=x$ be the identity map and let $f_2(x)$ defined as follows:



$1\mapsto1\\2\mapsto3\\3\mapsto2\\4\mapsto4.$



$f_1$ is the identity mapping basically from $\mathbb{Z}_5^{\times}$ to $\mathbb{Z}_5^{\times}$ hence trivially homomorphism. We show that $f_2$ is also a homomorphism (note that all these calculations are in modulo $5$):




$f_2(2)f_2(2)=3\cdot3=4=f_2(4)=f_2(2\cdot2)
\\
f_2(2)f_2(3)=3\cdot2=1=f_2(1)=f_2(2\cdot3)
\\
f_2(3)f_2(3)=2\cdot2=4=f_2(4)=f_2(3\cdot3)
\\
f_2(2)f_2(4)=3\cdot4=2=f_2(3)=f_2(2\cdot4)
\\
f_2(3)f_2(4)=2\cdot4=3=f_2(2)=f_2(3\cdot4)
\\
f_2(4)f_2(4)=4\cdot4=1=f_2(1)=f_2(4\cdot4).
$



Then the theorem is clearly false since $f_1(2)+f_2(2)=2+3=0$, which is not linearly independent.



This theorem has been proven like decades ago so it must be my reason that is wrong, am I understanding the theorem incorrect or am I missing something here?



Help would be appreciated!



(Found out that this theorem has been proven here)



Answer



For a set of maps, $f_1, f_2, \dots, f_n$ to be linearly independent over $K^\times$, it must be the case that the only solution to
$$ \forall g \in G, k_1 f_1(g) + k_2 f_2(g) + \cdots + k_n f_n(g) = 0_K $$
where the $k_i \in K$ for all $i$ and $0_K$ is the additive identity in $K$, is $k_1 = k_2 = \cdots = k_n = 0$.



Note that this does not say that it is enough that $k_1 f_1(g) + k_2 f_2(g) + \cdots + k_n f_n(g) = 0_K$ for one choice of $g \in G$. It must be the case simultaneously for all $g \in G$ (because this is not a statement about the images of elements being independent; it is a statement about functions being independent).



So, for your two functions to be linearly independent, the following set of equations must only be simultaneously satisfied with the $a_i$ are all zero:\begin{align*}
a_1 f_1(1) + a_2 f_2(1) = a_1 + a_2 = 0 \\
a_1 f_1(2) + a_2 f_2(2) = 2a_1 + 3a_2 = 0 \\

a_1 f_1(3) + a_2 f_2(3) = 3a_1 + 2a_2 = 0 \\
a_1 f_1(4) + a_2 f_2(4) = 4a_1 + 4a_2 = 0 \\
\end{align*}

The first and fourth force $a_1 = -a_2$. But using that in the second gives $a_2 = 0$, so that $a_1 = 0$. I.e., the only way all of these are simultaneously satisfied is when the $a_i$ are all zero.






If you want this to look like linear algebra, then replace $f_i(g)$ with the vector $v_i = (f_i(g): g \in G)$, where we assume $G$ has been well-ordered and the elements of that vector are ordered in the same way. Then we ask that the set $\{v_i : i=1, \dots, n\}$ is linearly independent over $K$. That is, we do not look at the $f_i$ on a single $g \in G$. Instead, we treat each $f_i$ as equivalent to its $G$-indexed sequence of images.



So, for your two $f$s, the vectors whose independence we need to resolve are

$$(1,2,3,4)$$
and
$$(1,3,2,4) \text{.}$$
Then linear independence requires the solution to
$$ a_1 (1,2,3,4) + a_2 (1,3,2,4) = (a_1 + a_2, 2 a_1 + 3a_2, 3a_1 + 2a_2, 4a_1 + 4a_2) = (0,0,0,0) $$
is $a_1 = a_2 = 0$. By the same argument as used above the horizontal divider, the only way this equation is satisfied is when $a_1 = a_2 = 0$.
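The finite check above can also be carried out mechanically; here is a small Python sketch (not part of the original answer) that enumerates all coefficient pairs over $\mathbb Z/5$:

```python
# Over K = Z/5, the system a1*(1,2,3,4) + a2*(1,3,2,4) = (0,0,0,0) (mod 5)
# forces a1 = a2 = 0, even though single components can vanish
# (e.g. the component for g = 2 gives 2 + 3 = 0 mod 5).

p = 5
v1 = (1, 2, 3, 4)  # images of f1 on g = 1, 2, 3, 4
v2 = (1, 3, 2, 4)  # images of f2 on g = 1, 2, 3, 4

solutions = [
    (a1, a2)
    for a1 in range(p)
    for a2 in range(p)
    if all((a1 * x + a2 * y) % p == 0 for x, y in zip(v1, v2))
]
print(solutions)  # only the trivial solution: [(0, 0)]
```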


geometry - New SAT Math Section: Pythagorean Theorem on Soccer Fields

So I attempted this problem and I'm very sure I'm doing it right but I keep getting it wrong as my answer choice is not even one of the answer choices listed. There is a picture that goes with the problem that I have attached.



Question: The picture below shows the dimensions of a soccer
field. Let x be the distance from the northwest corner
to the center of the eastern side and let y be the
distance from the northwest corner to the southeast
corner. To the nearest meter, what is y – x?



Image for the Question




So I drew out the triangles. The smaller one has a horizontal side of 90 m and a vertical side of 40 m, so by the Pythagorean theorem $x$ (the hypotenuse) is about 98.49 m. I did the same with the larger triangle, with side lengths of 120 m and 80 m, giving $y \approx 144.22$ m. Calculating $y - x$ results in 45.73 m, which rounds to 46 m, and that is not an answer choice. The correct answer choice is supposedly A: 18 m. I'm not sure if the answer key is wrong or if I'm doing the problem wrong.
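For what it's worth, the questioner's arithmetic can be reproduced directly (the dimensions are read from the figure, which is not included here):

```python
import math

# Reproducing the stated computation: 90 m x 40 m triangle for x,
# 120 m x 80 m triangle for y.
x = math.hypot(90, 40)   # hypotenuse of the smaller triangle
y = math.hypot(120, 80)  # hypotenuse of the larger triangle
print(round(x, 2), round(y, 2), round(y - x))  # 98.49 144.22 46
```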

calculus - Proving that an additive function $f$ is continuous if it is continuous at a single point



Suppose that $f$ is continuous at $x_0$ and $f$ satisfies $f(x)+f(y)=f(x+y)$. Then how can we prove that $f$ is continuous at $x$ for all $x$? I seem to have problems doing anything with it. Thanks in advance.


Answer



Fix $a\in \mathbb{R}.$




Then



$\begin{align*}\displaystyle\lim_{x \rightarrow a} f(x) &= \displaystyle\lim_{x \rightarrow x_0} f(x - x_0 + a)\\ &= \displaystyle\lim_{x \rightarrow x_0} [f(x) - f(x_0) + f(a)]\\& = (\displaystyle\lim_{x \rightarrow x_0} f(x)) - f(x_0) + f(a)\\ & = f(x_0) -f(x_0) + f(a)\\ & = f(a).
\end{align*}$



It follows $f$ is continuous at $a.$


Question regarding the Cauchy functional equation



Is it true that, if a real function $f$ satisfies $f(x+y) = f(x) + f(y)$ and vanishes at some $k \neq 0$, then $f(x) = 0$? Over the rationals (or, allowing certain conditions like continuity or monotonicity), this is clear, since it is well known that the only solutions to this equation are functions of the form $f(x) = cx$. The reason I'm asking is to see whether or not there are "weird" solutions other than the trivial one.



Some observations are that $f(x) = f(x+k) = -f(k-x)$; $f$ is periodic with period $k$.




It is easy to see that at $x=\frac{k}{2}$ the function also vanishes, and so, iterating this process, the function vanishes at and has a "period" of $\frac{k}{2^n}$ for all $n$. If the period can be made arbitrarily small, I want to say that implies the function is constant, but of course I don't know how to preclude pathological functions.


Answer



The values of $f$ can be assigned arbitrarily on the elements of a Hamel basis for the reals over the rationals, and then extended to all of $\mathbb{R}$ by $\mathbb{Q}$-linearity. So (assuming the Axiom of Choice) there are indeed weird solutions.


Saturday, February 20, 2016

algebra precalculus - Proof verification: show that $x - a = b$ can be rewritten as $x = b + a$.



I am beginning an introductory college math course to catch up from my bad high school education. This is one of my first proofs.





Prove that $x - a = b$ can be rewritten as $x = b + a$.




We have been given the properties of the operations of the set of the real numbers (not sure how to latex that).



My proof is this:



$x - a - (-a) = b - (-a)$




$x - 0 = b + a$



$x = b + a$



I'm not completely sure this is correct.



I have another, more important and general doubt. In the proof I use the fact that adding something to both sides of an equation does not change the equation. Do I need to prove this, since no proof has been given in this course, if I want to use it? We are proving very intuitively obvious theorems, so I'm not sure what other intuitively obvious theorems I can use without proving first!



Here's an attempt:




Theorem: adding $x \in \mathbb{R}$ to both sides of an equation does not change the equation.



If $a, b$ are real numbers and $a = b$, then $a$ and $b$ are the same. $a + x = b + x$ can then be rewritten as $a + x = a + x$, since a = b. Both sides are the same, so $a + x = b + x$.



Here I'm not sure how to say that a bunch of operations in the real numbers is a real number. This also should work for all operations we haven't mentioned yet: if $a^{2/3} = b^{4/7}$, then $a^{2/3} + c = b^{4/7} + c$. I'm not sure if this complicates things, but I have no idea how to say this either way.



Let's also ignore for a second that this is an introductory course. Would I need to prove this if I was asked to prove the theorem in an exam?


Answer




$x - a - (-a) = b - (-a)$





What is "$-$"? Well, we do learn it to be an operation, and indeed you can define subtraction of reals because $\mathbb R$ is an additive group, which by definition means that for every real $x$ there exists a real, denoted by $-x$, with the property $x+(-x) = -x+x = 0$. This allows us to define subtraction:



$$x-y:=x+(-y).$$



However, there are subtleties here, the first one being that the additive inverse is unique (which can be easily proved from the group axioms) and therefore the above really is a well-defined operation.



But, what properties of subtraction do you know? For example, it is neither associative nor commutative and, because of that, ill-suited for proofs like these where you need to carefully pay attention to axioms.




More importantly, it is a mystery why you would choose to subtract $-a$ instead of just adding $a$. Note that $x-a-(-a) = x-a+a$.



Furthermore, what does $x-a-(-a)$ really mean? It should actually be $(x-a)-(-a)$ and after that one should invoke associativity to simplify.



With the above comments in mind, what I would do is the following:



\begin{align}
x-a = b &\implies x+(-a) = b\\
&\implies (x+(-a))+a = b+a\\
&\implies x+(-a+a) = b+a\\

&\implies x+0 = b+a\\
&\implies x = b+a\\
\end{align}



You should be able to recognize what axiom was used in each line, except maybe the second line. So, what is going on there?



Well, reals come equipped with binary operation $+\colon\mathbb R\times\mathbb R\to \mathbb R$, either by construction or from axiomatic definition. Binary operation is by definition a function, so, for any real number $a$, we can define a new function $f_a\colon\mathbb R\to\mathbb R$ with formula $f_a(x) = x + a$. It is important that $f_a$ is function, since all functions must satisfy $$x = y\implies f(x) = f(y)$$
and letting $f = f_a$, we get



$$x = y \implies x + a = y + a,$$




which is what we did in the above second line.


proof verification - Brauer characters: independence over $\overline{\mathbb{Q}}$ implies independence over $\mathbb{C}$




The proof of theorem in title has already been sketched in a question on MathStack (here); I tried to write the detailed proof, and I want to check it.



Let $\phi_1,\cdots,\phi_n$ be all the Brauer characters of a finite group $G$ (so they have domain $G_{p'}=\{g\in G: p\nmid o(g)\}$ and values in $\mathcal{O}\subseteq \mathbb{C}$, the ring of algebraic integers; $p$ is the characteristic of the field $F$ over which the inequivalent irreducible $F$-representations of $G$ with the above Brauer characters are defined).



Claim: Suppose $\phi_1,\cdots,\phi_n$ are independent over $\overline{\mathbb{Q}}$; then they are independent over $\mathbb{C}$.



Proof: (1) First, if $|G_{p'}|=m$, then $m$ should be $\leq n$; otherwise the $\phi_i$'s cannot be independent.



(2) Let $G_{p'}=\{g_1,g_2,\cdots,g_m\}$. For each $g_i$ we associate a row-vector
$$[\phi_1(g_i),\phi_2(g_i),\cdots,\phi_n(g_i)]\in \mathbb{\overline{Q}}^n.$$

We get $m$ vectors $\{v_{g_1}, v_{g_2},\cdots,v_{g_m}\}$ in $\overline{\mathbb{Q}}^n$, and $m\leq n$.



(3) The $\overline{\mathbb{Q}}$-independence of $\phi_i$'s is equivalent to say that these $m$ rows in $\overline{\mathbb{Q}}^n$ are independent over $\overline{\mathbb{Q}}.$ Extend this set to a basis of $\overline{\mathbb{Q}}^n$:
$$\{v_{g_1}, v_{g_2},\cdots,v_{g_m}, w_{m+1},\cdots, w_n\}.$$
(4) The matrix $P$ formed by these $n$ vectors as rows of a matrix $M_n(\overline{\mathbb{Q}})$ will be invertible.



(5) Hence the matrix $P$ as an element of $M_n(\mathbb{C})$ will be invertible.



(6) Hence $\phi_1,\cdots,\phi_n$ as functions into $\mathbb{C}$ are $\mathbb{C}$-independent.




Is this proof correct?


Answer



Your proof looks fine to me. One thing I'll point out is that this really has nothing to do with the fact that you are dealing with Brauer characters.



You have some vectors $v_1, v_2, ... , v_n$ in a finite dimensional vector space $V$ (here class functions on $G_{p'}$) over one field $k$ (in this case $\bar{\mathbb{Q}}$) that are linearly independent and you want to make sure they remain linearly independent when you extend scalars to a field extension $K$ ($\mathbb{C}$ in this case). More precisely you want to know that $v_1\otimes 1, v_2 \otimes 1, ... $ are linearly independent inside $V \otimes K$. Your proof works fine in this level of generality.



I'll note that in fact $V$ need not be finite dimensional for this result to hold, your proof would need to be modified a bit for the infinite dimensional case (as you complete to a basis and show a certain matrix is invertible) but it's not that hard to fix.


What's wrong with that proof?



What is wrong with this proof?




$(-1)=(-1)^{\frac{2}{2}}=(-1)^{2\times \frac{1}{2}}=\sqrt{1}=1$ then $1=-1$


Answer



$x^{\frac{1}{2}}$ is a multiple-valued "function", since in general $x$ has two square roots. One could also write:



$$\sqrt1=-1$$


algebra precalculus - How to find the least number of objects in a problem involving profits?

The problem is as follows:




In an electronics factory, the owner calculates that the cost to
produce his new model of portable TV is $26$ dollars. After meeting
with the distributors, he agrees the sale price for his new product to
be $25$ dollars each and additionally $8\%$ more for each TV set sold
after $8000$ units. What is the least number of TV's he has to sell in
order to make a profit?.





The answers are:




  • 16000

  • 15001

  • 16001

  • 15999

  • 17121




This problem has made me go in circles about how to express it as a mathematical statement. I'm not sure whether it requires the use of inequalities.



What I have tried so far is to think this way:



The first scenario: what if he sells exactly $8000$ units? Then this becomes:



$$\textrm{production cost:}\,26\frac{\$}{\textrm{unit}} \times 8000\,\textrm{units} = 208000\,\$$$




$$\textrm{sales:}\,25\frac{\$}{\textrm{unit}} \times 8000\,\textrm{units}=\,200000\,\$$$



Therefore there will be a loss of $8000\,\$$, as



$$208000\$-200000\$\,=\,8000\,\$$$



So I thought: what if I consider the second part of the problem, which says that he will receive an additional $8\%$ for units sold after the first $8000$?



Therefore his new sale price will be $27\,\$$ because:




$$25+\frac{8}{100}\left(25\right )=27\,\$$$



So I thought this could be used in the previous two relations, but how?



I tried to establish this inequality:



$$26\left(8000+x\right)<25\left(8000\right)+27\left(8000+x\right)$$



But that's where I'm stuck, since it is not possible to obtain a reasonable result from this: one side will be negative and the other positive.




The logic I used was that the production cost of the first $8000$ units plus some additional units must be less than the revenue from selling the first $8000$ units plus the revenue from the additional units.



However, there seems to be an error in this approach. Can somebody help me find the right way to solve this problem?
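Since the post gives answer choices, here is a brute-force check of one reading of the pricing (an assumption, not from an answer key): each set costs 26 to make, sells for 25 up to 8000 units, and for 27 (25 plus 8%) for every unit beyond 8000.

```python
# Profit under the assumed pricing scheme: 25 per unit for the first 8000
# units, 27 per unit afterwards, against a production cost of 26 per unit.
def profit(x):
    revenue = 25 * min(x, 8000) + 27 * max(x - 8000, 0)
    return revenue - 26 * x

# Find the least x with positive profit.
x = 1
while profit(x) <= 0:
    x += 1
print(x)  # 16001, matching one of the listed answer choices
```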

Friday, February 19, 2016

calculus - $dx=\frac{dx}{dt}dt$. Why is this equality true and what does it mean?

$dx=\frac {dx}{dt}dt$. I know that this deduction is obvious from the chain rule, given that we treat our $dx$ and $dt$ as just numbers. But I find it quite unsatisfactory to think of it in that sense. Is there a better / more "calculus-inclined" way of thinking about this equality? Can you please explain both the LHS and RHS individually?

general topology - $\mathbb{S}^3=\mathbb{R}\mathrm{P}^3$? Find mistake in the proof.


It is known that the sphere $\mathbb S^3$ may be decomposed as two solid tori. It may be done with the following "parametrization" $f:(\mathbb S^1\times \mathbb S^1 \times [0,1])_{/\sim}\to \mathbb S^3\subseteq \mathbb R^4$: $$f(\alpha, \beta , t) = (\cos \frac{t\pi}{2} \cdot \alpha,\ \sin \frac{t\pi}{2} \cdot \beta).$$


There must be a mistake somewhere, but I can't find it: I parametrized $\textrm{SO}(3)$ (which is the same as $\mathbb R\textrm P^3\neq \mathbb S^3$) in the same way.



Any matrix from $\textrm{SO}(3)$ is uniquely determined by its first two columns.


Assume that the first column $A$ is different from the poles $\pm(0,0,1)$. Then the second column $B$ can be described in terms of angle. Let's look at $B$ as a vector tangent to the sphere $\mathbb S^2$ at point $A$. Then, $0$-angle corresponds to vector $B$ pointing east, $\pi/2$ corresponds to $B$ pointing north and so on.


So this part of $\textrm{SO}(3)$ may be parametrized by $\mathbb S^2\setminus \{\pm(0,0,1)\} \times \mathbb S^1 \simeq (-1,1) \times \mathbb S^1 \times \mathbb S^1$ (with coordinate functions: $t,\varphi, \psi$).


Now assume that $A=\pm(0,0,1)$. Parametrize $B$ with angle in the following way: $B(\phi)=(0,\cos(\phi+\frac{\pi}{2}), \sin(\phi+\frac{\pi}{2}))$. How to glue it with the previous one? Let:


$$ B(-1,\varphi, \psi) = B(\varphi+\psi),\\ B(1, \varphi, \psi) = B(\varphi-\psi).$$


If we reparametrize the torus $\mathbb S^1\times \mathbb S^1$ with $\alpha = \varphi + \psi$ and $\beta = \varphi - \psi$, we can see that function $g=(A,B)(t,\alpha,\beta):[-1,1] \times \mathbb S^1 \times \mathbb S^1 \to \textrm{SO}(3)$ identifies the same points as $f$ did (with obvious corrections on order of the factors and length of the interval).


The proof relies on these three facts: 1) any continuous map from a compact space $X$ onto a Hausdorff space $Y$ induces a homeomorphism between the obvious quotient $X_{/\sim}$ and $Y$, 2) $B$ is continuous, 3) the identifications are the same.


What am I missing?


Answer



Since savick01 already wrote a self-answer describing where the proof goes wrong, I'll concentrate in this answer on a more intuitive picture of what happens. Therefore I'll also not directly start at the mapping, but first describe some equivalences which help to visualize what happens




Mathematically, this identification is trivial. We use the standard embedding of $S^3$ into $\mathbb R^4$ as the unit sphere around the origin, given by the equation $$x^2 + y^2 + z^2 + w^2 = 1\ ,$$ where a point in $\mathbb R^4$ is given by the coordinates $(x,y,z,w)$. Solving this equation for $w$ gives the solutions $$w = \pm\sqrt{1-x^2-y^2-z^2}\ .$$ Thus the equation has two solutions for $x^2 + y^2 + z^2 < 1$, one for $x^2+y^2+z^2=1$, and none for $x^2+y^2+z^2>1$. Now the equation $x^2+y^2+z^2\le 1$ describes the closed unit ball in $\mathbb R^3$. So we have two unit balls, one for positive and one for negative $w$. The exception is at the border, where $w=0$ and therefore the points of both unit balls with equal coordinates describe the very same point of $S^3$.


So much for the description. But what does it mean? To see that, we go to one dimension less and consider the sphere $S^2$, embedded in $\mathbb R^3$. Now consider the parallel projection when looking for example from positive $z$ direction (that is, from above). This projection means basically removing the $z$ coordinate. The image of the sphere is, of course, the unit disk. Now if a point on the sphere has a positive $z$ value, we can see it from above, otherwise we can't. It is customary to draw "unseen" lines on a sphere dashed in the projection. Another option is to use grey instead of black, which has the advantage that it also works for points. See the following image for an example (although you've probably already seen tons of such images):


Sphere parallel projection


This shows the disk (actually the circle which is the border of the disk), and the image of a great circle, where the front part ($z>0$) is drawn in black and the back part in grey. Now what this actually means is that there are two disks drawn on top of each other, where one disk is drawn in black, and the other in grey:


Sphere frontSphere back


The circle which forms the border of the disk of course only exists once, and only there we can get from one of the disks to the other.


Those two "front" and "back" disks for the 2-sphere are the exact equivalents to the two balls for the 3-sphere. Indeed, we also can combine those two balls into one and use different colours to mark "front" and "back". This gives a visual mental model for $S^3$.



To see what the parametrization $f$ does, we expand the circles $S^1$ into two coordinates each using the standard embedding, so we get the coordinates $$f(\alpha, \beta, t) = (\cos(t\pi/2)\cos\alpha, \cos(t\pi/2)\sin\alpha, \sin(t\pi/2)\cos\beta, \sin(t\pi/2)\sin\beta)$$



Now if we project it into our two balls, we first see that for $t=\mathrm{const}$ and $\beta=\mathrm{const}$ we get a circle with radius $\cos(t\pi/2)$ parallel to the $x$-$y$ plane with center on the $z$ axis, at height $z=\sin(t\pi/2)\cos\beta$ on one of the two spheres.


Now we also let the parameter $\beta$ run, to get the full torus. By doing so, we find that the circle just moves along the $z$ axis, until it reaches the border of the ball where it "jumps" to the other ball and moves again along the $z$ axis. That is, we get a pair of cylinders, one on each sphere, both equal. Note that each pair of corresponding vertical lines on both spheres gives a circle on $S^3$; this is the same as the latitude circles look like lines when looking at earth from the side, and the longitude circles do so when looking from a pole.


The tori for different $t$ correspond to cylinders of different radius, just as if we had used apple corers of different size to cut through the spheres.


There are two special cases: $t=0$ and $t=1$. For $t=0$, we get the equator of the balls (the equator is shared because it's at the border). That is, the torus degenerated to a great circle. For $t=1$, we get the z axis for both balls. Note that this also is a great circle of $S^3$.



Now having explored $S^3$, let's explore the manifold corresponding to $SO(3)$, the group of rotations in the three-dimensional space. As everyone knows, a rotation can be described by its rotation axis and the rotation angle. Since an axis gives a direction and an angle gives a magnitude, one can combine both into a vector. The zero vector is then the identity, and any other vector describes a rotation of the angle given by its length around the corresponding axis. One can show that this is a continuous mapping, that is, only slightly different rotations lead to only slightly different vectors. However, a rotation of $\pi$ around an axis and a rotation of $-\pi$ around that same axis are the same transformation, therefore one has to identify both. This of course means that we can restrict ourselves to vectors of length $\le\pi$ because any longer vector would "wrap around" to the other side. Therefore the rotations are described by a ball of radius $\pi$, where antipodal points on the border sphere are identified.


Of course one can scale that ball to an unit ball.



The first vector of the rotation matrix describes the direction into which the unit vector $e_x$ is rotated. So if we define Euler angles using the $x$ direction instead of the $z$ direction and replace the angle for the $y$ rotation (which goes from $0$ to $\pi$) by its cosine, we get exactly the parametrization from the question: $A$ is determined by two angles describing a point on $S^2$, and $B$ is described by the third angle.


(To be continued; it's now 1am here :-))



Thursday, February 18, 2016

trigonometry - Resolve $A=\cos{(\pi/7)}+\cos{(3\pi/7)}+\cos{(5\pi/7)}$ using $u=A+iB$



With these two sums:
$$A=\cos(\pi/7)+\cos(3\pi/7)+\cos(5\pi/7)$$
$$B=\sin(\pi/7)+\sin(3\pi/7)+\sin(5\pi/7)$$



How to find the explicit value of $A$ using:





  • $u=A+iB$

  • the sum of $n$ terms in a geometric sequence: $u_0\cdot\frac{1-q^{n+1}}{1-q}$



I know the answer is $\frac 12$ from this post, but there is no mention of this method.


Answer



Using Euler formula,



setting $2y=i\frac{\pi}7\implies e^{14y}=-1$




$$A+iB=\sum_{r=0}^2e^{(2r+1)2y}=e^{2y}\cdot\dfrac{1-(e^{4y})^3}{1-e^{4y}}=\dfrac{e^{2y}+1}{1-e^{4y}}=\dfrac1{1-e^{2y}}$$



Now, $\displaystyle\dfrac1{1-e^{i2u}}=\dfrac{-e^{-iu}}{e^{iu}-e^{-iu}}=-\dfrac{\cos u-i\sin u}{2i\sin u}=\dfrac12+i\cdot\dfrac{\cot u}2$



Now equate the real parts.
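As a numerical sanity check (not part of the original answer): the real part should be $\frac12$, and, per the final formula with $u = \pi/14$, the imaginary part should be $\frac{\cot(\pi/14)}{2}$:

```python
import math

# A = sum of cos((2r+1)pi/7), B = sum of sin((2r+1)pi/7), for r = 0, 1, 2.
A = sum(math.cos((2 * r + 1) * math.pi / 7) for r in range(3))
B = sum(math.sin((2 * r + 1) * math.pi / 7) for r in range(3))
print(A)                                # ≈ 0.5
print(B, 0.5 / math.tan(math.pi / 14))  # both ≈ 2.1906
```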


calculus - Show that $\sum_{n=1}^{\infty}\frac{\sin\frac{x}{n}\sin 2nx}{x^2+4n}$ converges uniformly.



How to show that the following series converges uniformly? $$ \sum_{n=1}^{\infty}u_n(x),\ \ \ u_n(x)=\frac{\sin\frac{x}{n}\sin2nx}{x^2+4n},\ \ x\in E=(-\infty;+\infty) $$



At first I tried to apply Dirichlet's test. However, I got stuck while trying to prove that the partial sums of $\sum_{n=1}^{\infty}\sin\frac{x}{n}\sin2nx$ are bounded by some fixed $M$ (multiplying and dividing by $2\sin x$ did not help much). In my other attempts I also got stuck trying to bound the numerator. So, the problem is with these $\sin$ functions.


Answer




As $f(x)=\sum_{n=1}^{\infty}\frac{\sin\frac{x}{n}\sin2nx}{x^2+4n}$ is even, we can restrict the analysis to $[0, \infty)$.


Consider $$v_n(x)=\frac{x}{n(x^2+4n)}.$$


We have $$v_n^\prime(x)=\frac{4n^2-nx^2}{n^2(x^2+4n)^2}$$


Based on that, one can prove that $v_n$ is positive on $(0,\infty)$ and attains its maximum at $x_n = 2\sqrt n$, where its value is $\frac{1}{4n^{3/2}}$. As $\sum \frac{1}{n^{3/2}}$ converges, $\sum v_n(x)$ converges uniformly on $[0, \infty)$ by the Weierstrass M-test.


We then get the uniform convergence of $\sum u_n(x)$, since $\left|\sin\frac{x}{n}\right| \le \frac{x}{n}$ and $\left|\sin 2nx\right| \le 1$ give $\vert u_n(x) \vert \le v_n(x)$ for all $n \in \mathbb N$ and all $x \in [0,\infty)$.
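
A numerical cross-check of the bound (illustration only, with $n=7$ chosen arbitrarily): the sampled maximum of $v_n$, attained near $x_n=2\sqrt n$, works out to $\frac{1}{4n^{3/2}}$, and $v_n$ dominates $|u_n|$ on the grid.

```python
import math

# n = 7 chosen arbitrarily; compare the sampled maximum of v_n with
# 1/(4 n^(3/2)) and check |u_n(x)| <= v_n(x) on the grid.
def u(n, x):
    return math.sin(x / n) * math.sin(2 * n * x) / (x ** 2 + 4 * n)

def v(n, x):
    return x / (n * (x ** 2 + 4 * n))

n = 7
grid = [i * 0.01 for i in range(1, 5000)]
max_v = max(v(n, x) for x in grid)
print(max_v, 1 / (4 * n ** 1.5))                    # nearly equal
print(all(abs(u(n, x)) <= v(n, x) for x in grid))   # True
```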


Wednesday, February 17, 2016

linear algebra - Given a list of axioms, can one prove that a statement is unprovable without exhibiting a model?



Suppose I am given the following axioms for a vector space $V$ over the field $F$:




For addition of vectors




  • Closure

  • Associativity

  • Existence of an additive identity

  • Existence of additive inverse

  • Commutativity




For multiplication by scalars




  • Closure

  • Associativity

  • The two distributive laws



From this I would like to prove that the statement $\forall v \: 1v = v$, where $v \in V$ and $1$ is the multiplicative identity of $F$, cannot be deduced from the above axioms.




One possible method would be to exhibit a model, that is, a set of objects, a field, with operations of addition and multiplication by the elements of the field defined, where all the above axioms are satisfied but the statement $\forall v \: 1v = v$ is not. Is there some method by which I could prove the unprovability of the statement without exhibiting a model?


Answer



For your particular example about vector spaces, the only way I can think of to prove that an axiom is unprovable from the rest is to exhibit a model in which some of the axioms hold and some others do not.






When your theory is sufficiently strong, there is another method to prove that a statement is unprovable. Gödel's second incompleteness theorem states that certain kinds of theories cannot prove their own consistency. Peano Arithmetic, second-order arithmetic, and ZFC and its variants are examples of theories to which the incompleteness theorem applies.



An actual use of this is the proof that ZFC cannot prove that there exists a (weakly) inaccessible cardinal. (You can look up the definition of inaccessible cardinals and other large cardinal properties.) The idea is that if ZFC proved the existence of an inaccessible cardinal, then since inaccessible cardinals are "very large" in some sense, one could use such a cardinal to prove the consistency of ZFC by producing a model of $ZFC$. This would contradict the incompleteness theorem; hence, $ZFC$ cannot prove the existence of inaccessible cardinals. (Also, many set theorists believe that $ZFC$ cannot prove that inaccessible cardinals do not exist.)







Something a bit more down to earth: the fact that you cannot lose the Hydra Game is unprovable in Peano Arithmetic. Basically, in the Hydra Game you have a tree. At each step $n$, you cut off some node of the tree, go down one node, and duplicate what is left above that node $n$ times. To win the game you need to cut off all the "heads". See this website for a better description: http://math.andrej.com/2008/02/02/the-hydra-game/



You can prove, using sufficiently strong mathematics, that whatever you do, you will eventually win. However, Peano Arithmetic cannot prove this result. The fact that you can never lose the Hydra Game can be used to prove the consistency of Peano Arithmetic. Again, by the incompleteness theorem, Peano Arithmetic cannot prove that you can never lose the Hydra Game.


Tuesday, February 16, 2016

real analysis - How to prove that exponential grows faster than polynomial?


In other words, how to prove:



For all real constants $a$ and $b$ such that $a > 1$,


$$\lim_{n\rightarrow\infty}\frac{n^b}{a^n} = 0$$



I know the definition of limit but I feel that it's not enough to prove this theorem.



Answer



We could prove this by induction on integers $k$:


$$ \lim_{n \to \infty} \frac{n^k}{a^n} = 0. $$


The case $k = 0$ is straightforward. I will leave the induction step to you. To see how this implies the statement for all real $b$, just note that every real number is less than some integer. In particular, $b \leq \lceil b \rceil$. Thus,


$$ 0 \leq \lim_{n \to \infty} \frac{n^b}{a^n} \leq \lim_{n \to \infty} \frac{n^{\lceil b \rceil}}{a^n} = 0. $$


The first inequality holds since all the terms are positive. The last equality follows from the integer case established by induction.
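
A numerical illustration of the statement (with $a = 1.5$ and $b = 3$ chosen arbitrarily): even a base barely above $1$ eventually crushes the polynomial.

```python
# a = 1.5, b = 3: the ratio n^b / a^n collapses toward 0 as n grows.
a, b = 1.5, 3
ratios = [n ** b / a ** n for n in (10, 100, 1000)]
print(ratios)  # roughly [17.3, 2.5e-12, 8.1e-168]
```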


functions - Does $f(X\setminus A)\subseteq Y\setminus f(A)\ \forall A\subseteq X$ imply $f$ is injective?



I know that if $f:X\to Y$ is injective then $f(X \setminus A)\subseteq Y\setminus f(A), \forall A\subseteq X$ . Is the converse true i.e.



if $f:X \to Y$ is a function such that $f(X \setminus A)\subseteq Y\setminus f(A), \forall A\subseteq X$ , then is it true that $f$ is



injective ?


Answer




Yes. We prove the contrapositive. Suppose that $f$ is not injective; then there are distinct $x_0,x_1\in X$ such that $f(x_0)=f(x_1)$. Let $A=\{x_0\}$; then



$$f[X\setminus A]=f[X]\nsubseteqq Y\setminus f[A]=Y\setminus\{f(x_0)\}\;,$$
since $x_1\in X\setminus A$ gives $f(x_0)=f(x_1)\in f[X\setminus A]$, while $f(x_0)\notin Y\setminus\{f(x_0)\}$.
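
A concrete instance of this contrapositive (a tiny sketch; the sets $X$, $Y$ and the map $f$ are hypothetical, chosen for illustration):

```python
# A concrete non-injective example: f(0) = f(1) = 'a', f(2) = 'b'.
X = {0, 1, 2}
Y = {'a', 'b'}
f = {0: 'a', 1: 'a', 2: 'b'}
A = {0}  # the singleton {x_0} from the proof

def image(S):
    return {f[x] for x in S}

print(sorted(image(X - A)))           # ['a', 'b']
print(sorted(Y - image(A)))           # ['b']
print(image(X - A) <= Y - image(A))   # False: the containment fails
```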


Monday, February 15, 2016

sequences and series - Blocks of Pyramid Pattern Expression





There is a pattern below, and I am trying to find an algebraic expression for it.



Each layer (from the top).



Diagram.




[diagram: a block pyramid whose layers, from the top, contain 1, 4, 9, and 16 blocks]



So the first layer has 1, second has 4, third has 9, and the fourth has 16.



That's how the sequence is increasing.



What I'm looking for is,



When the second layer is added with the first layer,




Third layer is added with the second and first,



Fourth is added with third,second and first.



So something like this.



[diagrams: the cumulative totals after adding each successive layer]




I am trying to find the algebraic expression for this pattern.



Any ideas??



Thank you


Answer



There is a well-known formula for the sum of the first $n$ squares, but I don't want to spoil your investigation, so I will give you some hints.



First, compute some more terms of the sequence. Three or four more should do.




Multiply all the terms by six, and factor the results. Notice that all of them are multiples of $n$ and $n+1$.
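
For the record, a short computational check of where these hints lead: six times the $n$-th cumulative total factors as $n(n+1)(2n+1)$.

```python
# Six times the n-th cumulative total 1^2 + 2^2 + ... + n^2
# factors as n * (n + 1) * (2n + 1).
for n in range(1, 11):
    total = sum(i * i for i in range(1, n + 1))
    assert 6 * total == n * (n + 1) * (2 * n + 1)
    print(n, total)  # 1 1, 2 5, 3 14, ..., 10 385
```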


Sunday, February 14, 2016

limits - What is $\lim_{x \to \infty} x^2f(x)$ where $\int_0^\infty f(x)\,\mathrm{d}x = 1$

I encountered a question in a test:




If $F(x)$ is a cumulative distribution function of a continuous non-negative random variable $X$, then find $\int_0^\infty (1 - F(x))\,\mathrm{d}x$



After a bit of pondering, I thought that the answer should depend upon the density of the random variable, so I checked the "none of these" option, but the correct answer was $E(X)$. So later I tried to work out the question properly.


If the density of the random variable $X$ is $f(x)$, then it is necessary that $f(x) \ge 0$ and $\int_0^\infty f(x)\,\mathrm{d}x = 1$. Integrating by parts, $$\left. x\Big(1-F(x)\Big)\right|_0^\infty - \int_0^\infty x\left(-\frac{\mathrm{d}}{\mathrm{d}x}F(x)\right)\,\mathrm{d}x$$ which reduces to $$ \lim_{x\to \infty} x\big(1\,- F(x)\big)\;+ \int_0^\infty xf(x) \, \mathrm{d}x $$


Now $\int_0^\infty xf(x)\,\mathrm{d}x$ is clearly $E(X)$, but the limit is where my problem arises. Applying L'Hôpital's rule to the limit, we have $$\lim_{x\to\infty}\frac{1-F(x)}{\frac{1}{x}} = \lim_{x\to\infty}\;x^2f(x).$$ Now is there any way to further reduce that limit to $0$, so that $E(X)$ is the correct answer, or am I doing something wrong?

integration - Prove $\int_0^1 \frac{x-1}{(x+1)\log{x}} \,\text{d}x = \log{\frac{\pi}{2}}$


Prove $$\int_0^1 \frac{x-1}{(x+1)\log{x}} \text{d}x = \log{\frac{\pi}{2}}$$


Tried contouring but couldn't get anywhere with a keyhole contour.


Geometric Series Expansion does not look very promising either.


Answer



Hint. One may set $$ f(s):=\int_0^1 \frac{x^s-1}{(x+1)\log{x}}\: \text{d}x, \quad s>-1, \tag1 $$ then one is allowed to differentiate under the integral sign, getting $$ f'(s)=\int_{0}^{1}\frac{x^s}{x+1}\:dx=\frac12\psi\left(\frac{s}2+\frac12\right)-\frac12\psi\left(\frac{s}2+1\right), \quad s>-1, \tag2 $$where we have used a standard integral representation of the digamma function.


One may recall that $\psi:=\Gamma'/\Gamma$, then integrating $(2)$, observing that $f(0)=0$, one gets



$$ f(s)=\int_0^1 \frac{x^s-1}{(x+1)\log{x}}\: \text{d}x=\log \left(\frac{\sqrt{\pi}\cdot\Gamma\left(\frac{s}2+1\right)}{\Gamma\left(\frac{s}2+\frac12\right)}\right), \quad s>-1, \tag3 $$




from which one deduces the value of the initial integral by putting $s:=1$, recalling that $$ \Gamma\left(\frac12+1\right)=\frac12\Gamma\left(\frac12\right)=\frac{\sqrt{\pi}}2. $$


Edit. The result $(3)$ is more general than the given one.
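
As a numerical sanity check of the value (not a proof), one can approximate the integral with a midpoint rule; the integrand extends continuously to $[0,1]$ with value $0$ at $x=0$ and $\tfrac12$ at $x=1$.

```python
import math

# Midpoint rule on (0, 1); the sample points never touch the endpoints.
def integrand(x):
    return (x - 1) / ((x + 1) * math.log(x))

N = 100_000
h = 1.0 / N
approx = h * sum(integrand((i + 0.5) * h) for i in range(N))
print(approx, math.log(math.pi / 2))  # both ~0.45158
```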


Saturday, February 13, 2016

mathematical physics - Can different choices of regulator assign different values to the same divergent series?



Physicists often assign a finite value to a divergent series $\sum_{n=0}^\infty a_n$ via the following regularization scheme: they find a sequence of analytic functions $f_n(z)$ such that $f_n(0) = a_n$ and $g(z) := \sum_{n=0}^\infty f_n(z)$ converges for $z$ in some open set $U$ (which does not contain 0, or else $\sum_{n=0}^\infty a_n$ would converge), then analytically continue $g(z)$ to $z=0$ and assign $\sum_{n=0}^\infty a_n$ the value $g(0)$. Does this prescription always yield a unique finite answer, or do there exist two different sets of regularization functions $f_n(z)$ and $h_n(z)$ that agree at $z=0$, such that applying the analytic continuation procedure above to $f_n(z)$ and to $h_n(z)$ yields two different, finite values?


Answer



The way you have the question written, the procedure can absolutely lead to different, finite, results depending on one's choice of the $f_n(z)$. Take the simple example of $1-1+1-1+\ldots$. The most obvious possibility is to take $f_n(z)=\frac{(-1)^n}{(z+1)^n}$ (i.e., a geometric series), in which case

$$
g(z)=\sum_{n=0}^{\infty}f_n(z)=\sum_{n=0}^{\infty}\frac{(-1)^n}{(z+1)^{n}}=\frac{1}{1+\frac{1}{z+1}}=\frac{z+1}{z+2},
$$
where the sum converges for $|z+1|>1$, and $g(0)=1/2$. But if you don't insist on the terms forming a power series, then there are other possibilities. For instance, let $f_{2m}(z)=(m+1)^z$ and $f_{2m+1}(z)=-(m+1)^z$ (i.e., zeta-regularize the positive and negative terms separately); then $g(z)=0$ everywhere, where the sum converges for $\Re(z) < -1$ and is analytically continued to $z=0$.



By taking an appropriate linear combination of the first and second possibilities, you can get $1-1+1-1+\ldots$ to equal any value at all. Specifically, taking
$$
f_n(z)=(-1)^n \left(\frac{2\beta}{(z+1)^n}+(1-2\beta)\left\lceil\frac{n+1}{2}\right\rceil^z\right),
$$
you find $g(z)=2\beta(z+1)/(z+2)$, convergent in an open region of the left half-plane, and $g(0)=\beta$.
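
A small check of the first regulator (illustrative sketch): at $z=1$, which lies inside the region of convergence $|z+1|>1$, the partial sums of $\sum_n f_n(z)$ match the closed form $g(z)=(z+1)/(z+2)$.

```python
# Geometric regulator at z = 1: sum of (-1)^n / (z+1)^n vs. (z+1)/(z+2).
z = 1.0
partial = sum((-1) ** n / (z + 1) ** n for n in range(200))
print(partial, (z + 1) / (z + 2))  # both ~0.6667
```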



trigonometry - Limits of cosine and sine





When $\theta$ is very small, why is $\sin \theta$ approximately $\theta$ and $\cos\theta$ approximately $1$? Is it related to limits, or can we prove it simply by using diagrams?


Answer



On the unit circle, $\theta$ is the length of the arc (as well as the angle subtended by that arc). (Thus, the circumference of the unit circle is $2\pi$.) Meanwhile, $\cos\theta$ is the length of the $X$ intercept, and $\sin\theta$ is the length of the $Y$ intercept.



Look at the following diagram:
[diagram: the unit circle with a point P, the arc of length $\theta$, and its $X$ and $Y$ intercepts]



You can now easily visualize that when Point P approaches closer to $(1,0)$, then $\theta \rightarrow \ 0$. At this time, the arc in question will become almost a vertical line, and the $Y$ intercept of the arc is almost the same length as the arc.



Hence as $\theta \rightarrow \ 0$ then $\sin\theta \rightarrow \theta$




And, at that time, the length of the $X$ intercept will get closer and closer to $1$.



Hence as $\theta \rightarrow \ 0$ then $\cos\theta \rightarrow 1$



Also, from this figure, you can easily visualize that when Point P approaches $(0,1)$, the $Y$ intercept will approach $1$ and the $X$ intercept will have same length as the length of the remaining part of the arc (from point P to point $(0,1)$)
which is $(\frac{\pi}{2} - \theta)$. (Remember that total length of the arc from $(1,0)$ to $(0,1)$ is $\frac{\pi}{2}$).



Thus, we have:




$\theta \rightarrow \frac{\pi}{2}$ then $\sin\theta \rightarrow 1$, and



$\theta \rightarrow \frac{\pi}{2}$ then $\cos\theta \rightarrow (\frac{\pi}{2}-\theta)$
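
These limits are also easy to see numerically; a minimal sketch:

```python
import math

# As theta -> 0, sin(theta)/theta -> 1 and cos(theta) -> 1.
for theta in (0.5, 0.1, 0.01, 0.001):
    print(theta, math.sin(theta) / theta, math.cos(theta))
```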


Power of 2 with equal number of decimal digits?


Does there exist an integer $n$ such that the decimal representation of $2^n$ have an equal number of decimal digits $\{0,\dots,9\}$, each appearing 10% of the time?


The closest I could find was $n=1,287,579$ of which $2^n$ has 387,600 digits broken down as


0  38,808   10.012%

1 38,735 9.993%
2 38,786 10.007%
3 38,751 9.997%
4 38,814 10.014%
5 38,713 9.987%
6 38,731 9.992%
7 38,730 9.992%
8 38,709 9.986%
9 38,823 10.016%

Answer




No. If each digit appears $x$ times, then the sum of all the digits is $45x$, which is divisible by $9$; since a number is divisible by $9$ exactly when its digit sum is, this would give $9\mid 2^n$, and in particular $3\mid 2^n$, which cannot be the case.
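
The argument can be checked mechanically: $2^n \bmod 9$ cycles through $2,4,8,7,5,1$ and is never $0$, while a digit-balanced number would have digit sum $45x \equiv 0 \pmod 9$.

```python
# 2^n mod 9 is never 0, so 9 (hence 3) never divides 2^n,
# while a digit-balanced number must be divisible by 9.
residues = {pow(2, n, 9) for n in range(1, 1000)}
print(sorted(residues))  # [1, 2, 4, 5, 7, 8]

# Digit sum and the number itself agree mod 9 (checked on small powers):
assert all(sum(map(int, str(2 ** n))) % 9 == pow(2, n, 9) for n in range(1, 50))
```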


real analysis - Show $\left(1+\frac{1}{3}-\frac{1}{5}-\frac{1}{7}+\frac{1}{9}+\frac{1}{11}-\cdots\right)^2 = 1+\frac{1}{9}+\frac{1}{25}+\frac{1}{49} + \cdots$



Last month I was calculating $\displaystyle \int_0^\infty \frac{1}{1+x^4}\, dx$ when I stumbled on the surprising identity:




$$\sum_{n=0}^\infty (-1)^n\left(\frac{1}{4n+1} +\frac{1}{4n+3}\right) = \frac{\pi}{\sqrt8}$$



and I knew



$$\sum_{n=0}^\infty \frac{1}{(2n+1)^2} = \frac{\pi^2}{8}$$



So if I could find a proof that $$\left(\sum_{n=0}^\infty (-1)^n\left(\frac{1}{4n+1} +\frac{1}{4n+3}\right)\right)^2 = \sum_{n=0}^\infty \frac{1}{(2n+1)^2}$$ then this could be a new proof that $\zeta(2)=\frac{\pi^2}{6}$. I've thought over this for almost a month and I'm no closer on showing this identity.



Note: Article on the multiplication of conditionally convergent series: http://www.jstor.org/stable/2369519


Answer




Let $a_k = (-1)^k \left(\frac{1}{4k+1} + \frac{1}{4k+3}\right)$ and $b_k = \frac{1}{(4k+1)^2} + \frac{1}{(4k+3)^2}$. The goal is to show that: $$ \left(\sum_{i=0}^\infty a_i\right)^2 = \sum_{i=0}^\infty b_i $$
The key observation that I missed on my previous attempt is that: $$ \sum_{i=0}^n a_i = \sum_{i=-n-1}^n \frac{(-1)^i}{4i+1} $$ This transformation allows me to then mimic the proof that was suggested in the comments by @user17762.
\begin{align*} \left(\sum_{i=0}^n a_i\right)^2 - \sum_{i=0}^n b_i
&= \left(\sum_{i=-n-1}^n \frac{(-1)^i}{4i+1}\right)^2 - \sum_{i=-n-1}^n \frac{1}{(4i+1)^2} \\
&= \sum_{\substack{i,j=-n-1 \\ i \neq j}}^n \frac{(-1)^i}{4i+1}\frac{(-1)^j}{4j+1} \\
&= \sum_{\substack{i,j=-n-1 \\ i \neq j}}^n \frac{(-1)^{i+j}}{4j-4i}\left(\frac{1}{4i+1}-\frac{1}{4j+1} \right) \\
&= \sum_{\substack{i,j=-n-1 \\ i \neq j}}^n \frac{(-1)^{i+j}}{2j-2i} \cdot \frac{1}{4i+1} \\
&= \frac{1}{2}\sum_{i=-n-1}^n \frac{(-1)^i}{4i+1} \sum_{\substack{j=-n-1 \\ i \neq j}}^n \frac{(-1)^j}{j-i} \\
&= \frac{1}{2}\sum_{i=-n-1}^n \frac{(-1)^i }{4i+1}c_{i,n} \\
&= \frac{1}{2}\sum_{i=0}^n a_i \,c_{i,n}
\end{align*}
Where the last equality follows from $c_{i,n} = c_{-i-1, n}$. Since $c_{i,n}$ is a partial alternating harmonic sum, it is bounded by its largest entry in the sum: $\left| c_{i,n} \right| \le \frac{1}{n-i+1}$. We also know that $\left|a_i\right| \le \frac{2}{4i+1}$. Apply these two inequalities to get:
\begin{align*}\left| \left(\sum_{i=0}^n a_i\right)^2 - \sum_{i=0}^n b_i \right| &\le \frac{1}{2} \sum_{i=0}^n \frac{2}{4i+1} \cdot \frac{1}{n-i+1} \\ &\le \sum_{i=0}^n \frac{1}{4n+5}\left( \frac{4}{4i+1} + \frac{1}{n-i+1} \right) \\ &\le \frac{1}{4n+5}\left( 5 + \ln(4n+1) +\ln(n+1)\right) \\
& \to 0 ~\text{ as }~ n \to \infty
\end{align*}
This concludes the proof. In fact, with the same idea, you can prove this general family of identities: Fix an integer $m \ge 3$, then:



\begin{align*} & \left( 1 + \frac{1}{m-1} - \frac{1}{m+1} - \frac{1}{2m-1} + \frac{1}{2m+1} + \frac{1}{3m-1} - \cdots \right)^2 \\
=& ~ \left(\sum_{i=-\infty}^\infty \frac{(-1)^i}{im+1}\right)^2 \\
=& ~ \sum_{i=-\infty}^\infty \frac{1}{(im+1)^2} \\
=& ~ 1 + \frac{1}{(m-1)^2} + \frac{1}{(m+1)^2} + \frac{1}{(2m-1)^2} + \frac{1}{(2m+1)^2} + \frac{1}{(3m-1)^2} + \cdots \\
=& ~ \left(\frac{\frac{\pi}{m}}{\sin\frac{\pi}{m}}\right)^2 \end{align*}
The last equality follows from the comment by @Lucian.
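
A numerical check of the identity (illustration only): squaring a long partial sum of the alternating series reproduces $\pi^2/8$ to several digits.

```python
import math

# Square the partial alternating sum and compare with pi^2 / 8.
N = 100_000
S = sum((-1) ** k * (1 / (4 * k + 1) + 1 / (4 * k + 3)) for k in range(N))
print(S ** 2, math.pi ** 2 / 8)  # both ~1.2337
```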


real analysis - How do I compute $\lim_{x \to 0}{(\sin(x) + 2^x)^{\frac{\cos x}{\sin x}}}$ without L'Hopital's rule?


What I've tried so far is to use the exponent and log functions: $$\lim_{x \to 0}{(\sin(x) + 2^x)^\frac{\cos x}{\sin x}}= \lim_{x \to 0}e^ {\ln {{(\sin(x) + 2^x)^\frac{\cos x}{\sin x}}}}=\lim_{x \to 0}e^ {\frac{1}{\tan x}{\ln {{(\sin(x) + 2^x)}}}}$$.


From here I used the expansion for $\tan x$ but the denominator turned out to be zero. I also tried expanding $\sin x$ and $\cos x$ with the hope of simplifying $\frac{\cos x}{\sin x}$ to a constant term and a denominator without $x$ but I still have denominators with $x$.


Any hint on how to proceed is appreciated.


Answer



Take the logarithm and use standard first order Taylor expansions: $$ \lim_{x\to0} \frac{\log\bigl(\sin(x)+2^x\bigr)}{\tan(x)} =\lim_{x\to0} \frac{\log\bigl(\sin(x)+2^x\bigr)}{x+o(x)} =\lim_{x\to0} \frac{x+\log(2)x+o(x)}{x+o(x)} = 1+\log(2). $$ Then $$ \lim_{x\to0} \bigl(\sin(x)+2^x\bigr)^{\cot(x)} = e^{1+\log(2)} = 2e. $$



EDIT


Maybe it's important to clarify why $\log\bigl(\sin(x)+2^x\bigr)=x+\log(2)x+o(x)$. I'm using the following facts:


  • $\log(1+t) = t+o(t)$ as $t\to0$,


  • $\sin(x)+2^x = 1+x+\log(2)x+o(x)$ as $x\to0$.
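
A quick numerical confirmation of the value $2e$ (the evaluation point $x=10^{-5}$ is chosen arbitrarily):

```python
import math

# Evaluate (sin x + 2^x)^(cos x / sin x) near 0 and compare with 2e.
x = 1e-5
val = (math.sin(x) + 2 ** x) ** (math.cos(x) / math.sin(x))
print(val, 2 * math.e)  # both ~5.44
```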

Friday, February 12, 2016

math history - Why two symbols for the Golden Ratio?



Why is it that both
$\phi$
and
$\tau$
are used to designate the Golden Ratio
$\frac{1+\sqrt5}2?$


Answer



The Golden Ratio or Golden Cut is the number
$$\frac{1+\sqrt{5}}{2}$$
which is usually denoted by phi ($\phi$ or $\varphi$), but also sometimes by tau ($\tau$).



Why $\phi$ : Phidias (Greek: Φειδίας) was a Greek sculptor, painter, and architect. So $\phi$ is the first letter of his name.





The symbol $\phi$ ("phi") was apparently first used by Mark Barr at the beginning of the 20th century in commemoration of the Greek sculptor Phidias (ca. 490-430 BC), who a number of art historians claim made extensive use of the golden ratio in his works (Livio 2002, pp. 5-6).




Why $\tau$ : The golden ratio or golden cut is sometimes named after the Greek word τομή, meaning "cut" or "section", so again the first letter is taken: $\tau$.



Source: The Golden Ratio: The Story of Phi, the World's Most Astonishing Number by Mario Livio; MathWorld


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...