Friday, September 30, 2016

trigonometry - Trigonometric identity involving double angles



If $\alpha$ and $\beta$ are acute angles and $\displaystyle{\cos2\alpha=\frac{3\cos2\beta-1}{3-\cos2\beta}}$, then prove that $\displaystyle{\tan\alpha=\sqrt{2}\tan\beta}$.



I tried this question by taking the formula of $\cos2\alpha$ in terms of $\tan$ (which is of degree two) but I couldn't prove it. Please suggest some hints.


Answer



Use the identity $\cos2\theta=\dfrac{1-\tan^2\theta}{1+\tan^2\theta}$ (a Weierstrass-type substitution) on both sides:




$$\dfrac{1-\tan^2\alpha}{1+\tan^2\alpha}=\frac{3\cdot\dfrac{1-\tan^2\beta}{1+\tan^2\beta}-1}{3-\dfrac{1-\tan^2\beta}{1+\tan^2\beta}}$$



$$\implies\dfrac{1-\tan^2\alpha}{1+\tan^2\alpha}=\frac{2-4\tan^2\beta}{2+4\tan^2\beta}$$



Using componendo and dividendo, $$\frac{\tan^2\alpha}1=\frac{4\tan^2\beta}2=2\tan^2\beta,$$ and since $\alpha$ and $\beta$ are acute, taking positive square roots gives $\tan\alpha=\sqrt2\,\tan\beta$.
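The identity can also be sanity-checked numerically; here is a small Python sketch (the variable names are mine) that derives $\alpha$ from the given relation for one acute $\beta$ and compares both sides:

```python
import math

# Pick an acute beta, derive alpha from cos(2a) = (3cos(2b) - 1)/(3 - cos(2b)),
# then check tan(alpha) = sqrt(2) * tan(beta).
beta = 0.4  # any acute angle works
cos2a = (3 * math.cos(2 * beta) - 1) / (3 - math.cos(2 * beta))
alpha = math.acos(cos2a) / 2  # alpha is acute, so this branch is valid

print(math.tan(alpha), math.sqrt(2) * math.tan(beta))  # the two values agree
```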


complex analysis - Intuition behind Euler's formula










Hi, I've been curious for quite a long time whether it is actually possible to have an intuitive understanding of Euler's apparently magical formula: $$e^{ \pm i\theta } = \cos \theta \pm i\sin \theta$$



I've obviously seen the Taylor series and differential equation based proofs, and perhaps I'm just going to have to accept that it's not possible to have an intuition for what it means to raise a number to an imaginary power. I obviously realise that the formula implies that an exponential with a variable imaginary part can be visualised as a complex function going around in a unit circle about the origin of the complex plane. But WHY is this? And why is $e$ so special that it moves at just a fast enough rate so that the argument of the exponential is equal to the arc length of the path made by the locus (i.e. the angle in radians we've moved around the circle)? Is there any way anyone out there can 'understand' this?




Thank you!


Answer



If I recall correctly from reading Analysis of the Infinite (a very nice book, at least Volume $1$ is), Euler got it from looking at
$$\left(1+\frac{i}{\infty}\right)^{\infty}$$
whose expansion is easy to find using the Binomial Theorem with exponent $\infty$.
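Euler's heuristic can be seen numerically: $(1+i\theta/n)^n$ approaches $\cos\theta+i\sin\theta$ as $n$ grows. A quick Python sketch (my own check, not Euler's):

```python
import cmath

# (1 + i*theta/n)^n approaches cos(theta) + i*sin(theta) as n grows,
# echoing Euler's (1 + i/infinity)^infinity heuristic.
theta = 1.0
n = 10**6
approx = (1 + 1j * theta / n) ** n
exact = cmath.exp(1j * theta)
print(approx, exact)  # the two values agree to about 6 decimal places
```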



There is a nice supposed quote from Euler, which can be paraphrased as "Sometimes my pencil is smarter than I am." He freely accepted the results of his calculations. But of course he was Euler.


probability theory - Proof that $E(F_X(X/\sigma))=\frac12$ for every positive $\sigma$



Define $X$ to be continuous random variable symmetric about zero with cdf $F_X$ and let $\sigma > 0$ denote a constant. Now show the following:
$$ E\left[F_X\left(\frac{X}{\sigma}\right)\right] = 0.5$$
How can one prove this claim? Since the cdf $F_X$ isn't necessarily linear, we can't place the expectation into the cdf, which would render the problem trivial. Additionally, I've concluded that the cdf is convex from $-\infty$ to $0$ and concave from $0$ to $\infty$ as it is symmetric about zero. This means we can't use Jensen's inequality either. What am I missing?


Answer



Let $f$ be the PDF of $X$, which should be symmetric about the origin; i.e. $f(-x)=f(x)$. Then, the CDF is
$$
F(x)=\int_{-\infty}^x f(t)\,\mathrm{d}t\tag{1}
$$

The symmetry of $f$ means that
$$
\begin{align}
F(-x)
&=\int_{-\infty}^{-x}f(t)\,\mathrm{d}t\\
&=\int_x^\infty f(-t)\,\mathrm{d}t\\
&=\int_x^\infty f(t)\,\mathrm{d}t\\
&=\int_{-\infty}^\infty f(t)\,\mathrm{d}t-\int_{-\infty}^xf(t)\,\mathrm{d}t\\[4pt]
&=1-F(x)\tag{2}
\end{align}

$$
The expected value is
$$
\begin{align}
E(F(X/\sigma))
&=\int_{-\infty}^\infty f(x)F(x/\sigma)\,\mathrm{d}x\tag{3}\\
&=\int_{-\infty}^\infty f(-x)F(-x/\sigma)\,\mathrm{d}x\tag{4}\\
&=\int_{-\infty}^\infty f(x)(1-F(x/\sigma))\,\mathrm{d}x\tag{5}\\
&=\frac12\int_{-\infty}^\infty f(x)\,\mathrm{d}x\tag{6}\\
&=\frac12\tag{7}

\end{align}
$$
Explanation:
$(3)$: formula for expected value
$(4)$: substitute $x\mapsto-x$
$(5)$: symmetry of $f$ and $(2)$
$(6)$: average $(3)$ and $(5)$
$(7)$: $f$ is a PDF
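A Monte Carlo spot-check of the result, using a standard normal for $X$ (my choice of symmetric distribution; the helper `Phi` is the normal CDF written via `erf`):

```python
import math, random

# Monte Carlo check of E[F(X/sigma)] = 1/2 for a standard normal X (symmetric about 0).
def Phi(x):  # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

random.seed(0)
sigma = 2.5  # any positive constant; the answer does not depend on it
est = sum(Phi(random.gauss(0, 1) / sigma) for _ in range(200_000)) / 200_000
print(est)  # close to 0.5
```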


real analysis - Graph of discontinuous linear function is dense




$f:\mathbb{R}\rightarrow\mathbb{R}$ is a function such that for all $x,y$ in $\mathbb{R}$, $f(x+y)=f(x)+f(y)$. If $f$ is continuous, then of course it has to be linear. But here $f$ is NOT continuous. Then show that the set $\{(x,f(x)) : x \in \mathbb{R}\}$ is dense in $\mathbb{R}^2$.


Answer



Let $\Gamma$ be the graph.



If $\Gamma$ is contained in a $1$-dimensional subspace of $\mathbb R^2$, then it in fact coincides with that line. Indeed, the line will necessarily be $L=\{(\lambda,\lambda f(1)):\lambda\in\mathbb R\}$, and for all $x\in\mathbb R$ the line $L$ contains exactly one element whose first coordinate is $x$, so that $\Gamma=L$. This is impossible, because it clearly implies that $f$ is continuous.



We thus see that $\Gamma$ contains two points of $\mathbb R^2$ which are linearly independent over $\mathbb R$, call them $u$ and $v$.



Since $\Gamma$ is a $\mathbb Q$-subvector space of $\mathbb R^2$, it contains the set $\{au+bv:a,b\in\mathbb Q\}$, and it is obvious that this is dense in the plane.



functions - Is this a correct bijection between $(0,1]$ and $[0,1]$?



I need to give an explicit bijection between $(0, 1]$ and $[0,1]$ and I'm wondering if my bijection/proof is correct. Using the hint that was given, I constructed the following function $f: (0, 1] \to [0,1]$:
$$

x \mapsto \left\{ \begin{array}{ll} 2 - 2^{-i} - 2^{-i-1} - x & \text{if } x \in (1-2^{-i}, 1-2^{-i-1}]\text{ for an } i \in \mathbb{N}_0 \\
1 & \text{if } x = 1 \end{array}\right.
$$



It's easy to see that for every $x \in (0, 1)$, there exists such an $i$.



Now define $\tilde{f}: [0,1] \to (0,1]$ with
$$
x \mapsto \left\{ \begin{array}{ll} 2 - 2^{-i} - 2^{-i-1} - x & \text{if } x \in [1-2^{-i}, 1-2^{-i-1})\text{ for an } i \in \mathbb{N}_0 \\
1 & \text{if } x = 1 \end{array}\right.

$$



I want to prove that $\tilde{f}(f(x)) = f(\tilde{f}(x)) = x$, so it has an inverse and therefore is a bijection. The case $x=1$ is trivial, so assume that $x \in (0,1)$ with $x \in (1-2^{-i}, 1-2^{-i-1}]$ for some $i \in \mathbb{N}_0$. This interval has length $1-2^{-i-1} - (1-2^{-i}) = 2^{-i-1}$, so we can write $x = 1-2^{-i} + \epsilon\cdot 2^{-i-1}$ for some $\epsilon \in (0, 1]$. We now calculate $f(x)$:
\begin{align*}
f(x)
&= 2 - 2^{-i} - 2^{-i-1} - x\\
&= 2 - 2^{-i} - 2^{-i-1} - (1-2^{-i} + \epsilon\cdot 2^{-i-1})\\
&= 1 - 2^{-i-1}(1+\epsilon).
\end{align*}
We conclude that $f(x) \in [1-2^{-i}, 1-2^{-i-1})$. We now use the definition of $\tilde{f}$, so if we calculate $\tilde{f}(f(x))$, we get

\begin{align*}
\tilde{f}(f(x))
&= 2 - 2^{-i} - 2^{-i-1} - f(x) \\
&= 2 - 2^{-i} - 2^{-i-1} - (2-2^{-i} - 2^{-i-1} - x) \\
&= x.
\end{align*}



We conclude that $f$ has an inverse. Using exactly the same reasoning, we get that $f(\tilde{f}(x)) = x$ for all $x \in [0,1]$. Therefore its inverse exists and it has to be a bijection.



I know there are less cumbersome methods of proving this fact, but as of now this is the only thing I can come up with.



Answer



It seems fine to me, I think, although it's a complicated enough construction that I'm not totally convinced of my surety.



If you want an easier method, by the way: let $f: (0, 1] \to [0, 1]$ by the following construction. Order the rationals in $(0, 1]$ as $q_1, q_2, \dots$. Then define $f(x)$ by "if $x$ is irrational, let $f(x) = x$; otherwise, let $f(q_i) = q_{i-1}$, and let $f(q_1) = 0$". We basically select some countable subset, and prepend 0 to it. You can do this with any countable subset: it doesn't have to be, or be a subset of, $\mathbb{Q} \cap (0, 1]$. If you prefer, for instance, you could take $\frac{1}{n}$ as the $q_n$.
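Taking $q_n=\frac1n$ as suggested, the "prepend $0$" map can be sketched in code; the function name and the `Fraction` encoding are mine, and of course a program can only exercise the rational branch:

```python
from fractions import Fraction

# Sketch of the "shift a countable subset" bijection f : (0,1] -> [0,1],
# taking q_n = 1/n as the countable set: f(1) = 0, f(1/n) = 1/(n-1) for n >= 2,
# and f(x) = x for every other x.
def f(x):
    if not 0 < x <= 1:
        raise ValueError("x must lie in (0, 1]")
    if isinstance(x, Fraction) and x.numerator == 1:
        n = x.denominator
        return Fraction(0) if n == 1 else Fraction(1, n - 1)
    return x  # all other points are fixed

print(f(Fraction(1)), f(Fraction(1, 2)), f(Fraction(1, 3)), f(Fraction(3, 4)))  # 0 1 1/2 3/4
```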


Thursday, September 29, 2016

calculus - Evaluate limit $\lim_{x \rightarrow 0}\left(\frac 1x- \frac 1{\sin x}\right)$

Can someone provide me with some hint how to evaluate this limit? $$ \lim_{x \rightarrow 0}\left (\frac 1x- \frac 1{\sin x} \right ) $$ I tried l'Hôpital's rule but it didn't work.

soft question - Book recommendations for highschool algebra for concepts and hard problems



I am looking for a book recommendations for learning algebra for high school.




Usually my exams(national level competitions) may even ask some of the things that are a bit beyond the syllabus, and have really hard problems. So I want a book that goes over both theory in some detail and also contains hard problems and maybe a few tricks.



I am looking for a book in algebra that goes over topics like:



[This is the prescribed syllabus]




Algebra



Algebra of complex numbers, addition, multiplication, conjugation, polar representation, properties of modulus and principal argument, triangle inequality, cube roots of unity, geometric interpretations.




Quadratic equations with real coefficients, relations between roots and coefficients, formation of quadratic equations with given roots, symmetric functions of roots.



Arithmetic, geometric and harmonic progressions, arithmetic, geometric and harmonic means, sums of finite arithmetic and geometric progressions, infinite geometric series, sums of squares and cubes of the first n natural numbers.



Logarithms and their properties.



Permutations and combinations, binomial theorem for a positive integral index, properties of binomial coefficients





Please can someone help me ?? :)



Thank You


Answer



There are relics from the past which go from the very beginning up to topics such as complex numbers in depth:

G. Chrystal, Algebra: An Elementary Textbook, volumes 1 and 2.

Hall and Knight, Elementary Algebra, volumes 1 and 2 (also published as "Elementary Algebra for Schools").

For a more advanced standpoint, you can read B. D. Bunday and H. Mulholland, "Pure Mathematics for Advanced Level". In my opinion, though, you should read Chrystal's algebra first: it is one of those books that never age, and it deals with many interesting topics (even series, interesting identities and so on), so you should do fine with it alone. Since the book is quite old, you can find it online.


analysis - Bijection from $\mathbb{R}$ to $\mathbb{R} \times \mathbb{R}$?








I know it's possible to produce a bijection from $\mathbb{Z}$ to $\mathbb{Z}\times\mathbb{Z}$, but is it possible to do so from $\mathbb{R}$ to $\mathbb{R} \times \mathbb{R}$?

Wednesday, September 28, 2016

online resources - Overview of basic facts about Cauchy functional equation

The Cauchy functional equation asks about functions $f \colon \mathbb R \to \mathbb R$ such that
$$f(x+y)=f(x)+f(y).$$
It is a very well-known functional equation, which appears in various areas of mathematics ranging from exercises in freshman classes to constructing useful counterexamples for some advanced questions. Solutions of this equation are often called additive functions.



Also a few other equations related to this equation are often studied. (Equations which can be easily transformed to Cauchy functional equation or can be solved by using similar methods.)



Is there some overview of basic facts about Cauchy equation and related functional equations - preferably available online?

real analysis - T/F: a smooth function that grows faster than any linear function grows faster than $x^{1+\epsilon}$


Prove or find a counterexample to the claim that a smooth function that grows faster than any linear function grows faster than $x^{1+\epsilon}$ for some $\epsilon>0$.


My attempt: I understand that the first part of the problem claims $\lim_{x\rightarrow \infty}\frac{g(x)}{kx} = \infty, \forall k>0$. We want to show, then, that $\exists \epsilon >0$ and constant $l>0$ such that $\lim_{x\rightarrow \infty}\frac{g(x)}{lx^{1+\epsilon}} = \infty$.


I've tried using the definition of limits, but I get stuck trying to bound the function $\frac{1}{x^\epsilon}$. Also, I've tried using L'Hopital's rule to no avail. Any ideas?


Any help is appreciated!


Answer




Hint: It is false. Find a counterexample.


Followup hint: (place your mouse on the hidden text to show it)



The function $f\colon(0,\infty)\to\mathbb{R}$ defined by $f(x) = x\ln x$ is such a counterexample.



Followup followup hint: (pretty much the solution, with some details to fill in; place your mouse on the hidden text to show it)



For any $a>0$, $\frac{x\ln x}{a x} = \frac{1}{a}\ln x \xrightarrow[x\to\infty]{} \infty$. However, for any fixed $\epsilon > 0$, $$\frac{x\ln x}{x^{1+\epsilon}} = \frac{\ln x}{x^\epsilon}=\frac{1}{\epsilon}\frac{\ln(x^\epsilon)}{x^\epsilon} = \frac{1}{\epsilon}\frac{\ln t}{t}$$ for $t=x^\epsilon \xrightarrow[x\to\infty]{}\infty$.
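The two limits in the last hint are easy to watch numerically; a small sketch (parameter values are my own choices):

```python
import math

# Numeric sanity check of the counterexample f(x) = x*ln(x):
# f(x)/(a*x) grows without bound for any a > 0, while f(x)/x^(1+eps) tends to 0.
a, eps = 100.0, 0.5
xs = [1e3, 1e6, 1e9]
linear_ratios = [(x * math.log(x)) / (a * x) for x in xs]
power_ratios = [(x * math.log(x)) / x ** (1 + eps) for x in xs]
print(linear_ratios)  # increasing
print(power_ratios)   # decreasing toward 0
```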



real analysis - Show that $\lim\limits_{n \to \infty} \frac{(n!)^{1/n}}{n}= \frac{1}{e}$





Show that $$\lim_{n \to \infty} \left\{\frac{(n!)^{1/n}}{n}\right\} = \frac{1}{e}$$





What I did is to let $U_n = \dfrac{(n!)^{\frac{1}{n}}}{n}$ and $U_{n+1} = \dfrac{(n+1)!^{\frac{1}{n+1}}}{n+1}$. Then



$$\frac{ U_{n+1} }{U_n } = \frac{\frac{(n+1)!^{\frac{1}{n+1}}}{n+1}}{\frac{(n!)^{\frac{1}{n}}}{n}}$$



Next I just got stuck. Am I on the right track, or am I wrong doing this type of sequence?


Answer



Let $v_n = \frac{n!}{n^n } $ then $$ \frac{v_{n+1}}{v_n } =\frac{(n+1)! }{(n+1)^{n+1}} \cdot \frac{n^n }{n!} =\frac{n^n}{(n+1)^n }=\frac{1}{\left(1+\frac{1}{n}\right)^n}\to \frac{1}{e}$$ hence $$\frac{\sqrt[n]{n!} }{n} =\sqrt[n]{v_n} \to\frac{1}{e} .$$
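The convergence can be observed numerically; `lgamma` (i.e. $\ln\Gamma(n+1)=\ln n!$) avoids overflow for large $n$ (helper name is mine):

```python
import math

# Numeric check that (n!)^(1/n) / n approaches 1/e.
def u(n):
    return math.exp(math.lgamma(n + 1) / n) / n  # (n!)^(1/n) / n, computed via ln(n!)

for n in (10, 1000, 100000):
    print(n, u(n))
print(1 / math.e)  # the target value, about 0.3679
```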


calculus - Continuity is required for differentiability?



My professor emphasized that:


  1. Differentiability implies continuity and

  2. Continuity is required for differentiability.

Since a function like $\frac 1 x$ is differentiable but not continuous, I thought my professor simply forgot to say that the 2 rules only apply at a point, not an interval.


However, in the textbook, we were given the following questions and the corresponding solutions:


  1. If $f$ is differentiable and $f(-1)=f(1),$ then there is a number $c$ such that $|c|<1$ and $f'(c)=0.$ (true)

My solution: consider $f=\frac 1 {x^2}$, therefore it is false.


  1. If $f'(x)$ exists and is nonzero for all $x,$ then $f(1)\neq f(0).$ (true)

My solution: consider $f=\frac 1 {(x-0.5)^2}$, therefore it is false.


The textbook's answer only makes sense if differentiability implies continuity on an interval. So does differentiability imply continuity on an interval or is the textbook wrong?



Answer



The functions $f(x) = 1/x$ and $f(x) = 1/x^2$ are not defined at $0$. So in particular it makes no sense to think about continuity or differentiability at $0$. Both your statements hold only on intervals.


Differentiability does not imply continuity on an interval! Consider the somewhat artificial function defined as $0$ on the rationals and $x^2$ on the irrationals. It is continuous and differentiable at $0$, and neither continuous nor differentiable at any point of $\mathbb{R} \setminus \{0\}$.


Edit: I think I misunderstood the "on an interval" part. Anyway, the implication is pointwise.


integration - Why do we treat differential notation as a fraction in u-substitution method


How did we come to know that treating the differential notation as a fraction would help us find the integral? And how do we know this is valid?
How can $\frac{dy}{dx}$ be treated as a fraction?
I want to know how u-substitution came about and why the differential is treated as a fraction in it.


Answer



It doesn't necessarily need to be.


Consider a simple equation $\frac{dy}{dx}=\sin(2x+5)$ and let $u=2x+5$. Then $$\frac{du}{dx}=2$$ Traditionally, you will complete the working by using $du=2\cdot dx$, but if we were to avoid this, you could instead continue with the integral: $$\int\frac{dy}{dx}dx=\int\sin(u)dx$$ $$\int\frac{dy}{dx}dx=\int\sin(u)\cdot\frac{du}{dx}\cdot\frac{1}{2}dx$$ $$\int\frac{dy}{dx}dx=\frac{1}{2}\int\sin(u)\cdot\frac{du}{dx}dx$$ $$y=c-\frac{1}{2}\cos(u)$$ $$y=c-\frac{1}{2}\cos(2x+5)$$


But why is this? Can we justify separating the differentials like this? As Gerry Myerson has mentioned, it's a direct consequence of the chain rule:


$$\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$$ $$\int\frac{dy}{dx}dx=\int\frac{dy}{du}\frac{du}{dx}dx$$ But then if you 'cancel', it becomes $$\int\frac{dy}{dx}dx=\int\frac{dy}{du}du$$ Which is what you desired.
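The worked antiderivative $y = c - \frac12\cos(2x+5)$ can be checked without any symbolic machinery: its numerical derivative should match the integrand $\sin(2x+5)$ (function names below are mine):

```python
import math

# Check the antiderivative y = c - cos(2x+5)/2 by central-difference differentiation:
# dy/dx should equal the integrand sin(2x+5).
def y(x, c=0.0):
    return c - 0.5 * math.cos(2 * x + 5)

h = 1e-6
for x in (0.0, 1.0, -2.3):
    numeric = (y(x + h) - y(x - h)) / (2 * h)
    print(numeric, math.sin(2 * x + 5))  # the two columns agree
```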


Tuesday, September 27, 2016

sequences and series - Compute $1 \cdot \frac {1}{2} + 2 \cdot \frac {1}{4} + 3 \cdot \frac {1}{8} + \cdots + n \cdot \frac {1}{2^n} + \cdots$




I have tried to compute the first few terms to try to find a pattern but I got



$$\frac{1}{2}+\frac{1}{2}+\frac{3}{8}+\frac{4}{16}+\frac{5}{32}+\frac{6}{64}$$



but I still don't see any obvious pattern(s). I also tried to look for a pattern in the question, but I cannot see any pattern (possibly because I'm overthinking it?) Please help me with this problem.


Answer



$$I=\frac{1}{2}+\frac{2}{4}+\frac{3}{8}+\frac{4}{16}+\frac{5}{32}+\frac{6}{64}+\cdots$$

$$2I=1+1+\frac{3}{4}+\frac{4}{8}+\frac{5}{16}+\frac{6}{32}+\cdots$$
$$2I-I=1+\left(1-\frac 12 \right)+\left(\frac 34 -\frac 24 \right)+\left(\frac 48 -\frac 38 \right)+\left(\frac {5}{16} -\frac {4}{16} \right)+\cdots$$
$$I=1+\frac 12+\frac 14+\frac 18+\cdots=2$$
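A one-line numeric confirmation of the value obtained by the $2I-I$ trick:

```python
# Partial sums of sum n/2^n rapidly approach the closed-form value 2.
partial = sum(n / 2**n for n in range(1, 60))
print(partial)  # essentially 2 to machine precision
```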


calculus - Alternative way to prove $\lim_{n\to\infty}\frac{2^n}{n!}=0$?


It follows easily from the convergence of $\sum_{n=0}^\infty\frac{2^n}{n!}$ that $$ \lim_{n\to\infty}\frac{2^n}{n!}=0\tag{1} $$ Other than Stirling's formula, are there any "easy" alternatives to show (1)?


Answer



Yes: note that $$ 0\leq \frac{2^n}{n!}\leq 2\Big(\frac{2}{3}\Big)^{n-2}$$ for $n\geq 3$, and then use the squeeze theorem.
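The bound is easy to spot-check by brute force (my own check):

```python
import math

# Spot-check of the squeeze bound 0 <= 2^n/n! <= 2*(2/3)^(n-2) for n >= 3.
checks = [(n, 2**n / math.factorial(n), 2 * (2 / 3) ** (n - 2)) for n in range(3, 30)]
for n, ratio, bound in checks:
    print(n, ratio, bound)
```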


probability distributions - Binomial-Poisson limit




I want to show that if $Z_n$ has the binomial distribution with parameters $n$ and $\lambda/n$ with $\lambda$ fixed, then $Z_n $ converges in distribution to the Poisson distribution, parameter $\lambda$ as $n\rightarrow \infty$. How do I do this using characteristic functions?



Edit: I think the characteristic function of the binomial distribution is $(pe^{it}+(1-p))^n$ and that of the Poisson is $e^{\lambda(e^{it}-1)}$, but I don't know which limit to take.


Answer



So the characteristic function of $\text{B}(n,\lambda/n)$ is
$$ ((1-\lambda/n)+\lambda/n e^{it})^{n} = \left( 1 + \frac{1}{n} \lambda\left( e^{it}-1 \right) \right)^n. $$
Now use that
$$ \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n = e^x. $$
Then the convergence and uniqueness theorems for characteristic functions imply that $Z_n$ converges in distribution to $\text{Po}(\lambda)$.
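The convergence is also visible directly in the probability mass functions; a quick comparison for one choice of $\lambda$ and $n$ (values are mine):

```python
import math

# With p = lambda/n, binomial(n, p) probabilities approach Poisson(lambda) probabilities.
lam, n = 3.0, 10_000
pairs = []
for k in range(6):
    binom = math.comb(n, k) * (lam / n) ** k * (1 - lam / n) ** (n - k)
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    pairs.append((binom, poisson))
    print(k, binom, poisson)  # the two columns nearly coincide
```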



Find a bijective function between two sets

I want to find a bijective function from $(\frac{1}{2},1]$ into $[0,1]$. So, What is a bijective function $f:(\frac{1}{2},1]\to[0,1]$?

Monday, September 26, 2016

algebra precalculus - Find the $n^{th}$ term and the sum to $n$ terms of the following series



Find the $n^{th}$ term and sum to $n$ terms of the following series.
$$1\cdot3+2\cdot4+3\cdot5+\cdots$$



My Attempt:




Here,
$n^{th}$ term of $1+2+3+\cdots=n$



$n^{th}$ term of $3+4+5+\cdots=n+2$



Thus,



$n^{th}$ term of the series $1\cdot3+2\cdot4+3\cdot5+\cdots=n(n+2)$



$$t_n=n^2+2n$$




If $S_n$ be the sum to $n$ terms of the series then
$$S_n=\sum t_n$$
$$=\sum (n^2+2n)$$



How do I proceed?


Answer



\begin{align}\sum_{i=1}^n (i^2+2i) &=\sum_{i=1}^n i^2 + 2\sum_{i=1}^n i\\
&= \frac{n(n+1)(2n+1)}{6}+2\cdot \frac{n(n+1)}{2} \end{align}




You might want to factorize the terms to simplify things.
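Carrying out that simplification gives $S_n=\frac{n(n+1)(2n+7)}{6}$, which a brute-force loop confirms (the helper name is mine):

```python
# Brute-force check of S_n = sum of i(i+2) against the simplified closed form n(n+1)(2n+7)/6.
def S(n):
    return sum(i * (i + 2) for i in range(1, n + 1))

for n in (1, 2, 10, 100):
    print(n, S(n), n * (n + 1) * (2 * n + 7) // 6)  # the last two columns match
```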


elementary set theory - Show that $f^{-1}(A \cup B) = f^{-1}(A) \cup f^{-1}(B)$



Show that $f^{-1}(A\cup B) = f^{-1}(A)\cup f^{-1}(B)$ but not necessarily




$f^{-1}(A\cap B)=f^{-1}(A)\cap f^{-1}(B)$.



Let $S=A\cup B$



I know that $f^{-1}(S)=\{x:f(x)\in S\}$, assuming that $f$ is one to one.
Is this true: $\{x:f(x)\in S\}=\{x:f(x) \in A\}\cup\{x:f(x)\in B\}$?



Why doesn't the intersection work?



Sources : ♦ 2nd Ed, $\;$ P219 9.60(d), $\;$ Mathematical Proofs by Gary Chartrand,
♦ P214, $\;$ Theorem 12.4.#4, $\;$ Book of Proof by Richard Hammack,
♦ P257-258, $\;$ Theorem 5.4.2.#2(b), $\;$ How to Prove It by D Velleman.



Answer



Your exercise is incorrect.



$$\begin{align}f^{-1}[A\cap B] &:= \{x\in\text{dom}(f):f(x)\in A\cap B\}\\ &= \{x\in\text{dom}(f):f(x)\in A\text{ and }f(x)\in B\}\\ &= \{x\in\text{dom}(f):f(x)\in A\}\cap\{x\in\text{dom}(f):f(x)\in B\}\\ &=: f^{-1}[A]\cap f^{-1}[B].\end{align}$$



You'll proceed similarly to show that $f^{-1}[A\cup B] = f^{-1}[A]\cup f^{-1}[B],$ trading "and" for "or".






On the other hand, while we have $f[A\cup B]=f[A]\cup f[B]$ and $f[A\cap B]\subseteq f[A]\cap f[B],$ we don't generally have equality in the last case, unless $f$ is one-to-one. Pick any constant function on your personal favorite set of two or more elements, then choose two disjoint subsets $A$ and $B$ for an example where the inclusion is strict.
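The constant-map recipe can be made completely concrete with a tiny finite example (the map and sets below are my own choices):

```python
# Concrete check: preimages respect union and intersection, images need not.
f = {1: 'a', 2: 'a', 3: 'b'}   # f(1) = f(2) = 'a', so f is not one-to-one

def image(S):
    return {f[x] for x in S}

def preimage(S):
    return {x for x in f if f[x] in S}

A, B = {'a'}, {'a', 'b'}       # subsets of the codomain
assert preimage(A | B) == preimage(A) | preimage(B)
assert preimage(A & B) == preimage(A) & preimage(B)

C, D = {1}, {2}                # disjoint subsets of the domain with equal images
print(image(C & D), image(C) & image(D))  # set() vs {'a'}: the inclusion is strict
```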



probability - Die roll and coin flip



Suppose I roll a 4-sided die, then flip a fair coin a number of times corresponding to the die roll. Given that I got three heads on the coin flips, what is the probability the die score was a 4?




I recognize that Bayes Formula can be used to solve this but I'm a little stuck on how to apply it. I figured that because the coin flip got three heads, it's impossible the die score is 2 or less, so the die score MUST be greater than 2. But I'm not sure if that really applies.


Answer




I recognize that Bayes Formula can be used to solve this but I'm a little stuck on how to apply it. I figured that because the coin flip got three heads, it's impossible the die score is 2 or less, so the die score MUST be greater than 2. But I'm not sure if that really applies.




Right: given that you have three heads, you must have rolled 3 or 4 on the die, and flipped the coin that many times. Let $X$ denote the die result and $Y$ count the heads. Then:



$\mathsf P(X=4\mid Y=3) ~{~=~ \dfrac{\mathsf P(Y=3\mid X=4)\mathsf P(X=4)}{\mathsf P(Y=3\mid X=3)\mathsf P(X=3)+\mathsf P(Y=3\mid X=4)\mathsf P(X=4)}\\~=~ \dfrac{4\cdot 2^{-4}}{2^{-3}+4\cdot 2^{-4}}\\~=~\dfrac 23}$
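The same computation can be done exactly with rational arithmetic, summing over all four die outcomes (the zero-likelihood cases drop out automatically):

```python
from fractions import Fraction
from math import comb

# Exact Bayes computation: uniform prior over die results 1..4,
# and P(3 heads | X = m) = C(m, 3) / 2^m (which is 0 for m < 3).
prior = Fraction(1, 4)
likelihood = {m: Fraction(comb(m, 3), 2**m) for m in range(1, 5)}
evidence = sum(prior * likelihood[m] for m in range(1, 5))
posterior_4 = prior * likelihood[4] / evidence
print(posterior_4)  # 2/3
```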



linear algebra - Generalized eigenvalue problem of Hermitian matrix (exist complex eigenvalues)

Let $\mathbf{A}, \mathbf{B} \in \mathbb{C}^{n \times n}$ be Hermitian and invertible. Show that there exist eigenvalues $\lambda$ satisfying $\mathbf{Av} = \lambda \mathbf{Bv}$ that are not real.



My solution and question:



From the properties of Hermitian matrices we know that all eigenvalues of a Hermitian matrix are real. (Proof omitted.)



Then $\forall x \in \mathbb{C}^n$ we have $x^HAx = x^HP^HDPx = (Px)^HD(Px) = y^HDy = \lambda_1|y_1|^2 + \cdots + \lambda_n |y_n|^2$, which is a real number, where $D$ is a diagonal matrix and the second equality comes from the eigendecomposition $A = P^HDP$.



Back to the question, $Av = \lambda Bv \Rightarrow v^HAv = \lambda v^HBv$.




Since $\mathbf{A}$ and $\mathbf{B}$ are Hermitian, both quadratic forms should be real. Therefore $\lambda = \frac{v^HAv}{v^HBv}$ is real.



Is the original problem wrong? Could anyone help me out? Thanks in advance!
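For what it's worth, here is a small check with an example of my own (not from the post): when $B$ is indefinite, non-real eigenvalues do occur, because $v^HBv$ can vanish for an eigenvector $v$, so the ratio above is undefined. Take $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ and $B=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$, both Hermitian and invertible; the eigenvalues solve $\det(A-\lambda B)=-\lambda^2-1=0$:

```python
import cmath

# Roots of det(A - lam*B) = -lam^2 - 1 = 0 for the Hermitian pair
# A = [[0,1],[1,0]], B = [[1,0],[0,-1]]: solve the quadratic -lam^2 + 0*lam - 1.
a, b, c = -1.0, 0.0, -1.0
disc = cmath.sqrt(b * b - 4 * a * c)
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)  # purely imaginary: +i and -i
```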

Sunday, September 25, 2016

elementary set theory - How is raising 2 to the power of a cardinality equivalent to taking the power set?



I've read that for a set of cardinality $\aleph_n$, the cardinality of the power set of that set is $2^{\aleph_n}$, and the cardinality of the power set of that power set is $2^{2^{\aleph_n}}$, and so on. I understand the concept of a power set as the set of all subsets of a set, but I don't see how this is equivalent to this exponent definition. So could someone explain it?


Answer




If $\lambda$ and $\kappa$ are cardinals, $\lambda^\kappa$ represents the cardinality of the set of functions $f\!:A\to B$ where $A,B$ are fixed sets of cardinality $\kappa,\lambda$ respectively. (One needs to check this is independent of which specific sets $A,B$ we pick, of course.)



At least for finite numbers, this is something you may have encountered in a discrete mathematics context. For example, $9=3^2$ and indeed there are 9 functions $f$ from $\{0,1\}$ (a set of size $2$) to $\{a,b,c\}$ (a set of size 3), since there are 3 choices for what $f(0)$ is, and independently of this, there are 3 choices for what $f(1)$ is.



In this manner, $2^\kappa$ is the size of the set of functions $f\!:A\to \{0,1\}$, where $A$ is your favorite set of size $\kappa$. But these functions are precisely the characteristic functions of subsets of $A$: Given such an $f$, you identify it with the set of $a\in A$ such that $f(a)=1$. This gives us a bijection between the set of functions from $A$ to $\{0,1\}$ and the power set of $A$.


real analysis - Show that the limit of functions is continuous


Let $f_n$ be a sequence of not necessarily continuous functions $\mathbb{R} \rightarrow \mathbb{R}$ such that $f_n(x_n) \rightarrow f(x)$ whenever $x_n \rightarrow x$. Show that f is continuous.


What I am trying to do is to show that whenever we have $x \in \mathbb{R}$ and $x_n \rightarrow x$, then $f(x_n) \rightarrow f(x)$, when $n \rightarrow \infty$.


These types of things are usually showed by using the triangle inequality. I know I can make $|f(x) - f_n(x_n)|$ as small as possible by choosing a big enough n. I can also make $|f(x) - f_n(x)|$ as small as possible. But I am not able to combine these to prove that $|f(x) - f(x_n)|$ can be made as small as possible.


Answer



First note that the hypothesis implies that the $f_n$ converge pointwise to $f$. To see this, consider the constant sequence $\langle x_n:n\in\Bbb N\rangle$ where $x_n=x$ for each $n\in\Bbb N$: $$\langle f_n(x):n\in\Bbb N\rangle=\langle f_n(x_n):n\in\Bbb N\rangle\to f(x)\;.$$


Now suppose that $f$ is not continuous at $x$, and let $\langle x_n:n\in\Bbb N\rangle\to x$ be such that $\langle f(x_n):n\in\Bbb N\rangle$ does not converge to $f(x)$. Then there is an $\epsilon>0$ such that $|f(x_n)-f(x)|\ge\epsilon$ for infinitely many $n\in\Bbb N$, so you can find a subsequence $\langle x_{n_k}:k\in\Bbb N\rangle$ such that $|f(x_{n_k})-f(x)|\ge\epsilon$ for every $k\in\Bbb N$. Since $\langle x_{n_k}:k\in\Bbb N\rangle\to x$, you might as well assume from the start that you have a sequence $\langle x_n:n\in\Bbb N\rangle$ and an $\epsilon>0$ such that $\langle x_n:n\in\Bbb N\rangle\to x$ and $|f(x_n)-f(x)|\ge\epsilon$ for all $n\in\Bbb N$.


By hypothesis $\langle f_n(x_n):n\in\Bbb N\rangle\to f(x)$. Choose $n_0\in\Bbb N$ so that $|f_n(x_0)-f(x_0)|<\epsilon/2$ for all $n\ge n_0$; we can do this, since the $f_n$’s converge pointwise to $f$. Now choose $n_1>n_0$ so that $|f_n(x_1)-f(x_1)|<\epsilon/2$ for all $n\ge n_1$. Continue in this way to construct an increasing sequence $\langle n_k:k\in\Bbb N\rangle$ such that $|f_n(x_k)-f(x_k)|<\epsilon/2$ for all $n\ge n_k$.


Now form a new sequence $\langle y_n:n\in\Bbb N\rangle$ as follows:


$$y_n=\begin{cases} x_0,&\text{if }n\le n_0\\ x_k,&\text{if }n_{k-1}<n\le n_k\end{cases}$$

It’s not hard to see that $\langle y_n:n\in\Bbb N\rangle\to x$: it’s just the sequence $\langle x_n:n\in\Bbb N\rangle$ with each term repeated some finite number of times. Note that $y_{n_k}=x_k$ for every $k\in\Bbb N$. Thus, for each $k\in\Bbb N$ we have $f_{n_k}(y_{n_k})=f_{n_k}(x_k)$, which by the choice of $n_k$ implies that $|f_{n_k}(y_{n_k})-f(y_{n_k})|<\epsilon/2$.


Now $\langle f_n(y_n):n\in\Bbb N\rangle\to f(x)$, so $\langle f_{n_k}(y_{n_k}):k\in\Bbb N\rangle\to f(x)$, and there must be a $k\in\Bbb N$ such that $|f_{n_k}(y_{n_k})-f(x)|<\epsilon/2$. But then


$$|f(y_{n_k})-f(x)|\le|f(y_{n_k})-f_{n_k}(y_{n_k})|+|f_{n_k}(y_{n_k})-f(x)|<\frac{\epsilon}2+\frac{\epsilon}2=\epsilon\;,$$


which is a contradiction: $|f(y_{n_k})-f(x)|=|f(x_k)-f(x)|\ge\epsilon$.


Thus, $f$ must in fact be continuous at $x$.


sequences and series - Evaluating $\sum_{n=1}^{\infty}\frac{1}{n(2n+1)(2n-1)}$


According to Wolfram Alpha, $$\sum_{n=1}^{\infty}\frac{1}{n(2n+1)(2n-1)}=\ln(4)-1$$ However, I am not sure how to evaluate this series.


Attempt $$\frac{1}{n(2n+1)(2n-1)}=\frac{A}{n}+\frac{B}{2n+1}+\frac{C}{2n-1}$$ $$1=A(2n+1)(2n-1)+Bn(2n-1)+Cn(2n+1)$$


Then, I got $$ \sum_{n=1}^{\infty}\frac{1}{n(2n+1)(2n-1)} = \sum_{n=1}^{\infty}\frac{-1}{n}+\frac{1}{2n+1}+\frac{1}{2n-1}\\ $$ I tried to view this as a telescoping series, but it did not turn out good. Can I have a hint?


Answer



$$ \sum_{n\geq 1} \left( \frac{1}{2n-1} - \frac{1}{2n}\right) - \sum_{n\geq 1} \left( \frac{1}{2n} - \frac{1}{2n+1}\right) = \ln 2 - (1-\ln 2) $$
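A numeric check that the partial sums indeed approach $\ln 4 - 1$:

```python
import math

# Partial sums of 1/(n(2n+1)(2n-1)) converge to ln(4) - 1; terms decay like 1/(4n^3),
# so 10^5 terms are plenty.
partial = sum(1 / (n * (2 * n + 1) * (2 * n - 1)) for n in range(1, 100_000))
print(partial, math.log(4) - 1)  # both about 0.386294
```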


real analysis - Is a function globally Lipschitz continuous and $\mathcal{C}^1$ if and only if it is $\mathcal{C}^1$ and its total derivative is bounded?




Is $f:\mathbb{R}^n\rightarrow\mathbb{R}^n$ globally Lipschitz continuous, i.e. there exists an $L>0$ such that



$\frac{|f(x)-f(y)|}{|x-y|}\leq L$ for all $x,y\in\mathbb{R}^n$,



and $\mathcal{C}^1$ if and only if $f$ is $\mathcal{C}^1$ and its total derivative is bounded?



Based on intuition alone, I'm strongly inclined to believe that the answer is yes. However I'm having trouble coming up with a proof (probably because my grasp of multivariable calculus is far from great). Could someone give one if the statement is true, or provide a counter example if it is false?



Thanks.



Answer



If $f$ is $\mathscr{C}^1$, then $f(x) - f(y) = \int_0^1 Df(y + t(x-y)).(x-y) dt$, by the fundamental theorem of calculus.



Hence, $$\begin{aligned} \| f(x) - f(y) \| &\le& &\int_0^1 \|Df(y+t(x-y)).(x-y) \| dt& \\ &\le& &\left( \int_0^1 \| Df( y + t(x-y) )\| dt \right) \| x-y \|& \le \sup_{z \in \mathbb{R}^n} \, \|Df(z) \| \; \| x-y \| \end{aligned}$$



If $\sup_{z \in \mathbb{R}^n} \, \| Df(z) \| = C$ is finite, we get $\| f(x) - f(y) \| \le C \|x - y \|$ for all $x,y$.



Conversely, suppose that your function is $\mathscr{C}^1$ and that it is globally Lipschitz, with constant $C$.



Then, for all $x \in \mathbb{R}^n$, and all $h \in \mathbb{R}^n$, we know that $$Df(x).h = \lim_{t \to 0} \frac{f(x+th) - f(x)}{t}$$




But, by assumption, $\| f(x +th) - f(x) \| \le C \|th \| = C |t| \|h\|$, and we finally get $\|Df(x).h \| \le C \|h \|$ for all $h$, which by definition implies $\| Df(x) \| \le C$. Hence the total derivative is bounded all over $\mathbb{R}^n$.



All of this works also on an open set of $\mathbb{R}^n$, instead of the whole space.



Remark also that you don't need to assume $f$ to be $\mathscr{C}^1$, but only differentiable. The second part of my proof works as well, and for the first part, instead of applying fundamental theorem of calculus, you can use the mean value theorem.


abstract algebra - How the following multiplication table is solved (related to $F_2[X]/f(x)$)





$F_2$ is the field of integers modulo $2$, and $f(x)$ is $x^2 + x + 1$.
[image: the multiplication table of $F_2[x]/(f(x))$]



I didn't get how the multiplication happens in the table. I referred to many sources on this topic, but I am still facing difficulty understanding it. I would be very thankful if someone explained the concept behind it.


Answer



This is the multiplication table for the field ${\Bbb F}_4 = {\Bbb F}_2[x]/\langle x^2+x+1\rangle$ consisting of the residue classes of the elements $0,1,x,x+1$ which are the remainders of ${\Bbb F}_2[x]$ modulo $x^2+x+1$.




For instance, $[x] \cdot [x+1] = [x\cdot(x+1)] = [x^2+x]$ and the residue class of $x^2+x$ modulo $x^2+x+1$ is $[1]$, i.e., $x^2+x = 1\cdot (x^2+x+1) + 1$ with quotient $q(x)=1$ and remainder $r(x)=1$. This is an elementary way to view this field extension.
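The whole table can be generated mechanically: multiply the polynomials, then reduce using $x^2 = x+1$ (which holds modulo $x^2+x+1$ over $\Bbb F_2$). A sketch, with a polynomial $ax+b$ encoded as the pair $(a,b)$ (encoding mine):

```python
# Multiplication in F_2[x]/(x^2 + x + 1), with a*x + b encoded as the pair (a, b).
def mul(p, q):
    a, b = p
    c, d = q
    # (a*x + b)(c*x + d) = ac*x^2 + (ad + bc)*x + bd, then reduce x^2 = x + 1 (mod 2)
    x2, x1, x0 = a * c, a * d + b * c, b * d
    return ((x1 + x2) % 2, (x0 + x2) % 2)

names = {(0, 0): "0", (0, 1): "1", (1, 0): "x", (1, 1): "x+1"}
for p in names:
    print([names[mul(p, q)] for q in names])  # one row of the table per element
```

For instance `mul((1, 0), (1, 1))` returns `(0, 1)`, i.e. $[x]\cdot[x+1]=[1]$, matching the worked example above.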


algorithms - Intuitive explanation of Bailey-Borwein-Plouffe $pi$ extraction formula?

First, I should clarify, I'm not a mathematician. I don't study maths at college, so my knowledge of maths is sketchy at best.



I've been looking with interest at the Bailey-Borwein-Plouffe formula for calculating the nth digit of $\pi$, and I've been trying to work out how to code this in Visual Basic, with little success. I've been looking everywhere to try and understand the formula, but no one seems to provide simple or intuitive explanations - it seems rather niche, I guess. I'm also a little confused at the nature of the formula. I understand that it's a spigot algorithm, which apparently either calculates a sequence of decimals or extracts an nth-digit, but the Wikipedia page is confusing me, it seems to be describing both kinds of algorithm and I'm not sure what the direct formula does.




Would anyone be willing to try and explain the formula with a worked example, for example if $n = 4$? And am I right in thinking that using this formula correctly with $n$ as $4$ would return $5$, the fourth decimal digit?



I appreciate any help! The Wikipedia page is here
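Worth noting: the direct BBP sum extracts hexadecimal digits of $\pi$, not decimal ones, so the fourth digit it produces is F (from $\pi = 3.243F6A88\ldots$ in base 16) rather than the decimal digit $5$. As an illustration, here is a minimal Python sketch of the hex-digit extraction (the function name and the $10^{-17}$ tail cutoff are my own choices; plain double precision is only trustworthy for modest positions):

```python
def pi_hex_digit(d):
    # Hexadecimal digit of pi at fractional position d+1 (d = 0 gives the
    # first hex digit after the point, which is 2, since pi = 3.243F6A88...).
    def series(j):
        s = 0.0
        for k in range(d + 1):          # left sum, via modular exponentiation
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = d + 1                       # rapidly vanishing tail
        while True:
            term = 16.0 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                return s
            s = (s + term) % 1.0
            k += 1
    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)

print("".join("%X" % pi_hex_digit(d) for d in range(8)))  # 243F6A88
```

The modular exponentiation `pow(16, d - k, 8*k + j)` is what makes digit *extraction* possible: it computes the fractional contribution of each term at position $d$ without ever forming the earlier digits.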

Calculating integral without L'hopital's rule




Problem: $$\lim_{x \to 0}\frac{1}{x} \int_0^{x}(1+u)^{\frac{1}{u}}du=?$$





My first solution is using L'hopital's rule




Solution using L'Hopital's rule: the given limit equals
$$\lim_{x \to 0}\frac{(1+x)^{\frac{1}{x}}}{1}=e$$




I want another solution that does not use l'Hopital's rule, but I can't find one. Even more annoying, I can't even evaluate $$\int_0^{x}(1+u)^{\frac{1}{u}}du.$$ Can it be done without l'Hopital's rule?



Answer



$(1+u)^{1/u}=e^{\frac 1 u \log(1+u) }\leq e$. Also $\log (1+u)=u+o(u)$, so given $\epsilon >0$ there exists $\delta>0$ such that $\log(1+u) > (1-\epsilon) u$ for $0<u<\delta$. Hence $e^{1-\epsilon} \leq(1+u)^{1/u} \leq e$ for $0<u<\delta$, and therefore $e^{1-\epsilon} \leq \frac1x\int_0^x(1+u)^{1/u}\,du \leq e$ provided $0<x<\delta$. The squeeze theorem completes the proof since $\epsilon$ is arbitrary.
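The squeeze argument can also be checked numerically: a midpoint-rule sketch in Python (the step count is an arbitrary choice) shows the average $\frac1x\int_0^x(1+u)^{1/u}\,du$ approaching $e$:

```python
import math

def f(u):
    # the integrand (1+u)^(1/u), extended by continuity at u = 0
    return math.e if u == 0 else (1 + u) ** (1.0 / u)

def avg(x, steps=20000):
    # midpoint-rule approximation of (1/x) * integral_0^x f(u) du
    h = x / steps
    return sum(f((i + 0.5) * h) for i in range(steps)) * h / x

for x in [0.1, 0.01, 0.001]:
    print(x, avg(x))  # tends to e = 2.71828... as x -> 0+
```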


Saturday, September 24, 2016

integration - show that $int_{-infty}^{+infty} frac{dx}{(x^2+1)^{n+1}}=frac {(2n)!pi}{2^{2n}(n!)^2}$




show that:



$$\int_{-\infty}^{+\infty} \frac{dx}{(x^2+1)^{n+1}}=\frac {(2n)!\pi}{2^{2n}(n!)^2}$$



where $n=0,1,2,3,\ldots$.



Any help would be appreciated. Thanks!


Answer




Write $${\vartheta _n} = \int_{ - \infty }^{ + \infty } {\frac{1}{{{{\left( {1 + {x^2}} \right)}^n}}}} \frac{{dx}}{{1 + {x^2}}}$$



Put $x=\tan\vartheta$. Then $${\vartheta _n} = \int_{ - \frac{\pi }{2}}^{\frac{\pi }{2}} {{{\cos }^{2n}}\vartheta } d\vartheta $$



so $${\vartheta _n} = 2\int_0^{\frac{\pi }{2}} {{{\cos }^{2n}}\vartheta } d\vartheta $$



We can come up with a recursion for $\vartheta_n$ using integration by parts, namely $${\vartheta _n} = \frac{{2n - 1}}{{2n}}{\vartheta _{n - 1}}$$



This means that $$\prod\limits_{k = 1}^n {\frac{{{\vartheta _k}}}{{{\vartheta _{k - 1}}}}} = \prod\limits_{k = 1}^n {\frac{{2k - 1}}{{2k}}} $$




so by telescopy $$\frac{{{\vartheta _n}}}{{{\vartheta _0}}} = \prod\limits_{k = 1}^n {\frac{{2k - 1}}{{2k}}} $$ but ${\vartheta _0} = \pi $ so $$\begin{align}
{\vartheta _n} &= \pi \prod\limits_{k = 1}^n {\frac{{2k - 1}}{{2k}}} \cr
&= \pi \prod\limits_{k = 1}^n {\frac{{2k - 1}}{{2k}}} \frac{{2k}}{{2k}} \cr
&= \pi \frac{{\left( {2n} \right)!}}{{{2^{2n}}n{!^2}}}=\frac{\pi}{4^n}\binom{2n}{n} \end{align} $$



as desired.
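The closed form can be sanity-checked numerically via the same substitution $x=\tan\vartheta$, which turns the integral into $\int_{-\pi/2}^{\pi/2}\cos^{2n}\vartheta\,d\vartheta$ (a midpoint-rule sketch; the step count is arbitrary):

```python
import math

def integral(n, steps=100000):
    # midpoint rule for the integral of cos^(2n)(t) over [-pi/2, pi/2],
    # which equals the integral of dx/(1+x^2)^(n+1) over R after x = tan(t)
    a = -math.pi / 2
    h = math.pi / steps
    return sum(math.cos(a + (i + 0.5) * h) ** (2 * n) for i in range(steps)) * h

for n in range(5):
    closed = math.pi * math.comb(2 * n, n) / 4 ** n
    print(n, round(integral(n), 6), round(closed, 6))
```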


algebra precalculus - Find a limit without using L'Hopitals rule 9

Can someone please show me how to do this without using L'Hopitals rule:



$$\lim_{x \to \infty} \left(1 + \frac{a}{x}\right)^x$$



I know the limit is $e^a$, but I would like to know the steps taken to get to that answer.



thank you!
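One common route, sketched here as a hint (assuming $a > 0$ and taking the standard limit $\lim_{t\to\infty}(1+1/t)^t = e$ as known): substitute $t = x/a$, so that

$$\lim_{x \to \infty} \left(1 + \frac{a}{x}\right)^x=\lim_{t\to\infty}\left[\left(1+\frac{1}{t}\right)^t\right]^a=e^a.$$

For $a<0$ one can write $a=-b$ with $b>0$ and use $\lim_{t\to\infty}(1-1/t)^t=e^{-1}$ in the same way; the case $a=0$ is trivial.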

Friday, September 23, 2016

exponentiation - Negative Number raised to fractional power

How would you evaluate a negative number raised to a fraction $a/b$ when $b$ is odd and $a$ is even, ignoring imaginary numbers?



i.e $(-1)^\frac23$ Calculator returns an error



$(-1)^\frac 13 (-1)^\frac 13 = (-1)\cdot(-1) = 1$ (by the law of indices)



or




$\left((-1)^\frac13 \right)^2 = 1$



or



$\left((-1)^2\right)^\frac13 = 1$



What about for other cases of a and b?

calculus - Deconstructing $0^0$





It is well known that $0^0$ is an indeterminate form. One way to see this is to notice that


$$\lim_{x\to0^+}\;0^x = 0\quad,$$


yet,


$$\lim_{x\to0}\;x^0 = 1\quad.$$


What if we make both terms go to $0$, that is, how much is


$$L = \lim_{x\to0^+}\;x^x\quad?$$


By taking $x\in \langle 1/k\rangle_{k\in\mathbb{N*}}\,$, I concluded that it equals $\lim_{x\to\infty}\;x^{-1/x}$, but that's not helpful.



Answer



This is, unfortunately, not very exciting. Rewrite $x^x$ as $e^{x\log x}$ and take that limit. One l'Hôpital later, you get 1.
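Numerically the convergence is easy to see, since $x\log x\to 0$ as $x\to 0^+$:

```python
# x^x = exp(x * log x), and x * log x -> 0 as x -> 0+, so x^x -> 1.
for x in [0.1, 0.01, 1e-4, 1e-8]:
    print(x, x ** x)
```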


real analysis - How to Prove something is differentiable


I know how differentiability is defined in terms of the limit of $(f(x) - f(x_0)) / (x - x_0)$ as $x$ goes to $x_0$, but I was wondering if there are other useful theorems / lemmas I could use to show a function is differentiable.



Note: I am aware of the technique that if I can express my function in terms of a sum/product/quotient of functions that I know are differentiable, then I can just use the product rule, etc. to find the derivatives on top of showing that the function is differentiable.


But are there other lemmas or theorems that are also helpful? (For example, an equivalent definition of continuity is that preimages of open sets are open)


Answer



There are several theorems that you did not mention:


  • if $f$ and $g$ are differentiable, then $g\circ f$ is differentiable too (and $(g\circ f)'=(g'\circ f)\times f'$);

  • if $f$ is invertible and $f'$ is never $0$, then $f^{-1}$ is differentiable too (and $(f^{-1})'=\frac1{f'\circ f^{-1}}$);

  • if $(f_n)_{n\in\mathbb N}$ is a sequence of differentiable functions wich converges pointwise to a function $f$ and if the sequence $(f_n')_{n\in\mathbb N}$ converges uniformly to a function $g$, then $f$ is differentiable (and $f'=g$);

  • if $f$ is continuous and $F(x)=\int_a^xf(t)\,\mathrm dt$, then $F$ is differentiable (and $F'=f$).

modules - Dimension of the rationals over the integers



What is the dimension of $\mathbb Q$ when it is seen as a module over the integers $\mathbb Z$ (with the usual definitions of addition and multiplication)?



Initially I thought that the dimension ought to be 2, because each rational is uniquely defined by a pair of integers. But then I started looking for a base of size 2, and saw that it doesn't work:




Suppose we have the following base: $\{{p_1 \over q_1},{p_2 \over q_2}\}$. Then for every integers $n_1$, $n_2$:



$$ n_1 {p_1 \over q_1} + n_2 {p_2 \over q_2} = \frac{n_1 p_1 q_2 + n_2 p_2 q_1}{q_1 q_2}$$



It is obvious that a rational number with a denominator of $q_1 q_2 + 1$ cannot be represented as such a linear combination. Therefore the given set is not a base.



By a similar argument, no finite set can be a base.



On the other hand, the following countable set is a base:




$$ \{ {1 \over 1}, {1 \over 2}, {1 \over 3}, ... \} $$



so the dimension of $\mathbb Q$ over $\mathbb Z$ is $\aleph_0$.



Is my conclusion correct?


Answer



No set of more than one rational number is independent, so there is no basis, the rationals are not a free module over the integers.



However, your argument that there is no finite generating set is correct.


probability - Quotient Distribution of Positive Independent Random Variables




Suppose $X$ and $Y$ are independent positive random variables with probability density functions $f_X$ and $f_Y$ respectively. Show that $Z=X/Y$ is absolutely continuous and find its probability density function.




$\textbf{My Thought:}$ In order to show that $Z$ is absolutely continuous, I need to show that it has a cumulative distribution function. Then I can also differentiate the latter to obtain the probability density function of $Z$. I have the following calculation
\begin{align*}
P(X/Y\leq z)&=P(X\ge zY,Y<0)+P(X\le zY,Y>0)\\

&=\int_{-\infty}^{0}\left(\int_{yz}^{\infty}f_{X}(x)dx\right)f_{Y}(y)dy+\int_{0}^{\infty}\left(\int_{-\infty}^{yz}f_{X}(x)dx\right)f_{Y}(y)dy\\
&=\int_{0}^{\infty}\left(\int_{0}^{yz}f_{X}(x)dx\right)f_{Y}(y)dy.
\end{align*}

Also, if $z<0$, then we have that $F_Z(z)=0,$ since $X,Y$ are positive random variables. Therefore the random variable $Z$ is continuous, and we can differentiate it to obtain the probability distribution function of $Z$. We have that $f_Z(z)=0$ if $z\leq 0,$ and otherwise we have
$$f_Z(z)=\int_{0}^{\infty}yf_{X}(yz)f_Y(y)dy.$$






Is my reasoning above correct?




Any feedback is much appreciated.



Thank you for your time.


Answer



It might be that the PDF cannot be obtained as derivative of the CDF, simply because the CDF is not necessarily differentiable (everywhere).



Fortunately that is not fatal here because there is a more direct route.



For a fixed positive $z$ we find:




$$P\left(Z\leq z\right)=P\left(X\leq zY\right)=\int_{0}^{\infty}\int_{0}^{zy}f_{X}\left(x\right)f_{Y}\left(y\right)dxdy=\int_{0}^{\infty}\int_{0}^{z}f_{X}\left(uy\right)yf_{Y}\left(y\right)dudy=$$$$\int_{0}^{z}\int_{0}^{\infty}f_{X}\left(uy\right)yf_{Y}\left(y\right)dydu$$



where the third equality rests on the substitution $x=uy$.



This shows directly that the function $f_{Z}$ prescribed by: $$z\mapsto\int_{0}^{\infty}f_{X}\left(zy\right)yf_{Y}\left(y\right)dy$$ if $z>0$ and $z\mapsto 0$ otherwise serves as a PDF of the positive random variable $Z$.
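The resulting density is easy to spot-check by simulation. For instance (a hypothetical example, with $X, Y \sim \mathrm{Exp}(1)$ independent), the formula gives $f_Z(z)=\int_0^\infty y\,e^{-zy}e^{-y}\,dy=(1+z)^{-2}$, so $P(Z\le 1)=\tfrac12$; a Monte Carlo sketch in Python:

```python
import random

# Hypothetical example: X, Y ~ Exp(1) independent. The formula gives
# f_Z(z) = 1/(1+z)^2, hence P(Z <= 1) = 1/2 (also clear by symmetry).
random.seed(0)
n = 100000
hits = sum(random.expovariate(1) / random.expovariate(1) <= 1.0 for _ in range(n))
print(hits / n)  # close to 0.5
```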


Thursday, September 22, 2016

summation - Proof by induction: showing that two sums are equal




usually the tasks look like



$$\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}$$



or



$$\sum_{i=0}^n i^2 = 0^2 + 1^2 + 2^2+\cdots+n^2$$




But for the following task I have this form:



$$\left(\sum_{k=1}^n k\right)^2 = \sum_{k=1}^nk^3 $$



First I am a little confused by how I approach this. Do I transform them into a complete term like in the first example? Or can I do it by just using the sums themselves? And how should I treat the square of the sum best?



The first step, establishing the base case, is pretty straightforward. But as soon as I get to the induction step I get stuck and am not sure how to proceed.



I am looking to know best practice for this case.




Edit:
This question is a little different, since it is expected to prove this only by induction, using the sum notation.


Answer



Assume that $\displaystyle\left(\sum_{k=1}^n k\right)^2 = \sum_{k=1}^nk^3$ holds for $n.$ We want to show that $\displaystyle\left(\sum_{k=1}^{n+1} k\right)^2 = \sum_{k=1}^{n+1}k^3.$ How to do it? Note that



$$\begin{align}\left(\sum_{k=1}^{n+1} k\right)^2&=\left(\sum_{k=1}^{n} k+n+1\right)^2\\&= \color{blue}{\left(\sum_{k=1}^{n} k\right)^2}+2(n+1)\sum_{k=1}^nk+(n+1)^2\\&\underbrace{=}_{\rm{induction}\:\rm{hypothesis}}\color{blue}{\sum_{k=1}^nk^3}+\color{red}{2(n+1)\sum_{k=1}^nk+(n+1)^2}\\&=\sum_{k=1}^{n+1}k^3\end{align}$$ if and only if $\displaystyle(n+1)^3=2(n+1)\sum_{k=1}^nk+(n+1)^2.$ Show this equality and you are done.
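The identity (and the key equality in the last step) is easy to machine-check for small $n$; a quick Python sketch:

```python
# Check (sum k)^2 == sum k^3, and the induction step's key equality
# (n+1)^3 == 2(n+1) * sum_{k=1}^n k + (n+1)^2, for small n.
for n in range(1, 60):
    assert sum(range(1, n + 1)) ** 2 == sum(k ** 3 for k in range(1, n + 1))
    assert (n + 1) ** 3 == 2 * (n + 1) * sum(range(1, n + 1)) + (n + 1) ** 2
print("ok")
```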


algebra precalculus - Is it possible to shorten the solution for this 2014 RMO question?


I was solving a question from the Regional Math Olympiad (RMO) 2014.



Find all positive real numbers $x,y,z$ such that



$$2x-2y+\frac1z=\frac1{2014},\quad2y-2z+\frac1x=\frac1{2014},\quad2z-2x+\frac1y=\frac1{2014}$$




Here's my solution:


These expressions are cyclic. Therefore all solution sets must be unordered. This implies that $x=y=z$.


Thus, $x=2014$ and the solution is


$$x=2014\quad y=2014\quad z=2014$$



Here's the official solution:


Adding the three equations, we get $$\frac1x+\frac1y+\frac1z=\frac3{2014}$$


We can also write them as $$2xz-2yz+1=\frac z{2014},\quad2xy-2xz+1=\frac x{2014},\quad2yz-2xy+1=\frac y{2014}$$


Adding these, we get $$x+y+z=3\times2014$$



Therefore, $$\left(\frac1x+\frac1y+\frac1z\right)(x+y+z)=9$$


Using $\text{AM-GM}$ inequality, we therefore obtain $$9=\left(\frac1x+\frac1y+\frac1z\right)(x+y+z)\ge9\times(xyz)^{\frac13}\left({1\over xyz}\right)^{\frac13}=9$$


Hence equality holds and we conclude that $x=y=z$.


Thus we conclude $$x=2014\quad y=2014\quad z=2014$$



What I wonder is if there is something wrong with my approach. If yes, what is it? If no, then why is the official solution so long winded?


Answer



Consider the system of equations


$xy + z = 1, \quad yz + x = 1, \quad zx + y = 1$


These equations are related by cyclic permutations of $(x,y,z)$, but they are satisfied by $(1,1,0)$ (and its cyclic permutations), where $x$, $y$ and $z$ are not all equal.


There are also solutions where $x=y=z=\frac{\pm \sqrt{5}-1}{2}$, but these are not the only solutions.



real analysis - convergence in measure implies the composition of the sequence of functions and a continuous function also converges in measure


Let $D$ be a measureble set in $\mathbb{R}^n$. Suppose $\mu(D)<\infty$. Let $\phi: D\times \mathbb{R}\to \mathbb{R}$ be a continuous function such that for almost every $x\in D$, $\phi_x(t)=\phi(x,t)$ is a continuous function of $t$, and for almost every $t\in\mathbb{R}$, $\phi_t(x)=\phi(x,t)$ is a measurable function of $x$. Let $\{f_n\}$ be a sequence of measurable functions on $D$ such that $\{f_n\}$ converges to $f$ in measure. Prove that $g_n(x)=\phi(x,f_n(x))$ converges to $g(x)=\phi(x,f(x))$ in measure.


I am quite confused and do not know how to solve this.


Answer



The result will hold if we manage to show that if $n_k\uparrow\infty$, then we can extract from $(g_{n_k})_{k\geqslant 1}$ a subsequence $(g_{m_k})_{k\geqslant 1}$ such that $g_{m_k}\to g$ for almost every $x$.


To do that, we extract from $(f_{n_k})$ a subsequence $(f_{m_k})$ which converges almost everywhere to $f$; then, for almost every $x$, continuity of $t\mapsto\phi(x,t)$ yields $g_{m_k}(x)=\phi(x,f_{m_k}(x))\to\phi(x,f(x))=g(x)$.


analysis - Show that if $f(1)=1$, then there exists a constant $alpha$ such that $f(x)=x^alpha$ for all $x in (0, +infty)$.



Let $f: (0, +\infty) \to\mathbb R$ be a differentiable function such that $f(xy)=f(x)f(y)$ for all $x,y \in (0, +\infty)$.




Show that if $f(1)=1$, then there exists a constant $\alpha$ such that $f(x)=x^\alpha$ for all $x \in (0, +\infty)$.



So far, for a positive integer $\alpha$:
$$f(x^\alpha)=f(\underbrace{x\cdot x\cdots x}_{\alpha\text{ times}})=\underbrace{f(x)\cdot f(x)\cdots f(x)}_{\alpha\text{ times}}=f(x)^\alpha$$
Hence, $f(x^\alpha)=f(x)^\alpha$.



From here I am not quite sure, but I believe I need to differentiate both sides, then move all terms to one side equaling zero and solve?


Answer



First note that $f(y) > 0$(Why?). Now let $g(x) = \ln(f(a^x))$, where $a>0$. We then have
$$g(x+y) = \ln(f(a^{x+y})) = \ln(f(a^xa^y)) = \ln(f(a^x) f(a^y)) = g(x) + g(y)$$

This is the Cauchy function equation and if $g(x)$ is continuous, the solution is
$$g(x) = cx$$
Hence, we have
$$f(a^x) = e^{cx} \implies f(x) = x^t,\qquad t=\frac{c}{\ln a}$$
Note that in the above proof, we only relied on the continuity of $f$.



Below are some similar problems based on Cauchy function equation:



Is there a name for function with the exponential property $f(x+y)=f(x) \cdot f(y)$?




Classifying Functions of the form $f(x+y)=f(x)f(y)$



If $f\colon \mathbb{R} \to \mathbb{R}$ is such that $f (x + y) = f (x) f (y)$ and continuous at $0$, then continuous everywhere



continuous functions on $\mathbb R$ such that $g(x+y)=g(x)g(y)$



What can we say about functions satisfying $f(a + b) = f(a)f(b) $ for all $a,b\in \mathbb{R}$?


probability - Expectetion of $Y^{alpha}$ with $alpha >0$


Let $Y$ be a positive random variable. For $\alpha>0$ show that


$E(Y^{\alpha})=\alpha \int_{0}^{\infty}t^{\alpha -1}P(Y>t)dt$.


My ideas:


$E(Y^{\alpha})= \int_{-\infty}^{\infty}t^{\alpha}f_{Y}(t)dt$


=$\int_{0}^{\infty}t^{\alpha}f_{Y}(t)dt$


=$\int_{0}^{\infty}(\int_{0}^{t^{\alpha}}dy)f_{Y}(t)dt$


Answer



$E(Y^\alpha)=\int_0^\infty t^\alpha f_Y(t)dt$. Let $G_Y(t)=P(Y\gt t)=1-F_Y(t)$. Therefore $G_Y'(t)=-f_Y(t)$. Integrating $E(Y^\alpha)$ by parts gives $E(Y^\alpha)=\Big[-t^\alpha G_Y(t)\Big]_0^\infty +\alpha \int_0^\infty t^{\alpha-1}G_Y(t)dt=\alpha \int_0^\infty t^{\alpha-1}P(Y\gt t)dt$, where the boundary term vanishes provided $E(Y^\alpha)<\infty$.
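As a spot check (a hypothetical example with $Y\sim\mathrm{Exp}(1)$, so $P(Y>t)=e^{-t}$ and $E(Y^\alpha)=\Gamma(\alpha+1)$), the right-hand side can be evaluated numerically; the truncation point and step count below are arbitrary choices:

```python
import math

# Hypothetical example: Y ~ Exp(1), so P(Y > t) = e^{-t} and E(Y^a) = Gamma(a+1).
def rhs(a, upper=40.0, steps=100000):
    # midpoint rule for a * integral_0^upper t^(a-1) e^(-t) dt
    # (the tail beyond `upper` is negligible for these values of a)
    h = upper / steps
    return a * sum(((i + 0.5) * h) ** (a - 1) * math.exp(-(i + 0.5) * h)
                   for i in range(steps)) * h

for a in [1.0, 2.0, 3.5]:
    print(a, round(rhs(a), 6), round(math.gamma(a + 1), 6))
```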


divisibility - Divisible by 19 Induction Proof

Prove by induction that for all natural numbers $n$, $\frac{5}{4}8^n + 3^{3n-1}$ is divisible by $19$.


I'm running into trouble at the inductive step. I am currently attempting to add/subtract the inductive hypothesis, but I end up with two coefficients that are seemingly unrelated to $19$. I've been stuck on this for days; thanks for the help!
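For what it's worth, the claim itself checks out numerically (exact integer arithmetic; note $5\cdot 8^n$ is divisible by $4$ for $n\ge1$, so the expression is an integer):

```python
# Exact-arithmetic check of the claim for the first few n.
for n in range(1, 25):
    value = 5 * 8 ** n // 4 + 3 ** (3 * n - 1)
    assert value % 19 == 0, n
print("ok")
```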

algebra precalculus - In $Delta ABC$ if $a,b,c$ are in Harmonic Progression

In $\Delta ABC$ if $a,b,c$ are in Harmonic Progression Then Prove that


$$\sin ^2\left(\frac{A}{2}\right),\sin ^2\left(\frac{B}{2}\right),\sin ^2\left(\frac{C}{2}\right)$$ are in Harmonic Progression


My Try:


we have


$$\frac{1}{b}-\frac{1}{a}=\frac{1}{c}-\frac{1}{b}$$ Then


$$\frac{a-b}{a}=\frac{b-c}{c}$$ and by Sine Rule


$$ \frac{\sin A-\sin B}{\sin A}=\frac {\sin B-\sin C}{\sin C}$$ $\implies$


$$\frac{2\sin \left(\frac{C}{2}\right)\cos \left(\frac{A-B}{2}\right)}{2\sin \left(\frac{A}{2}\right)\cos \left(\frac{A}{2}\right)}=\frac{2\sin \left(\frac{A}{2}\right)\cos \left(\frac{B-C}{2}\right)}{2\sin \left(\frac{C}{2}\right)\cos \left(\frac{C}{2}\right)}$$ $\implies$



$$\sin ^2\left(\frac{C}{2}\right)\left(2\cos \left(\frac{C}{2}\right)\cos \left(\frac{A-B}{2}\right)\right)=\sin ^2\left(\frac{A}{2}\right)\left(2\cos \left(\frac{A}{2}\right)\cos \left(\frac{B-C}{2}\right)\right)$$ $\implies$


$$\sin ^2\left(\frac{C}{2}\right) \left(\sin B+\sin A\right)=\sin ^2\left(\frac{A}{2}\right) \left(\sin B+\sin C\right)$$


Any way to proceed?

Wednesday, September 21, 2016

real analysis - continuous functions on $mathbb R$ such that $g(x+y)=g(x)g(y)$





Let $g$ be a function on $\mathbb R$ to $\mathbb R$ which is not identically zero and which satisfies the equation $g(x+y)=g(x)g(y)$ for $x$,$y$ in $\mathbb R$.



$g(0)=1$. If $a=g(1)$, then $a>0$ and $g(r)=a^r$ for all $r$ in $\mathbb Q$.



Show that the function is strictly increasing if $g(1)$ is greater than $1$, constant if $g(1)$ is equal to $1$ or strictly decreasing if $g(1)$ is between zero and one, when $g$ is continuous.


Answer




For $x,y\in\mathbb{R}$ and $m,n\in\mathbb{Z}$,
$$
\eqalign{
g(x+y)=g(x)\,g(y)
&\implies
g(x-y)={g(x) \over g(y)}
\\&\implies
g(nx)=g(x)^n
\\&\implies
g\left(\frac{m}nx\right)=g(x)^{m/n}

}
$$
so that $g(0)=g(0)^2$ must be one (since if it were zero, then $g$ would be identically zero on $\mathbb{R}$), and with $a=g(1)$, it follows that $g(r)=a^r$ for all $r\in\mathbb{Q}$. All we need to do now is invoke the continuity of $g$ and the denseness of $\mathbb{Q}$ in $\mathbb{R}$ to finish.



For example, given any $x\in\mathbb{R}\setminus\mathbb{Q}$, there exists a sequence $\{x_n\}$ in $\mathbb{Q}$ with $x_n\to x$ (you could e.g. take $x_n=10^{-n}\lfloor 10^nx\rfloor$ to be the approximation of $x$ to $n$ decimal places -- this is where we're using that $\mathbb{Q}$ is dense in $\mathbb{R}$). Since $g$ is continuous, $y_n=g(x_n)\to y=g(x)$. But $y_n=a^{x_n}\to a^x$ since $a\mapsto a^x$ is also continuous.



Moral: a continuous function is completely determined by its values on any dense subset of the domain.


calculus - $dx=frac {dx}{dt}dt $. Why is this equality true and what does it mean?

$dx=\frac {dx}{dt}dt $. I know that this identity looks obvious from the chain rule if we treat $dx$ and $dt$ as just numbers, but I find that quite unsatisfactory. Is there a better / more "calculus-inclined" way of thinking about this equality? Can you please explain both the LHS and the RHS individually?

radicals - Prove that if $n$ is a positive integer then $sqrt{n}+ sqrt{2}$ is irrational



Prove that if $n$ is a positive integer then $\sqrt{n}+ \sqrt{2}$ is irrational.



The sum of a rational and irrational number is always irrational, that much I know - thus, if $n$ is a perfect square, we are finished.
However, is it not possible that the sum of two irrational numbers be rational? If not, how would I prove this?



This is a homework question in my proofs course.


Answer



Suppose $\sqrt n + \sqrt 2 = \frac pq$ were rational, and multiply both sides by $\sqrt n - \sqrt 2$. Then $n - 2 = \frac{p}{q} ( \sqrt n - \sqrt 2 )$, so (for $n \neq 2$) $\sqrt n - \sqrt 2$ is also rational. So we have two rational numbers whose difference (which must be rational) is $2 \sqrt 2$, meaning that $\sqrt 2$ is rational, a contradiction. (If $n = 2$, the sum is $2\sqrt 2$, which is irrational directly.)


real analysis - bijective measurable map existence


Does there exist bijective measurable maps between $\mathbb{R}$ and $\mathbb{R}^n$?


If so, could you give me an example of that?


Thank you.


Answer



A Polish space is a topological space that is homeomorphic to a complete separable metric space, for example $\Bbb R^n$ for any $n\in \Bbb N$. For a proof of the following fact, see e.g. here.



Any uncountable Polish space is Borel isomorphic (there exists a bimeasurable bijection) to the space of real numbers $\Bbb R$ with standard topology.




Tuesday, September 20, 2016

linear algebra - Prove if matrix has right inverse then also has left inverse.



I tried to prove that if $A$ and $B$ are both $n\times n$ matrices and $AB = I_n$, then $BA = I_n$ (i.e. the matrix $A$ is invertible). First I managed to conclude that if there exist both $B$ and $C$ such that $AB = I_n$ and $CA = I_n$, then trivially $B=C$. However, to conclude the proof we need to show that if such a right inverse exists, then a left inverse must exist too.



No idea how to proceed. All I can use are the definitions of matrices, matrix multiplication, sum, transpose and rank.




(I saw proofs of this in other questions, but they used things like determinants or vector spaces; I need a proof without those.)


Answer



A matrix $A\in M_n(\mathbb{F})$ has a right inverse $B$ (which means $AB=I$) if and only if it has rank $n$. I assume you know that. So now you need to prove that $BA=I$.

Well, let's multiply the equation $AB=I$ by $A$ from the right side. We get $A(BA)=A$ and hence $A(BA-I)=0$. Now we can split the matrix $BA-I$ into columns. Let's call its columns $v_1,v_2,...,v_n$, so this way we get $Av_1=0,Av_2=0,...,Av_n=0$. But because the rank of $A$ is $n$, we know that the system $Ax=0$ has only the trivial solution. Hence $v_1=v_2=...=v_n=0$, so $BA-I$ is the zero matrix and hence $BA=I$.


calculus - Logarithm defined using the definite integral without Fundamental Theorem

If one wants to define the natural logarithm using (Riemann) integrals, he could do as follows:



$$ \log(x) := \int_{1}^{x} \frac{1}{t} dt $$




Let's assume we hadn't defined the $\exp$-function yet. How can one prove the following basic logarithmic identities WITHOUT using the Fundamental Theorem of Calculus?




  1. $\log (xy) = \log (x) + \log (y)$

  2. $\log '(1) = 1$



Edit:
This exercise came up as homework for a Calculus I class at my university. They explicitly stated not to use the exp-function or the FTC. Since I didn't know how to solve it with these restrictions, I thought I'd post the problem here. (Note: this homework was due a few weeks ago, so posting the solution here shouldn't be a problem.)
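For identity (1), one common route (a sketch, assuming the substitution rule for Riemann integrals but not the FTC) is to split at $x$ and substitute $t = xs$ in the second piece:

$$\log(xy)=\int_1^{xy}\frac{dt}{t}=\int_1^{x}\frac{dt}{t}+\int_x^{xy}\frac{dt}{t},\qquad \int_x^{xy}\frac{dt}{t}=\int_1^{y}\frac{x\,ds}{xs}=\log(y).$$

For (2), bound the integrand on the interval between $1$ and $1+h$: for $h>0$ we have $\frac1{1+h}\le\frac1t\le1$ there, so $\frac{h}{1+h}\le\log(1+h)\le h$, hence $\frac{\log(1+h)}{h}\to1$ as $h\to0^+$ (the case $h<0$ is analogous), i.e. $\log'(1)=1$.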

trigonometry - Sine rule and equal angles

Is it true that if a triangle on a unit sphere has two sides of equal length, then their opposite angles must be equal? I think it is true, and that we can use the spherical sine law. Call the sides with equal lengths $a,b$ and their opposite angles $\alpha,\beta$. Then since $a=b$, $\sin\alpha=\sin\beta$. How do I then say for certain that $\alpha=\beta$? I know that the angles must be $\in (0,\pi)$ (right?). But how can I exclude the possibility of one angle being $\pi$ minus the other angle?

linear algebra - Eigenvalues of a multinomial covariance matrix



The following matrix shows up in studying multinomial distributions as the covariance matrix. Let $p$ be a column vector of dimension $k$ with $p_i\geq 0, \sum_{i=1}^k p_i = 1.$ Let



$$A:=\mathrm{Diag}(p) - pp^T,$$



where $\mathrm{Diag}(p)$ is a diagonal matrix with $p$ on the diagonal.




$A$ is a positive semidefinite matrix. One of its eigenvalues is zero (corresponding to the all-ones eigenvector). What are its other eigenvalues as a function of $p$? Is there a closed-form expression for those eigenvalues?


Answer



In general no closed form. See this paper:



https://projecteuclid.org/download/pdfview_1/euclid.bjps/1405603508


Monday, September 19, 2016

trigonometry - Sum of powers of primitive root of unity- Trig Proof



I'm trying to prove that if $z=\operatorname{cis}(2\pi/n) = \cos(2\pi/n) + i\sin(2\pi/n)$, that is, $z$ is a primitive $n$-th root of unity, for any integer $n\geq 2$, $1+z+z^2+\cdots+z^{n-1}=0$. I've already come across a nice and concise proof here, and that same link also has a comment pointing out that it's just a geometric sum which can be expressed as $\dfrac{1-\operatorname{cis}^n(2\pi/n)}{1-\operatorname{cis}(2\pi/n)}$ which is just $0$ in the numerator. However, I was wondering if I could do it just using trig functions. It's an inefficient way of proving it, but I was fixated on this approach for so long I was wondering if someone knew how to do it.



Proving that the imaginary part is $0$ is easy: you use the identity $\sin(a)+\sin(b)=2\sin(\frac{a+b}{2})\cos(\frac{a-b}{2})$, and for each integer $j$ with $0<j<n/2$ you pair $\sin(2\pi j/n)$ with $\sin(2\pi(n-j)/n)$; each pair equals $2\sin(\pi)\cos(\pi(2j-n)/n)=0$ (and the middle term $\sin(\pi)=0$ when $n$ is even).


This same approach doesn't work for the real part: using the identity $\cos(a)+ \cos(b) =2\cos(\frac{a+b}{2})\cos(\frac{a-b}{2})$, and adding the same pairs gets $2\cos(2\pi)\cos(2\pi(n-2j)/n)=2\cos(2\pi(n-2j)/n)$ so this gets $1+2\sum_{j=1}^{\lfloor n/2 \rfloor}\cos(2\pi(n-2j)/n)$ with $\cos(\pi)=-1$ added if $n$ is even. Then I need to show that that sum is $0$ if $n$ is even and $-1/2$ if $n$ is odd. Is there a clean way of doing this? The only thing I can think to do is repeat the sum of $\cos$ identity, and that doesn't seem too helpful.


Answer



Use the identity $$\displaystyle\sum\limits_{m=0}^{n-1} \cos(mx+y)=\frac{\cos\left(\dfrac{n-1}{2}x+y\right)\sin\left(\dfrac{n}{2}\, x\right)}{\sin\left(\dfrac{x}{2}\right)}$$



and evaluate where $x=2\pi/n$ and $y=0$ to deduce that the real part is zero.
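Either way, the vanishing of the full sum is easy to confirm numerically (a quick Python sketch):

```python
import cmath

# For several n, the n-th roots of unity 1, z, ..., z^{n-1} sum to 0.
for n in range(2, 10):
    z = cmath.exp(2j * cmath.pi / n)
    assert abs(sum(z ** k for k in range(n))) < 1e-9
print("ok")
```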


Sunday, September 18, 2016

real analysis - Let $f:mathbb{I} to mathbb{R}$ continuous function such that $f(0)=f(1)$.

$\mathbb{I} = [0,1]$



Let $f:\mathbb{I} \to \mathbb{R}$ be a continuous function such that $f(0)=f(1)$. Prove that for all $n \in \mathbb{N}$ there exists $x \in \mathbb{I}$ such that $x + \frac{1}{n} \in \mathbb{I}$ and $f( x + \frac{1}{n})=f(x)$.



Could you help me by giving me an idea of how to do it?

trigonometry - Finding a closed form for $cos{x}+cos{3x}+cos{5x}+cdots+cos{(2n-1)x}$





We have to find




$$g(x)=\cos{x}+\cos{3x}+\cos{5x}+\cdots+\cos{(2n-1)x}$$





I could not get any good idea.



Initially I thought of using



$$\cos a+\cos b=2\cos\frac{a+b}{2}\cos\frac{a-b}{2}$$


Answer



Let $z=\cos\theta+i\sin\theta$ i.e. $z=e^{i\theta}$



Your sum:$$e^{i\theta}+e^{3i\theta}+e^{5i\theta}+...e^{(2n-1)i\theta}$$




This is a GP with common ratio $e^{2i\theta}$



Therefore the sum is $$\frac{a(r^n-1)}{r-1}$$
$$\frac{e^{i\theta}(e^{2ni\theta}-1)}{e^{2i\theta}-1}$$
$$\frac{(\cos \theta+i\sin\theta)(\cos(2n\theta)+i\sin(2n\theta)-1)}{\cos(2\theta)+i\sin(2\theta)-1}$$



Computing its real part should give you the answer.



Acknowledgement: due credit to @LordShark's idea.
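Taking the real part gives the closed form $\sum_{k=1}^n\cos((2k-1)x)=\frac{\sin(2nx)}{2\sin x}$ (valid when $\sin x\neq0$), which is easy to confirm numerically (a quick Python sketch):

```python
import math

# Check: sum_{k=1}^n cos((2k-1)x) == sin(2*n*x) / (2*sin(x)) for sin(x) != 0.
for n in [1, 3, 7]:
    for x in [0.3, 1.1, 2.5]:
        lhs = sum(math.cos((2 * k - 1) * x) for k in range(1, n + 1))
        assert abs(lhs - math.sin(2 * n * x) / (2 * math.sin(x))) < 1e-12
print("ok")
```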



Saturday, September 17, 2016

limits - Evaluating $ limlimits_{xto 0} left(frac{x^4 + 2 x^3 + x^2}{{tan}^{-1} x}right)$


In a question from a class test, we are given this function: $$f(x) = \begin{cases} \frac{x^4 + 2 x^3 + x^2}{{\tan}^{-1} x}, & \text{if $ x \neq 0$} \\[2ex] 0, & \text{if $x = 0$} \end{cases}$$ We are asked to find whether $f(x)$ is continuous at $x=0$ .


Now, we can get the solution by Taylor expansion or L'Hopital's rule quite easily. But L'Hopital's rule and Taylor expansions aren't part of my course syllabus this year, so I don't think they are meant to be applied here.


But I can't figure out how to evaluate this: $$\lim_{x \to 0} \left(\frac{x^4 + 2 x^3 + x^2}{{\tan}^{-1} x}\right)$$ without these methods.


I think the first step should be factorizing the numerator to get $$f(x) = \frac {x^2(x+1)^2}{{\tan}^{-1}x}$$


Now I don't know how to proceed further. Is there some identity that can be used here?



Answer



With the derivative :


$\displaystyle \lim_{x\to 0}\frac{\arctan(x)- \arctan(0)}{x-0}=f'(0)=\dfrac{1}{1+(0)^2}=1\iff \displaystyle \lim_{x\to 0}\frac{\arctan(x)}{x}=1$


Thus :


$\displaystyle \lim_{x \to 0} \dfrac{x^4 + 2 x^3 + x^2}{\arctan x}=\lim_{x \to 0} \dfrac{x^3 + 2 x^2 + x}{\frac{\arctan(x)}{x}}=0$


combinatorics - Find the number of all four-digit positive integers that are divisible by four and are formed by the digits 0,1,2,3,4,5



Find the number of all four-digit positive integers that are divisible by four and are formed by the digits 0,1,2,3,4,5.







The number of combinations of all digit strings would be $6^4$, but we have a few roadblocks to account for. First off, 0 must be taken into account: if 0 were the first digit, we would only have a three-digit number. Therefore:



$6^4-6^3=1080$



So we know that the number of possibilities that are divisible by 4 is less than 1080.







This is where I get stuck. We must account for the numbers that are divisible by 4. For a four digit number we have four place holders _ _ _ _. The first two placeholders do not matter. So for those locations we can denote $6^2$.



However I must account for the first placeholder. 0 cannot be a placeholder, so I'm not sure how to denote its possibility from here. I have a two element variation with repetition from {0,1,...5}. But I must account for the zero. If I simply had two variations that did not account for zero it would be $6^2$. So is it possible for me to use the same approach I used earlier?



$6^2-6^1$






The last two placeholders determine divisibility. In order for the four-digit number to be divisible by 4, the number formed by its last two digits must also be divisible by 4.




The multiples of four up to $56$ are



$4,8,12,16,20,24,28,32,36,40,44,48,52,56$



and from those selections we have
$04,12,20,24,32,40,44,52$ which gives us 8 possibilities.



I'm a little confused when to use the multiplication rule so I'm not sure if this is acceptable.




If my work is right would $(6^2-6)*8$ be the correct answer?



$(6^2-6)*8 = 240 < 1080$


Answer



It seems correct to me, though you may have made it more complicated than it needs to be.



The multiplication rule is quite appropriate here. We have four slots to fill; the first can be filled in five ways (since it can't be zero), the second can be filled in six ways, and the last two together can be filled in nine ways (as Arturo Magidin pointed out, 00 works as well). This gives us $5*6*9 = 270$ possibilities.
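Brute force agrees (a quick Python sketch):

```python
# Brute-force count of four-digit strings over the digits 0-5
# (nonzero leading digit) whose value is divisible by 4.
digits = "012345"
count = sum(1
            for a in "12345" for b in digits for c in digits for d in digits
            if int(a + b + c + d) % 4 == 0)
print(count)  # 270
```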


Friday, September 16, 2016

complex numbers - Simplify $sqrt{-3}$

I was reading about this known fallacy
$$
-1 = i^2 = i \cdot i = \sqrt{-1} \cdot \sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{1} = 1
$$
and according to Wikipedia "The fallacy is that the rule $\sqrt{xy} = \sqrt{x}\sqrt{y} $ is generally valid only if both x and y are positive"



So my question is, how come we can say that $\sqrt{-3} = \sqrt{3}i$ ?. Aren't we applying the same mistake as the fallacy? Like $\sqrt{-3} = \sqrt{(-1)(3)} = \sqrt{-1}\sqrt{3} = \sqrt{3}i$ cannot be since -1 is negative.




Thanks for reading.

calculus - U substitution of indefinite integrals like $int frac{5+3x}{1+x^2} dx$.

I've spent the better part of today trying to understand conceptually how to solve indefinite integrals using the "u-substitution" method.




I am able to solve relatively easy indefinite integrals using u-substitution, but when it comes to more complicated ones I struggle and never end up with the correct answer. This means I do not fully understand what is going on and am simply memorizing the basic procedure for solving basic indefinite integrals.



For example:



$\int \frac{5+3x}{1+x^2} dx$



I did the following:



$\int\left(\frac{5}{1+x^2} + \frac{3x}{1+x^2}\right) dx$




$u = 1+x^2$



$du = 2x dx$



$\frac{1}{2x} du = dx$



And from there I am stuck and to be honest I don't even know if my approach is correct.



I know I am asking a lot, but is there anyone that can solve this and explain why they did what they did?
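For reference, splitting the integrand as above does work: the two pieces integrate to $5\arctan x$ (directly) and $\tfrac32\ln(1+x^2)$ (via $u = 1+x^2$, $du = 2x\,dx$, so $\int\frac{3x}{1+x^2}dx = \tfrac32\int\frac{du}{u}$). A quick numerical check of that antiderivative (a Python sketch; the step $h$ is an arbitrary choice):

```python
import math

# Candidate antiderivative: F(x) = 5*arctan(x) + (3/2)*ln(1+x^2) + C.
def F(x):
    return 5 * math.atan(x) + 1.5 * math.log(1 + x * x)

def integrand(x):
    return (5 + 3 * x) / (1 + x * x)

# central-difference check that F' matches the integrand
h = 1e-6
for x in [-2.0, 0.5, 3.0]:
    assert abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x)) < 1e-6
print("ok")
```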

elementary set theory - Finding explicit bijections between sets of the same cardinality


This is kind of a general question about establishing that two sets have the same cardinality. The definition of two sets $S$ and $T$ having the same cardinality is that there is a function $f: S \to T$ that is one-to-one and onto all of $T$.


My question is: are there any two sets for which we (and by that I mean "the mathematical community" or whatever) know that the two sets have the same cardinality by other means, but cannot find a bijection between them? I feel like the continuum hypothesis is in some vague sense related to this, but anyways I'm curious and would like to know.


Answer



This is independent from ZFC.


There is a first-order formula $\varphi$ such that $$\mathrm{ZFC} + V=L \vdash(\forall S)(\forall T)\Big(\big(\operatorname{card}(S)=\operatorname{card}(T)\big)\rightarrow\big(\{x \mid \varphi(x, S, T)\}\text{ is a bijection from }S\text{ to }T\big)\Big).$$


So it's consistent with ZFC that we can explicitly specify a bijection between any two sets of the same cardinality (assuming that ZFC is consistent, of course). [By the way, the formula $\varphi$ can be spelled out — it's not mysterious, just long if written out in full.]


On the other hand, if, for example, you use finite forcing to make $\aleph_1^{\,L}$ countable, then, in the generic extension, there is a bijection between $\omega$ and $\aleph_1^{\,L},$ but there is no bijection between them that is definable without parameters.



So (again assuming the consistency of ZFC), there is a model of ZFC in which $\aleph_1^{\,L}$ is countable but in which there is no definable bijection between $\omega$ and $\aleph_1^{\,L}.$ $$ $$



Some brief remarks on how to prove the statements above:


The existence of explicit bijections in $L$ is due to the fact that there is, if $V=L,$ a definable well-ordering of the universe.


The non-existence of a definable bijection between the two specified countable sets in the generic extension above can be proven using the homogeneity of the partial ordering used in the forcing argument.


summation - Showing that $1-1/2+ \cdots +1/(2n-1)-1/(2n)$ is equal to $1/(n+1)+1/(n+2)+ \cdots +1/(2n)$



$1-1/2+1/3-1/4+ \cdots +1/(2n-1)-1/(2n)=1/(n+1)+1/(n+2)+ \cdots +1/(2n)$



I was asked to prove the validity of the above equation by mathematical induction. It isn't hard to prove that it holds for any natural number. But how did mathematicians (or anyone) discover that the left side of the equation equals the right side? It doesn't seem obvious. I've tried to manipulate both sides in various ways, such as multiplying by the denominators, but I can't observe any pattern. Was this equation discovered by chance? Thanks in advance.


Answer



I don't know how someone first discovered this identity, but here's a clearer way of seeing it: \begin{align*} 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots - \frac{1}{2n} &= 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots + \frac{1}{2n}- 2\left(\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \cdots + \frac{1}{2n} \right)\\ &= 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots + \frac{1}{2n}-\left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \right) \end{align*}



This cancels out the first $n$ terms of the sequence, leaving the $(n+1)^\text{st}$ to $2n^\text{th}$ terms, which is the righthand side.
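The cancellation above is easy to confirm in exact arithmetic; a small sanity check of the identity using Python's `fractions`:

```python
from fractions import Fraction

def lhs(n):
    # 1 - 1/2 + 1/3 - ... + 1/(2n-1) - 1/(2n)
    return sum(Fraction((-1)**(k + 1), k) for k in range(1, 2*n + 1))

def rhs(n):
    # 1/(n+1) + 1/(n+2) + ... + 1/(2n)
    return sum(Fraction(1, k) for k in range(n + 1, 2*n + 1))

# exact equality (no floating point) for the first several n
all_equal = all(lhs(n) == rhs(n) for n in range(1, 10))
```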


Thursday, September 15, 2016

complex numbers - I know there are three real roots for cubic however cubic formula is giving me non-real answer. What am I doing wrong?



I want to solve the equation $x^3-x=0$ using the general cubic formula. For there to be real roots for the cubic (I know the roots are $x=-1$, $x=0$, $x=1$), I assume there must be a positive number inside the inner square root. (Or is that wrong?)



However, when I substitute in $a=1$, $b=0$, $c=-1$, $d=0$, the square root term inside the cube root terms becomes




$$\sqrt{\;\left(\;2(0)^3 - 9(1)(0)(-1) + 27(1)^2(0)\;\right)^2 - 4 \left(\;(0)^2 - 3(1)(-1)\;\right)^3\quad}$$



It gives me $\sqrt{-108}$, which is $10.39i$. Now that I have a non-real number as part of the equation I can't see any way for it to be cancelled or got rid of, even though I know there is a real answer.



Could somebody please tell me how I can get a real answer and what I am doing wrong? Thanks.


Answer



I didn't check your calculations, but it seems you have run into exactly the issue that originally made people turn their attention to complex numbers. Complex numbers were not "invented" in order to solve quadratic equations, as some people usually tell us, but to solve cubic equations. People discovered a formula for the roots of a cubic polynomial, but then noticed that in some situations (actually, a lot of them) the calculation necessarily passes through complex numbers. In fact, this happens even when all the roots are real.



If you do the calculations correctly, the complex numbers will cancel themselves in the end.
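To see the cancellation concretely, here is a sketch that pushes your numbers ($a=1$, $b=0$, $c=-1$, $d=0$) through the standard cubic formula using complex arithmetic. The principal cube root happens to yield the root $x=-1$; the other two roots come from the other cube roots of the same quantity:

```python
import cmath

a, b, c, d = 1, 0, -1, 0                       # coefficients of x^3 - x = 0

delta0 = b**2 - 3*a*c                          # = 3
delta1 = 2*b**3 - 9*a*b*c + 27*a**2*d          # = 0
inner = cmath.sqrt(delta1**2 - 4*delta0**3)    # = sqrt(-108), purely imaginary

C = ((delta1 + inner) / 2) ** (1/3)            # principal complex cube root
x = -(b + C + delta0 / C) / (3*a)
# the imaginary parts of C and delta0/C cancel, leaving x = -1 (numerically)
```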



calculus - Why is $\sin^{-1}(\sin(\frac{5\pi}{8}))\ne \frac{5\pi}{8}$?


I am defining a function and making sure that it works. I thought $\sin^{-1}(\sin(x))=x$ but if I put it into a calculator I get $\sin^{-1}(\sin(\frac{5\pi}{8}))\approx1.178$; which is not $\frac{5\pi}{8}\approx1.96$.


What is the reason for this?


Answer



If you translate the statement "what is $\sin^{-1}(\sin(\frac{5\pi}{8}))$?" into English, it would say the following:


What angle between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$ has the same sine value as the angle $\frac{5\pi}{8}$?



If you draw the unit circle and the angle $\frac{5\pi}{8}$ and then draw a horizontal line through the point where the angle intersects the unit circle, I believe you will see that the horizontal line also intersects the unit circle in quadrant I at the point of intersection with the angle $\frac{3\pi}{8}$. So the answer is $\frac{3\pi}{8}$.


unit circle diagram
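A one-line check of this in Python, relying on the fact that `math.asin` returns the principal value in $[-\pi/2, \pi/2]$:

```python
import math

x = 5 * math.pi / 8
y = math.asin(math.sin(x))     # principal value in [-pi/2, pi/2]
expected = 3 * math.pi / 8
# y is 3*pi/8 (about 1.178), not 5*pi/8 (about 1.963)
```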


calculus - continuity of $f'$

If $f$ is differentiable on $(a,b)$, can we say that $f'$ is continuous on $(a,b)$?



I tried some functions and it seems that we can say so but I'm not sure.
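For what it's worth, the answer is no in general. The classic counterexample (a well-known fact, not from the question) is $f(x)=x^2\sin(1/x)$ with $f(0)=0$: it is differentiable everywhere with $f'(0)=0$, but $f'(x)=2x\sin(1/x)-\cos(1/x)$ has no limit as $x\to 0$. A quick numeric illustration:

```python
import math

def fprime(x):
    # derivative of f(x) = x^2 sin(1/x) (with f(0) = 0), valid for x != 0
    return 2*x*math.sin(1/x) - math.cos(1/x)

# sample f' at x_k = 1/(k*pi): the values alternate near +1 and -1
# as x -> 0, while f'(0) = 0, so f' cannot be continuous at 0
samples = [fprime(1 / (k * math.pi)) for k in range(1, 7)]
```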

Wednesday, September 14, 2016

soft question - What is this pattern called?



Back-Story



I became interested in the patterns in multiplication tables for different base number systems a while ago. Specifically, the pattern made by the last digit of each number in the multiplication table. So, base 10 would look like this:



1|2|3|4|5|6|7|8|9|0
2|4|6|8|0|2|4|6|8|0
3|6|9|2|5|8|1|4|7|0
4|8|2|6|0|4|8|2|6|0
5|0|5|0|5|0|5|0|5|0
6|2|8|4|0|6|2|8|4|0
7|4|1|8|5|2|9|6|3|0
8|6|4|2|0|8|6|4|2|0
9|8|7|6|5|4|3|2|1|0
0|0|0|0|0|0|0|0|0|0


I thought it was interesting that when you move to number systems with different bases, the patterns don't follow the number; they follow its relative position in the number system. E.g., in base 12, the 6-row gives the pattern 6,0,6,0,6......
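The tables above are just multiplication modulo the base, so they can be generated for any base with a couple of lines (the half-base row alternating $b/2, 0, b/2, 0, \dots$ falls out automatically):

```python
def last_digit_table(base):
    # entry for (i, j) is the last digit of i*j written in the given base,
    # i.e. (i * j) mod base
    return [[(i * j) % base for j in range(1, base + 1)]
            for i in range(1, base + 1)]

row5_base10 = last_digit_table(10)[4]   # the 5-row in base 10: 5,0,5,0,...
row6_base12 = last_digit_table(12)[5]   # the 6-row in base 12: 6,0,6,0,...
```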



Images




I then realized, I could see the pattern better if I just assigned each number a color. I started with using 10 greyscale colors, with 0 being black and 9 being white. So now base 10 looks like this:



Base 10 in color form



Then, I figured that I really could use as many colors as I wanted to, and see if a larger pattern forms. Using all 256 greyscale colors, I came up with this image representing a base 256 multiplication table:



Base 256 greyscale



Or I could go from black to white to black, and smooth out the image:




Base 511 greyscale



Animation



I decided to animate the pattern to better see what was going on. To do this, I defined my 1-n color scale as [w,w,w,w,b,b,b,b,b,b,b,b....], where w is white and b is black. I would create a frame, shift my colors down one [b,w,w,w,w,b,b,b,b,b,b,b....], and create the next frame. I repeated this until the colors had fully cycled, and got this animated image.



Here's a site where you can modify the settings.



What is this pattern called?




My question is: what is this pattern called? I'm having a hard time finding anything about it. It seems to be a bunch of hyperbolic curves superimposed on each other. There are a bunch of "stars" at the corners of where you would divide the image into 4ths, 9ths, etc.



Any insight into this would be appreciated.


Answer



What you've discovered is essentially modular arithmetic. By looking at only the last digits of a product (in whatever base you're looking at at the moment), you're in effect saying 'I don't care about things that differ by multiples of $n$; I want to consider them as the same digit'. For instance, in base $7$, $5\times 2=10_{10}=13$ has the same last digit as $4\times 6=24_{10}=33$; we put both of these numbers into a bucket labeled '$[3]$', along with $3$, $23=17_{10}$, $43=31_{10}$, etc. In mathematics, when we talk about $31 \bmod 7$ we sometimes just mean the number $3$ itself (that is, the 'label' on this bucket that's between $0$ and $6$), but it's often convenient to think of it as representing the whole bucket: whatever number we pick out of the $[3]$ bucket, when we add it to a number in the $[2]$ bucket, we know that our result will be in the $[5]$ bucket, and when we multiply a number in the $[3]$ bucket by a number in the $[4]$ bucket, we know that our result will be in the $[5]$ bucket; etc. "Last digits" are just a convenient way of talking about these buckets (though things get a little sketchier when you talk about negative numbers: note that according to these rules, $-3$ goes into the $[4]$ bucket!).
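The bucket bookkeeping in this paragraph is exactly what Python's `%` operator computes: it always returns the bucket label in $\{0,\dots,n-1\}$, including for negative numbers. A quick check of the examples:

```python
n = 7

# 5*2 = 10 and 4*6 = 24 land in the same bucket [3]
same_bucket = (5 * 2) % n == (4 * 6) % n == 3

# adding a [3]-bucket number (31) to a [2]-bucket number gives a [5]-bucket number
add_rule = (31 + 2) % n == 5

# -3 goes into bucket [4], as noted at the end of the paragraph
neg = (-3) % n
```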



Meanwhile, the bands in your pattern are actually (pieces of) hyperbolas. Since $a\times (n-b)\equiv -(a\times b)\pmod n$ (the statement '$x=y\pmod n$' is a mathematical way of phrasing '$x$ and $y$ are in the same bucket in base $n$'; here, the difference between $a\times (n-b)$ and $-(a\times b)$ is $a\times n$), the far right hand side is essentially a reflection of the left, and similarly the bottom is a reflection of the top. If you rearrange the four quarters of your square so that the center of symmetry is (what was previously) the top left corner — i.e., take $A\ B\atop C\ D$ to $D\ C\atop B\ A$ — and then put the origin at the center, then the bands will exactly be (scaled versions) of the hyperbolae $xy=C$ (which are the hyperbolae $y^2-x^2=2C$ rotated by $45^\circ$). This happens because each 'cycle' of black-to-white or black-to-white-to-black will be separated by one multiple of $n$; e.g., the first transition between cycles occurs along the hyperbola $xy=n$; the second along the hyperbola $xy=2n$; etc.



(As for the moiré patterns, they're related to the usual way that such patterns are generated, and in particular they're somewhat related to aliasing near the Nyquist limit when the frequency between hyperbolic bands starts coming close to the frequency of the 'pixels' you're sampling with, but that's another story altogether...)



linear algebra - linearly independence of functions over $mathbb{R}$



I have been asked to prove that $\{\tan(ax)|a\in \mathbb{R}^+\}$ is linearly independent. I was wondering if there is a generic method/idea for proving linear independence of functions over $\mathbb{R}$.


Answer



Hint: $\tan \in C^{\infty}(\mathbf{R})$, so the Wronskian is available. Pick any finite number of distinct $a_i$'s from $\mathbf{R}^+$ and show the Wronskian of $\tan{a_1x},\dots,\tan{a_nx}$ is nonzero.
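As a concrete illustration of the hint for just two functions (with arbitrarily chosen $a_1=1$, $a_2=2$ and evaluation point $x=\pi/8$, both my choices), the $2\times 2$ Wronskian $W = \tan(a_1x)\,(\tan a_2x)' - \tan(a_2x)\,(\tan a_1x)'$ can be checked numerically:

```python
import math

def wronskian_tan(a1, a2, x):
    # W(x) = tan(a1 x) * d/dx tan(a2 x) - tan(a2 x) * d/dx tan(a1 x),
    # using d/dx tan(a x) = a * sec^2(a x)
    sec2 = lambda t: 1 / math.cos(t) ** 2
    return math.tan(a1*x) * a2 * sec2(a2*x) - math.tan(a2*x) * a1 * sec2(a1*x)

w = wronskian_tan(1, 2, math.pi / 8)   # nonzero, so tan(x), tan(2x) are independent
```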


exponentiation - Power Set and Empty Set question




I have a question regarding the set of functions resulting from a set raised to a power. I think I have part of the understanding correct; however, I'm having trouble interpreting $Y^{\emptyset}$. I have read other posts and reference them at the end. It's my understanding that $Y^{X}$ is as follows:



$Y^{X} = \{f_{0}, ..., f_{n}\}$



Where $f_{n}$ = $\{(x_{0}, y_{0}), ..., (x_{n}, y_{n})\}$ and each is a total single-valued function.



For example, all the functions resulting in set inclusion and exclusion (the Power Set $\mathcal{P}(X$)) is:



$X = \{True, False\}$

$Y = \{Admit, Exclude\}$



$Y^{X} = \{f_{0}, f_{1}, f_{2}, f_{3}\}$



$f_{0} = \{(True, Admit), (False, Admit)\}$
$f_{1} = \{(True, Exclude), (False, Exclude)\}$
$f_{2} = \{(True, Admit), (False, Exclude)\}$
$f_{3} = \{(True, Exclude), (False, Admit)\}$



$\mid Y^{X} \mid = card(Y^{X}) = \mid Y \mid^{\mid X \mid} = \mid \mathcal{P}(X) \mid = 4 $




A side note: $f_{2}$ and $f_{3}$ are surjective and injective functions and result in a dichotomy for the truth function.



$1^{n} = 1, n \gt 0$, results in a single function $f_{0} = \{(0, 0), (1, 0), ..., (n, 0)\}$. This seems intuitive.



I begin to get confused for the $Y^{\emptyset}$ case. From other posts here and Wiki, this is as follows:




  • Algebra and Set Theory Definition: $\emptyset^{\emptyset} = Y^{\emptyset} = 1$



    • Based on the "empty function", $Y^{\emptyset} = \{\emptyset\}$

    • $2^2 = 1 * 2 * 2 = 4, 2^1 = 1 * 2 = 2, 2^0 = 1$ : dividing by 2 each time, where 1 is implicit in the multiplication.


  • Math Analysis Definition: $\emptyset^{\emptyset} = undef$



In the above cases where $X = \emptyset$, I'm confused how there could be any function between the empty set $X$ and the base set $Y$. The empty set has no elements to map in a function. In this case, undef seems to fit better. Can anyone provide guidance here?



$\emptyset^{n} = 0$ where $n \gt 0$ makes sense to me, because there are no functions that map $n$ into $\emptyset$.




Perhaps it's because I'm looking at this as follows?



yn             *

y1 *

y0 *
x0 x1 ... xn



where the $*$ indicate an ordered pair, all of which make up a single function provided it is single-valued. The result of $Y^{X}$ is all of these unique functions.



UPDATE



Case (a) $0^{0} = 1$ because $x \notin \emptyset$ and therefore properties of a function are satisfied and $0 \subseteq X \times Y$. Case (b) $0^{1} = 0$ because properties of a function are not satisfied, $0 \in 1 \land y \notin 0$. Case (c) $1^{0} = 1$ because of Case (a). Case (d) $1^{1} = 1$ because $\emptyset \in 1 \land \emptyset \in 1$ and $\{(0, 0)\} \subseteq (1 \times 1)$.



Previous Post References: Prior Post
Prior Post


Answer




A map $f\colon X\to Y$ is a subset of $X\times Y$ with the following properties:




  1. for every $x\in X$, there exists $y\in Y$ with $(x,y)\in f$;

  2. for every $x\in X$ and every $y_1,y_2\in Y$, if $(x,y_1)\in f$ and $(x,y_2)\in f$, then $y_1=y_2$.



The first property ensures that every element of $X$ has an image, the second property ensures the image is uniquely defined.



If $X=\emptyset$, then there is a single subset of $\emptyset\times Y$, namely the empty set, which satisfies the properties above (because there is no way they can be false). You may be wondering what is mapped where: you have to assign an image to every element of $X$; if there's no element, you're already done, aren't you?




Thus the set of maps $Y^\emptyset$ is a singleton consisting of the empty set:
$$
Y^\emptyset=\{\emptyset\},
$$
which has cardinality $1$. Note that $Y$ has no special role here and can be any set.



The problem is with $Y=\emptyset$: here $\emptyset^X$ is empty whenever $X\ne\emptyset$, because there is nowhere to send the elements of $X$; but there's no problem when $X=\emptyset$ as well, because of the argument above. Thus
$$
|\emptyset^X|=\begin{cases} 1 & X=\emptyset \\[4px] 0 & X\ne\emptyset \end{cases}
$$



Facts regarding limits and indeterminate forms have nothing to do with this combinatorial framework.
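The whole discussion can be checked by brute force for small sets: a function $X\to Y$ corresponds to one choice of image per element of $X$, so enumerating those choices counts $|Y|^{|X|}$. With $X=\emptyset$ there is exactly one choice (the empty function), regardless of $Y$:

```python
from itertools import product

def functions(X, Y):
    # each function X -> Y corresponds to one choice of image per element of X
    X, Y = list(X), list(Y)
    return [dict(zip(X, images)) for images in product(Y, repeat=len(X))]

n_two = len(functions([True, False], ['Admit', 'Exclude']))  # |Y|^|X| = 2^2 = 4
n_empty_domain = len(functions([], ['a', 'b']))              # Y^emptyset: 1
n_empty_both = len(functions([], []))                        # emptyset^emptyset: 1
n_empty_codomain = len(functions([1, 2], []))                # emptyset^X, X nonempty: 0
```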


Tuesday, September 13, 2016

calculus - Evaluating $\lim_{x\to\frac{\pi}{4}}\frac{1-\tan x}{1-\sqrt{2}\sin x}$



How can I evaluate $$\lim_{x\to\frac{\pi}{4}}\frac{1-\tan x}{1-\sqrt{2}\sin x}$$ without L'Hôpital's rule? Using L'Hôpital's rule, it evaluates to $2$. Is there a way to do it without?


Answer



Multiply by the conjugate and use trig identities, factoring appropriately:
\begin{align*}

\lim_{x\to\frac{\pi}{4}}\frac{1-\tan x}{1-\sqrt{2}\sin x}
&= \lim_{x\to\frac{\pi}{4}}\frac{1-\tan x}{1-\sqrt{2}\sin x} \cdot \frac{1 + \sqrt{2}\sin x}{1 + \sqrt{2}\sin x} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{(1-\tan x)(1 + \sqrt{2}\sin x)}{1 - 2\sin^2 x} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{(1-\frac{\sin x}{\cos x})(1 + \sqrt{2}\sin x)}{(1 - \sin^2 x) - \sin^2 x} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{(1-\frac{\sin x}{\cos x})(1 + \sqrt{2}\sin x)}{\cos^2 x - \sin^2 x} \cdot \frac{\cos x}{\cos x} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{(\cos x - \sin x)(1 + \sqrt{2}\sin x)}{\cos x(\cos x - \sin x)(\cos x + \sin x)} \\
&= \lim_{x\to\frac{\pi}{4}}\frac{1 + \sqrt{2}\sin x}{\cos x(\cos x + \sin x)} \\
&= \frac{1 + \sqrt{2}\sin \frac{\pi}{4}}{\cos \frac{\pi}{4}(\cos \frac{\pi}{4} + \sin \frac{\pi}{4})} \\
&= \frac{1 + \sqrt{2}(\frac{1}{\sqrt 2})}{\frac{1}{\sqrt 2}(\frac{1}{\sqrt 2} + \frac{1}{\sqrt 2})}
= \frac{1 + 1}{\frac{1}{\sqrt 2}(\frac{2}{\sqrt 2})}

= \frac{2}{2/2} = 2
\end{align*}
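A quick numeric sanity check of the value $2$, evaluating the original expression at points approaching $\pi/4$ from both sides:

```python
import math

def f(x):
    return (1 - math.tan(x)) / (1 - math.sqrt(2) * math.sin(x))

# the quotient is 0/0 at x = pi/4 exactly, but nearby values tend to 2
vals = [f(math.pi/4 + eps) for eps in (1e-3, -1e-3, 1e-5, -1e-5)]
```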


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...