Sunday, July 31, 2016

analysis - Why is \Gamma\left(\frac12\right)=\sqrt\pi ?

It seems as if no one has asked this here before, unless I don't know how to search.



The Gamma function is
\Gamma(\alpha)=\int_0^\infty x^{\alpha-1}e^{-x}\,dx.
Why is
\Gamma\left(\tfrac12\right)=\sqrt\pi\ ?
(I'll post my own answer, but I know there are many ways to show this, so post your own!)
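For reference alongside the answers, the most common route (a sketch) substitutes x=t^2 to reduce \Gamma(1/2) to the Gaussian integral:

```latex
\Gamma\left(\tfrac12\right)
  = \int_0^\infty x^{-1/2}e^{-x}\,dx
  \overset{x=t^2}{=} \int_0^\infty t^{-1}e^{-t^2}\,2t\,dt
  = 2\int_0^\infty e^{-t^2}\,dt
  = \sqrt{\pi},
```

using the classical value \int_{-\infty}^\infty e^{-t^2}\,dt=\sqrt\pi of the Gaussian integral.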

analysis - How to show \mathrm{supp}(f*g)\subseteq\mathrm{supp}(f)+\mathrm{supp}(g)?


Let f,g\in C_0(\mathbb{R}^n), where C_0(\mathbb{R}^n) is the set of all continuous functions on \mathbb{R}^n with compact support. In this case (f*g)(x)=\int_{\mathbb{R}^n}f(x-y)g(y)\, dy is well defined.


How can I show \mathrm{supp}(f*g)\subseteq\mathrm{supp}(f)+\mathrm{supp}(g)?


This should be easy but I can't prove it.


I tried to proceed by contradiction as follows: Let x\in\mathrm{supp}(f*g). If x\notin\mathrm{supp}(f)+\mathrm{supp}(g), then (x-\mathrm{supp}(f))\cap\mathrm{supp}(g)=\emptyset. This should give me a contradiction but I can't see it.


Answer



If (f*g)(x)\neq0 then \int_{\mathbb{R}^n}f(x-y)g(y)\,dy\neq0, so there exists y\in\mathbb{R}^n such that f(x-y)g(y)\neq0, hence g(y)\neq0 and f(x-y)\neq0. Take z=x-y; then x=z+y with f(z)\neq0 and g(y)\neq0. Now we get \{f*g\neq0\}\subseteq\{f\neq0\}+\{g\neq0\}\subseteq\mathrm{supp}(f)+\mathrm{supp}(g). Since \mathrm{supp}(f)+\mathrm{supp}(g) is a sum of two compact sets, it is compact, hence closed, so passing to the closure on the left gives \mathrm{supp}(f*g)\subseteq\mathrm{supp}(f)+\mathrm{supp}(g).
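A discrete analogue of this inclusion is easy to check numerically; the sketch below (assuming NumPy is available) convolves two finitely supported sequences and verifies that the support of the result lies in the sumset of the supports.

```python
import numpy as np

# Two finitely supported "functions" on the integers.
f = np.array([0.0, 1.0, -2.0, 0.0, 0.0])
g = np.array([0.0, 0.0, 3.0, 0.0])

conv = np.convolve(f, g)  # discrete convolution, length len(f)+len(g)-1

def supp(a):
    """Indices where the sequence is nonzero."""
    return {i for i, v in enumerate(a) if v != 0}

sumset = {i + j for i in supp(f) for j in supp(g)}
print(supp(conv) <= sumset)  # True: supp(f*g) lies inside supp(f)+supp(g)
```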


calculus - How to show that \sqrt x grows faster than \ln x.



So I have the limit \lim_{x\to\infty}\frac{3\ln{x}}{\sqrt{x}}, and I want to motivate why (3\ln{x}/\sqrt{x})\rightarrow0 as x\rightarrow\infty. I came up with two possibilities:




  1. Algebraically it follows that \frac{3\ln{x}}{\sqrt{x}}=\frac{3\ln{x}}{\frac{x}{\sqrt{x}}}=\frac{3\sqrt{x}\ln{x}}{x}=3\sqrt{x}\cdot\frac{\ln{x}}{x},

    Now since the last factor is a standard limit equal to zero as x approaches infinity, the limit of the entire thing should be 0. However, isn't it a problem that \sqrt{x}\rightarrow\infty as x\rightarrow \infty gives us the indeterminate form \infty\cdot 0?


  2. One can, without having to do the arithmetic above, directly motivate that the function f_1:x\rightarrow \sqrt{x} increases faster than the function f_2:x\rightarrow\ln{x}. Is this motivation sufficient? And, is the proof below correct?




We have that D(f_1)=\frac{1}{2\sqrt{x}} and D(f_2)=\frac{1}{x}. In order to compare these two derivatives, we have to look at the interval (0,\infty). Since D(f_1)\geq D(f_2) for x\geq4, and f_1(4)=2>\ln 4=f_2(4), it follows that f_1>f_2 for x\geq4.


Answer




  1. This is a standard result from high school

  2. If you nevertheless want to deduce it from the limit of \dfrac{\ln x}x, use the properties of logarithm:
    \frac{\ln x}{\sqrt x}=\frac{2\ln(\sqrt x)}{\sqrt x}\xrightarrow[\sqrt x\to\infty]{}2\cdot 0=0



sequences and series - Evaluate the sum S=\frac{1}{4}+\frac{1\cdot3}{4\cdot6}+\frac{1\cdot3\cdot5}{4\cdot6\cdot8}+\cdots

Evaluate the Sum




S=\frac{1}{4}+\frac{1\cdot 3}{4\cdot 6}+\frac{1\cdot 3\cdot 5}{4\cdot 6\cdot 8}+\cdots





My try: We have the n-th term as



T_n=\frac{1\cdot 3\cdot 5 \cdots (2n-1)}{4\cdot 6\cdot 8 \cdots (2n+2)} \implies



T_n=\frac{1\cdot 3\cdot 5 \cdots (2n-1)}{2^n \times (n+1)!}



T_n=\frac{(2n)!}{4^n \times n! \times (n+1)!}




T_n=\frac{\binom{2n}{n-1}}{n \times 4^n}



Any clue here?
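Not an answer to the combinatorial question, but the closed form T_n=\binom{2n}{n-1}/(n\,4^n) equals C_n/4^n with C_n the n-th Catalan number, which invites a numerical check of the partial sums. A sketch (the recurrence T_{n+1}=T_n\cdot\frac{2n+1}{2(n+2)} follows from C_{n+1}/C_n=\frac{2(2n+1)}{n+2}):

```python
from math import comb

def partial_sum(N):
    """Sum the first N terms of S using the term recurrence."""
    t, s = 0.25, 0.0  # T_1 = 1/4
    for n in range(1, N + 1):
        s += t
        t *= (2 * n + 1) / (2 * (n + 2))
    return s

# Sanity check of the recurrence against the closed form for small n:
assert abs(partial_sum(3) - sum(comb(2*n, n-1) / (n * 4**n) for n in (1, 2, 3))) < 1e-12

print(partial_sum(100_000))  # close to 1; the tail decays like 2/sqrt(pi*N)
```

The partial sums creep up toward 1, consistent with the Catalan generating function \sum_{n\ge0}C_nx^n=\frac{1-\sqrt{1-4x}}{2x} evaluated at x=1/4, which gives S=2-1=1.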

Friday, July 29, 2016

radicals - Can you get any irrational number using square roots?



Given an irrational number, is it possible to represent it using only rational numbers and square roots (or any root, if that makes a difference)?



That is, can you define the irrational numbers in square roots, or is it something much deeper than that? Can pi be represented with square roots?



Answer



The smallest set of numbers closed under the ordinary arithmetic operations and square roots is the set of constructible numbers. The number \sqrt[3]{2} is not constructible, and this was one of the famous Greek problems: the duplication of the cube.



If you allow roots of all orders, then you're talking about equations that can be solved by radicals. Galois theory explains which equations can be solved in this way. In particular, x^5-x+1=0 cannot. But it clearly has a real root.
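The last claim is easy to verify numerically; a sketch using bisection on x^5-x+1, which changes sign between -2 and 0:

```python
def f(x):
    return x**5 - x + 1

# f(-2) = -29 < 0 and f(0) = 1 > 0, so bisection homes in on a real root.
lo, hi = -2.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(lo)  # approximately -1.1673: a real root, yet not expressible in radicals
```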


Extension of the additive Cauchy functional equation


Let f\colon (0,\alpha)\to \def\R{\mathbf R}\R satisfy f(x + y)=f(x)+f(y) for all x,y,x + y \in (0,\alpha), where \alpha is a positive real number. Show that there exists an additive function A \colon \R \to \R such that A(x) = f(x) for all x \in (0, \alpha). Simply put, I want to define a function A in a specific form as an extension of the function f, which satisfies the additive functional equation. I tried to define the function A.


Answer



Let x > 0. Choose n \in \def\N{\mathbf N}\N with \frac xn < \alpha. Define A(x) := nf(\frac xn). Note that this is well-defined: If m\in \N is another natural number such that \frac xm < \alpha, we have \begin{align*} mf\left(\frac xm\right) &= mf\left(\sum_{k=1}^n \frac x{mn}\right)\\ &= m\sum_{k=1}^n f\left(\frac x{mn}\right)\\ &= \sum_{l=1}^m n f\left(\frac x{mn}\right)\\ &= nf\left(\sum_{l=1}^m \frac x{mn}\right)\\ &= nf\left(\frac x{n}\right). \end{align*} For x < 0 choose n \in \N with \frac xn > -\alpha and define A(x) := -nf(-\frac xn); finally, let A(0) = 0. Then A is an extension of f. To show that it is additive, let x,y \in \def\R{\mathbf R}\R. Choose n \in \N such that \frac xn, \frac yn, \frac{x+y}n \in (-\alpha, \alpha). We have, if x,y \ge 0: \begin{align*} A(x+y) &= nf\left(\frac{x+y}n\right)\\ &= nf\left(\frac xn\right) + nf\left(\frac yn\right)\\ &= A(x) + A(y) \end{align*} If both x,y \le 0, we argue along the same lines. Now suppose x \ge 0, y \le 0, x+y \ge 0. We have A(y) = -A(-y) by definition of A. Hence \begin{align*} -A(y) + A(x+y) &= A(-y) + A(x+y)\\ &= A(-y+x+y)\\ &= A(x). \end{align*} If x \ge 0, y \le 0, x+y \le 0, we have -x \le 0 and \begin{align*} -A(x) + A(x+y) &= A(-x) + A(x+y)\\ &= A(y) \end{align*}



trigonometry - Converting a sum of trig functions into a product




Given,
\cos{\frac{x}{2}} +\sin{(3x)} + \sqrt{3}\left(\sin\frac{x}{2} + \cos{(3x)}\right)
How can we write this as a product?



Some things I have tried:




  • Grouping like arguments with each other. Wolfram Alpha gives \cos{\frac{x}{2}} + \sqrt{3}\sin{\frac{x}{2}} = 2\sin{\left(\frac{x}{2} + \frac{\pi}{6}\right)} but I don't know how to derive that myself or do a similar thing with the 3x.

  • Write 3x as 6\frac{x}{2} and then using the triple and double angle formulas, but that is much too tedious and there has to be a more efficient way.

  • Rewriting \sqrt{3} as 2\sin{\frac{\pi}{3}} and then expanding and trying to use the product-to-sum formulas, and then finally grouping like terms and then using the sum-to-product formulas, but that didn't work either.




I feel like I'm overthinking this, so any help or insights would be useful.


Answer



\cos{\frac{x}{2}} +\sin(3x) + \sqrt{3}\left(\sin\frac{x}{2} + \cos(3x)\right)



=\cos{\frac{x}{2}} + \sqrt{3}\sin\frac{x}{2} +\sin(3x) + \sqrt{3}\cos(3x)



=2\left(\frac{1}{2}\cos\frac{x}{2} + \frac{\sqrt{3}}{2}\sin\frac{x}{2} +\frac{1}{2}\sin(3x) + \frac{\sqrt{3}}{2}\cos(3x)\right)



Note that \frac{1}{2}=\sin\frac{\pi}{6} and \frac{\sqrt{3}}{2}=\cos\frac{\pi}{6} so:




=2\left(\sin\frac{\pi}{6}\cos\frac{x}{2} + \cos\frac{\pi}{6}\sin\frac{x}{2} +\sin\frac{\pi}{6}\sin(3x) + \cos\frac{\pi}{6}\cos(3x)\right)



Then using Addition Theorem:



=2\left(\sin\left(\frac{x}{2}+\frac{\pi}{6}\right)+\cos\left(3x-\frac{\pi}{6}\right)\right)



=2\left(\sin\left(\frac{x}{2}+\frac{\pi}{6}\right)+\sin\left(3x+\frac{\pi}{3}\right)\right)



Then using Sums to Products:




=4\left(\sin\left(\frac{\frac{x}{2}+\frac{\pi}{6}+3x+\frac{\pi}{3}}{2}\right)\cos\left(\frac{\frac{x}{2}+\frac{\pi}{6}-3x-\frac{\pi}{3}}{2}\right)\right)



=4\sin\left(\frac{7x+\pi}{4}\right)\cos\left(\frac{-15x-\pi}{12}\right)
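A quick numerical spot-check that the final product agrees with the original sum (a sketch):

```python
import math

def original(x):
    return (math.cos(x / 2) + math.sin(3 * x)
            + math.sqrt(3) * (math.sin(x / 2) + math.cos(3 * x)))

def product(x):
    return 4 * math.sin((7 * x + math.pi) / 4) * math.cos((-15 * x - math.pi) / 12)

# The two expressions agree to machine precision at arbitrary sample points.
print(all(abs(original(x) - product(x)) < 1e-12 for x in (0.3, 1.7, -2.5)))  # True
```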


Thursday, July 28, 2016

elementary set theory - Give an explicit mapping to show the same cardinality...




[image: the problem asks for an explicit mapping showing that the positive integers divisible by 5 and the triangle numbers have the same cardinality]



So I know that I need to form a bijection, I just need the 2 functions for the different sets. I know that the function for all positive integers divisible by 5 is f(x) = 5x, however I have no clue how to find the function that maps the triangle numbers. I've been attempting trial and error, with no success. Is there any process that would make finding the function easier? I've been comparing the differences of each interval, but still nothing. Any hints to the right track would be great.



thanks


Answer



I think that we can just look at how the triangle numbers are constructed and how the positive integers divisible by 5 are constructed and we can see a bijection naturally.



The n^{\mathrm{th}} triangle number T_n is given as T_n = 1+2+3+\dotsb+n = \frac{n(n+1)}{2}. This is one of those series that you will just have to have seen enough times (in a number theory class or discrete math class) in order to recognize consistently.




The n^{\mathrm{th}} multiple of 5 is given by 5n.



It looks like it will be easiest to take a bijection from the multiples of 5 to the triangles numbers, so we can say that we need a function f that does the following:
f(5n) = \frac{n(n+1)}{2}
Having 5n as an argument to f is weird so we can just do a little substitution, letting 5n \to x.
f(5n) = \frac{n(n+1)}{2} \quad\implies\quad f(x) = \frac{\frac{x}{5}(\frac{x}{5}+1)}{2}
And now f is a function that maps multiples of 5 to triangle numbers.
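The final map can be sanity-checked directly (a sketch):

```python
def f(x):
    """Map the multiple of 5 given by x = 5n to the n-th triangle number n(n+1)/2."""
    n = x // 5
    return n * (n + 1) // 2

print([f(5 * n) for n in range(1, 6)])  # [1, 3, 6, 10, 15]
```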


divisibility - The problem of ten divided by three








I was thinking about this the other day...



if 10/3 = 3.33333... (series)



why doesn't 3.333... * 3 = 10? It can never be 10; it's always almost 10, or 9.9999... (repeating).



I have read about this, but no one seems to have an answer; yet we all accept that it is true, even though the statement looks fundamentally false to me.

number theory - Is \sqrt1+\sqrt2+\dots+\sqrt n ever an integer?




Related: Can a sum of square roots be an integer?




Except for the obvious cases n=0,1, are there any values of n such that \sum_{k=1}^n\sqrt k is an integer? How does one even approach such a problem? (This is not homework - just a problem I thought up.)


Answer




No, it is not an integer.



Let p_1=2<p_2<\cdots<p_k be the primes up to n, and let K=\Bbb Q(\sqrt{p_1},\ldots,\sqrt{p_k}), an extension of \Bbb Q of degree 2^k. The Galois group G is an elementary abelian 2-group. An automorphism \sigma\in G is fully determined by a sequence of k signs s_i\in\{+1,-1\}, \sigma(\sqrt{p_i})=s_i\sqrt{p_i}, i=1,2,\ldots,k.



See this answer/question for a proof of the dimension of this field extension. There are then several ways of getting the Galois theoretic claims. For example we can view K as a compositum of linearly disjoint quadratic Galois extensions, or we can use the basis given there to verify that all the above maps \sigma are distinct automorphisms.



For the sum S_n=\sum_{\ell=1}^n\sqrt{\ell}\in K to be a rational number, it has to be fixed by all the automorphisms in G. This is one of the basic ideas of Galois correspondence. But clearly \sigma(S_n)\neq S_n when \sigma is the automorphism with \sigma(\sqrt2)=-\sqrt2 that fixes the other \sqrt{p_i}: expanding each \sqrt\ell in the basis mentioned above, the coefficient of \sqrt2 in S_n is positive, and \sigma flips its sign. Hence S_n is not rational.

Wednesday, July 27, 2016

calculus - Integral \int_0^{\infty} \frac{\ln \cos^2 x}{x^2}\,dx=-\pi


I:=\int_0^{\infty} \frac{\ln \cos^2 x}{x^2}dx=-\pi. Using 2\cos^2 x=1+\cos 2x failed me because I ran into two divergent integrals after using \ln(ab)=\ln a + \ln b since I obtained \int_0^\infty x^{-2}dx and \int_0^\infty (1+\cos^2 x)dx which both diverge. Perhaps we should try a complex analysis approach? I also tried writing I(\alpha)=\int_0^\infty \frac{\ln \cos^2 \alpha \,x}{x^2}dx and obtained -\frac{dI(\alpha)}{d\alpha}=2\int_0^\infty \frac{\tan \alpha x}{x}dx=\int_{-\infty}^\infty\frac{\tan \alpha x}{x}dx. Taking a second derivative I''(\alpha)=\int_{-\infty}^\infty {\sec^2 (\alpha x)}\, dx Random Variable pointed out how to continue from the integral after the 1st derivative, but is it possible to work with this integral \sec^2 \alpha x? Thanks


Answer



Let the desired integral be denoted by I. Note that \eqalign{ 2I&=\int_{-\infty}^\infty\frac{\ln(\cos^2x)}{x^2}dx= \sum_{n=-\infty}^{+\infty}\left(\int_{n\pi}^{(n+1)\pi}\frac{\ln(\cos^2x)}{x^2}dx\right)\cr &=\sum_{n=-\infty}^{+\infty}\left(\int_{0}^{\pi}\frac{\ln(\cos^2x)}{(x+n\pi)^2}dx\right) \cr &=\int_{0}^{\pi}\left(\sum_{n=-\infty}^{+\infty} \frac{1}{(x+n\pi)^2}\right)\ln(\cos^2x)dx \cr &=\int_{0}^{\pi}\frac{\ln(\cos^2x)}{\sin^2x}dx \cr } where the interchange of the signs of integration and summation is justified by the fact that the integrands are all negative, and we used the well-known expansion: \sum_{n=-\infty}^{+\infty} \frac{1}{(x+n\pi)^2}=\frac{1}{\sin^2x}.\tag{1} Now using the symmetry of the integrand arround the line x=\pi/2, we conclude that \eqalign{ I&=\int_{0}^{\pi/2}\frac{\ln(\cos^2x)}{\sin^2x}dx\cr &=\Big[-\cot(x)\ln(\cos^2x)\Big]_{0}^{\pi/2}+\int_0^{\pi/2}\cot(x)\frac{-2\cos x\sin x}{\cos^2x}dx\cr &=0-2\int_0^{\pi/2}dx=-\pi. } and the announced conclusion follows.\qquad\square


Remark: Here is a proof of (1) that does not use residue theorem. Consider \alpha\in(0,1), and let f_\alpha be the 2\pi-periodic function that coincides with x\mapsto e^{i\alpha x} on the interval (-\pi,\pi). It is easy to check that the exponential Fourier coefficients of f_\alpha are given by C_n(f_\alpha)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f_\alpha(x)e^{-inx}dx=\sin(\alpha\pi)\frac{(-1)^n}{\alpha \pi-n\pi} So, by Parseval's formula we have \sum_{n\in\Bbb{Z}}\vert C_n(f_\alpha)\vert^2=\frac{1}{2\pi}\int_{-\pi}^\pi\vert f_\alpha(x)\vert^2dx That is \sin^2(\pi\alpha) \sum_{n\in\Bbb{Z}}\frac{1}{(\alpha\pi-n\pi)^2}=1 and we get (1) by setting x=\alpha\pi\in(0,\pi).
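The expansion (1) can also be watched numerically; a sketch truncating the sum at |n| \le N (the discarded tail is O(1/N)):

```python
import math

# Truncated check of (1): sum over |n| <= N of 1/(x + n*pi)^2 approaches 1/sin(x)^2.
def partial(x, N=20000):
    return sum(1.0 / (x + n * math.pi) ** 2 for n in range(-N, N + 1))

x = 0.7
print(abs(partial(x) - 1.0 / math.sin(x) ** 2) < 1e-3)  # True
```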


trigonometry - Proof of an equality involving cosine: \sqrt{2 + \sqrt{2 + \cdots + \sqrt{2 + \sqrt{2}}}} = 2\cos (\pi/2^{n+1})




so I stumbled upon this equation/formula, and I have no idea how to prove it. I don't know how should I approach it:
\sqrt{2 + \sqrt{2 + \cdots + \sqrt{2 + \sqrt{\vphantom{\large A}2\,}\,}\,}\,}\ =\ 2\cos\left(\vphantom{\Large A}\pi \over 2^{n + 1}\right)



where n\in\mathbb N and the square root sign appears n-times.



I thought about using sequences and limits, to express the LHS as a recurrence relation but I didn't get anywhere.




edit: Solved, thanks for your answers and comments.


Answer



Hint:



Use induction and the half-angle formula for cosine.



Solution:



For n=1, the claim is true, since \cos(\pi/4)=\sqrt{2}/2. By the half-angle formula 2\cos(x/2)=\sqrt{2+2\cos(x)}

Therefore
\sqrt{2+\sqrt{2+\cdots+\sqrt{2+\sqrt{2}}}}=\sqrt{2+2\cos\left(\frac{\pi}{2^n}\right)}=2\cos\left(\frac{\pi}{2^{n+1}}\right)
where in the left square root expressions there are n square roots and in the first equality we have used the induction hypothesis that the claim holds for n-1.
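The identity is also easy to spot-check numerically (a sketch):

```python
import math

def nested(n):
    """n nested square roots: sqrt(2 + sqrt(2 + ... + sqrt(2)))."""
    v = math.sqrt(2)
    for _ in range(n - 1):
        v = math.sqrt(2 + v)
    return v

print(all(abs(nested(n) - 2 * math.cos(math.pi / 2 ** (n + 1))) < 1e-12
          for n in (1, 2, 5, 10)))  # True
```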


integration - Evaluating the complex integral \int_{-\infty}^\infty \frac{\cos(x)}{x+i}\,dx


I stumbled upon this particular integral a few minutes ago, and I have no idea how to go about it :


\int_{-\infty}^\infty \frac{\cos(x)}{x+i}\,dx


I looked up on the internet and I managed to find out that something called residue should be taken into account when dealing with such integrands, but I'm clueless at this point in matters of complex analysis. As context, a colleague of mine suggested this could be an interesting exercise.


Any ideas ?


Answer



If you are not aware of the residue theorem, things are harder, but not impossible.


Step 1. The integral is converging in virtue of Dirichlet's test, since \cos x has a bounded primitive and \left|\frac{1}{x+i}\right| decreases to zero as |x|\to +\infty;


Step 2. By symmetry (the cosine function is even) we have: I = \int_{\mathbb{R}}\frac{\cos x}{x+i}\,dx = \int_{\mathbb{R}}\frac{(x-i)\cos x}{1+x^2}\,dx = -2i\int_{0}^{+\infty}\frac{\cos x}{1+x^2}\,dx, where we multiplied numerator and denominator by x-i; the term \frac{x\cos x}{1+x^2} is odd and integrates to zero, while \frac{\cos x}{1+x^2} is even. So we just need to compute a real integral;


Step 3. We may compute the Fourier cosine transform of e^{-|x|} through integration by parts. That gives that the CF of the Laplace distribution, by Fourier inversion, is everything we need to be able to state: \int_{0}^{+\infty}\frac{\cos x}{1+x^2}\,dx = \frac{\pi}{2e}.
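A crude numerical cross-check of Step 3 (a sketch: composite Simpson's rule on [0, 200]; the truncated tail is O(1/T^2) by integration by parts):

```python
import math

def simpson(f, a, b, n=200_000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

val = simpson(lambda x: math.cos(x) / (1 + x * x), 0.0, 200.0)
print(abs(val - math.pi / (2 * math.e)) < 1e-4)  # True
```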



trigonometry - How can we sum up sin and cos series when the angles are in arithmetic progression?



How can we sum up \sin and \cos series when the angles are in arithmetic progression? For example here is the sum of \cos series:



\sum_{k=0}^{n-1}\cos (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \cos \biggl( \frac{ 2 a + (n-1)\cdot d}{2}\biggr)



There is a slight difference in case of \sin, which is:
\sum_{k=0}^{n-1}\sin (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \sin\biggl( \frac{2 a + (n-1)\cdot d}{2}\biggr)



How do we prove the above two identities?



Answer



Let S = \sin{(a)} + \sin{(a+d)} + \cdots + \sin{(a+nd)} Now multiply both sides by \sin\frac{d}{2}. Then you have S \times \sin\Bigl(\frac{d}{2}\Bigr) = \sin{(a)}\sin\Bigl(\frac{d}{2}\Bigr) + \sin{(a+d)}\cdot\sin\Bigl(\frac{d}{2}\Bigr) + \cdots + \sin{(a+nd)}\cdot\sin\Bigl(\frac{d}{2}\Bigr)



Now, note that \sin(a)\sin\Bigl(\frac{d}{2}\Bigr) = \frac{1}{2} \cdot \biggl[ \cos\Bigl(a-\frac{d}{2}\Bigr) - \cos\Bigl(a+\frac{d}{2}\Bigr)\biggr] and \sin(a+d) \cdot \sin\Bigl(\frac{d}{2}\Bigr) = \frac{1}{2} \cdot \biggl[ \cos\Bigl(a + d -\frac{d}{2}\Bigr) - \cos\Bigl(a+d+\frac{d}{2}\Bigr) \biggr]



Then by doing the same thing you will have some terms cancelled out. You can easily see which terms are going to get cancelled. Proceed and you should be able to get the formula.
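Before (or after) carrying out the telescoping, the identity itself can be spot-checked numerically; a sketch using the question's convention k=0,\dots,n-1:

```python
import math

def lhs(a, d, n):
    return sum(math.sin(a + k * d) for k in range(n))

def rhs(a, d, n):
    return (math.sin(n * d / 2) / math.sin(d / 2)
            * math.sin((2 * a + (n - 1) * d) / 2))

print(abs(lhs(0.4, 0.9, 7) - rhs(0.4, 0.9, 7)) < 1e-12)  # True
```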



I tried this after seeing another post, where this is worked out for the case d=1.





Fourier series without Fourier analysis techniques



It is known that one can sometimes derive certain Fourier series without alluding to the methods of Fourier analysis. It is often done using complex analysis. There is a way of deriving the formula
\sum_{k = 1}^\infty \frac{\sin(kz)}{k} = \frac{\pi - z}{2}
using complex analysis for some z. In other words, it can be shown that \sum_{k = 1}^\infty \sin(kz)/k converges to (\pi - z)/2 for some z. My question is: How can we show that the formula above holds for z \in (0, 2\pi) without using Fourier analysis?



Edit:




Using blue's suggestion I concocted the following proof.



Let \log z be the principal value of the logarithm (with the branch cut along the negative real axis). Recall that e^{iz} - e^{-iz} = 2i\sin z and \text{Arg}(z^*) = -\text{Arg}(z). Furthermore, we have
\begin{align*} \log(1 - e^{iz}) &= -\sum_{k = 1}^\infty \frac{e^{ikz}}{k} \\ &= \log|1 - e^{iz}| + i\theta \\ &= \log\left|2\sin\frac{z}{2}\right| + i\theta, \end{align*}
where \theta = \text{Arg}(1 - e^{iz}). Now, write 1 - e^{iz} = 1 - \cos z - i\sin z and let z \in (0, \pi). Then
\tan \theta = \frac{-\sin z}{1 - \cos z} = -\frac{\cos(z/2)}{\sin(z/2)} = \tan\left(\frac{z}{2} - \frac{\pi}{2}\right).

Hence, \theta = z/2 - \pi/2. Moreover, since z \in (0, \pi), we have \theta \in (-\pi/2, 0), consistent with the imaginary part -\sin z of 1 - e^{iz} being negative there.
Equating the imaginary parts above (recall \theta = -\sum_{k=1}^\infty \sin(kz)/k from the series expansion) gives
\sum_{k = 1}^\infty \frac{\sin kz}{k} = -\theta = \frac{\pi - z}{2}, \quad z \in (0, \pi).
Lastly, at z = \pi both sides vanish, and for z \in (\pi, 2\pi) the 2\pi-periodicity of the series, together with the conjugate case z \in (-\pi, 0) (where the sum equals -(\pi + z)/2), again yields (\pi - z)/2. Hence the formula holds on all of (0, 2\pi).
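The claimed identity can also be spot-checked numerically (a sketch; the partial sums converge slowly but steadily away from the endpoints):

```python
import math

def partial(z, N=100_000):
    return sum(math.sin(k * z) / k for k in range(1, N + 1))

z = 1.3
print(abs(partial(z) - (math.pi - z) / 2) < 1e-3)  # True
```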


Answer



In order to make sense of sines with complex arguments, you need to decompose them into complex exponentials. After that notice you're working with two series, each a Taylor expansion of the natural logarithm function (written appropriately). Proceed from there.


calculus - Improper integral: \int_0^\infty \frac{\sin^4x}{x^2}\,dx




I have been trying to determine whether the following improper integral converges or diverges:
\int_0^\infty \frac{\sin^4x}{x^2}dx



I have parted it into two terms. The first term: \int_1^\infty \frac{\sin^4x}{x^2}dx converges (proven easily using the comparison test), but the second term:
\int_0^1 \frac{\sin^4x}{x^2}dx troubles me a lot. What could I do?


Answer



For the second term, you can re-write your function as \frac{\sin x}{x} \cdot \frac{\sin x}{x} \cdot \sin^2(x). Note that on [0,1], this function extends continuously: by the Fundamental Trig Limit, \frac{\sin x}{x}\to 1 as x\to 0, while \sin^2(x)\to 0, so defining f(0)=0 makes f continuous at x=0. But then any continuous function on a closed interval is integrable, so \int_0^1 \frac{\sin^4(x)}{x^2} converges. Hence the whole integral converges.


Proof That all Positive Irrational Square Roots Can be Raised to an Irrational Power to Get a Whole Number

Recently I have found out about a proof through a video.



That proof shows that an irrational number can be raised to an irrational power to get a rational number, but it only covers one case, so I wanted to see if I could prove it for all irrational square roots. This is what I came up with:




First take the square root of a positive integer, x.
\sqrt x Then apply the power for the square root of x: \sqrt x^{\sqrt x}. Some numbers might evaluate rationally here, if so stop. If not this can be continued to (\sqrt x^{\sqrt x})^{\sqrt x} = \sqrt x^{x}. Here if x is even the equation can be written as \sqrt x^{2n} = x^{n} where n is a positive integer, and the equation will evaluate to a whole number [Note 1]. If x is odd this can be continued to (\sqrt x^{x})^{\sqrt 2} = (\sqrt x^{\sqrt 2x}) [Note 2]. Then continue to raise it to the power of the square root of two once more. (\sqrt x^{x\sqrt 2})^{\sqrt 2} [Note 3]. This evaluates to \sqrt x^{2x} = x^x which must be a whole number!



Thank you for reading. Is this proof correct? Hopefully, I made no mistakes and everything here is right. I assume this has been proven before. If so, can I get a link and was my proof nice compared to the other one?



[Note 1]: You cannot say \sqrt x^{\sqrt x} could have evaluated to a whole number invalidating this proof. This case was already covered earlier.



[Note 2]: Because x is a positive integer, multiplying it by an irrational number will make it irrational. The proof is still valid.




[Note 3]: \sqrt 2^{\sqrt 2} is irrational (by the Gelfond-Schneider theorem).

Tuesday, July 26, 2016

Formula for the sequence 0,3,8,15,24 ...

Out of my own interest I've been practicing finding formulas for sequences, and I've been having trouble finding one for the nth term of this sequence.



0,3,8,15,24 ...



Clearly you add 3, 5, 7, 9, 11 ... to the previous number, but if anyone had some insight about how to express this in a formula, that would be much appreciated.
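For what it's worth, one natural candidate is that each term is one less than a perfect square (0=1^2-1, 3=2^2-1, 8=3^2-1, ...), which is easy to check:

```python
seq = [0, 3, 8, 15, 24]

# Candidate formula: the nth term (starting at n = 1) is n^2 - 1,
# consistent with the differences 3, 5, 7, 9 being consecutive odd numbers.
print(all(seq[n - 1] == n * n - 1 for n in range(1, 6)))  # True
```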

vector spaces - Linear independence and free basis?

How can I show whether the following vectors form a free basis of \mathbb Z^3?



(1,2,2), (-1,0,2), (2,-1,4)



Is a free basis the same as a normal basis and does the method for determining linear independence change when the vector space is \mathbb Z^3 rather than \mathbb R^3?
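Over \mathbb Z the usual determinant test changes slightly: the vectors form a free basis of \mathbb Z^3 exactly when the determinant is a unit of \mathbb Z, i.e. \pm 1, not merely nonzero as for linear independence over \mathbb R. A quick check by cofactor expansion (a sketch):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [(1, 2, 2), (-1, 0, 2), (2, -1, 4)]
print(det3(M))  # 20: nonzero, so linearly independent over Q,
                # but not +-1, so the vectors do not form a free basis of Z^3
```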

measure theory - Finding the norm of the operator M_g : L_p to L_p where g in L_{infty}




Suppose (X,\,\cal{A},\,\mu) is a measure space where \mu is a finite measure, and p \in (1,\,\infty). Say that we take some g \in L_{\infty}, and we define M_g : L_p \to L_p by M_g (f) = fg. Then I'd like to show that ||M_g|| = ||g||_{\infty}. I'm not absolutely sure this is true, but I think it is. I've shown that ||M_g|| \leq ||g||_{\infty}, which is pretty simple, but now I'd like to find a function f\in L_p such that \frac{||M_g (f)||_p}{||f||_p} = ||g||_{\infty}. Here's what I've got so far:



Let M=||g||_{\infty}, the essential supremum of |g|. Then let N be the subset of X on which |g(x)| < \frac{M}{2}. Now 0<\mu (N)<\mu(X)<\infty and 0<\mu(X\setminus N)<\infty too. These things follow from the definition of M and the fact that \mu is a finite measure. So define f:X \to \mathbb{R} as
f(x) = \begin{cases} 0 & x\in N \\ \frac{M}{|g(x)|(\mu(X \setminus N))^{\frac{1}{p}}} & x \in X \setminus N \end{cases} .




Then \begin{align} ||M_g (f)||_p ^p &= \int_{X\setminus N} \frac{M^p}{\mu(X \setminus N)} d\mu\\ &= M^p ,\,\text{ so} \\ ||M_g (f)||_p &= M. \end{align}



And we can easily show that f \in L_p because \frac{M}{|g(x)|} is not going to blow up when |g(x)| \geq \frac{M}{2}.



However, we don't have ||f||_p = 1, so we only have ||M_g (f)||_p = M, not \frac{||M_g (f)||_p}{||f||_p} = M as we require.



So the question is, can this example be tweaked easily to get a unit-normed f satisfying ||M_g (f)||_p = M, or is the whole thing off track?


Answer




With some small modifications, your proof can be converted into a working proof.



Instead of using |g(x)|<\frac M2, we use
|g(x)|<\frac Ma, where a>1 is arbitrary.
Then, the rest of the proof can be repeated as you did.



We have to calculate \|f\|_p:
\begin{aligned} \|f\|_p^p & = \int_{X\setminus N} \frac{M^p}{|g(x)|^p \mu(X\setminus N)} \mathrm d\mu \\ & \leq \int_{X\setminus N} \frac{M^p}{(M/a)^p \mu(X\setminus N)} \mathrm d\mu \\ & \leq \mu(X\setminus N) \frac{M^p}{(M/a)^p \mu(X\setminus N)} \\ & = a^p \end{aligned}



So \|f\|_p\leq a.




Then we have
\frac{\|M_g(f)\|_p}{\|f\|_p} = \frac{M}{\|f\|_p} \geq \frac{M}{a}.



Because a can be arbitrarily close to 1, this completes the proof.



Note that because of the \sup in the definition of the operator norm, you do not need to show that

\frac{\|M_g(f)\|_p}{\|f\|_p} = M,
only that you can find functions f such that
\frac{\|M_g(f)\|_p}{\|f\|_p}
is arbitrarily close to M.


algebra precalculus - Why does k^2+4k+4 become (k+2)^2?



I'm studying maths after a lot of years without having a look at it.



Currently I'm doing proof by induction, I have a doubt when doing an exercise.



If you have a look at this video at minute 5:15, [k^2+4k+4] becomes (k+2)^2 and I don't understand why.




Is there any rule or theorem that explains this?


Answer



You can see it from the area of a rectangle:



[image: a (k+2) by (k+2) square divided into regions of area k^2, 2k, 2k, and 4]



k^2+2k+2k+4=(k+2)(k+2)
(k^2+4k+4)=(k+2)^2
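A brute-force sanity check over a range of integers (a sketch):

```python
# (k+2)^2 expands to k^2 + 4k + 4 for every k, here checked on -100..100.
print(all((k + 2) ** 2 == k * k + 4 * k + 4 for k in range(-100, 101)))  # True
```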


sequences and series - Why \sum_{k=0}^{\infty} q^k = \frac{1}{1-q} when |q| < 1




Why is the infinite sum \sum_{k=0}^{\infty} q^k = \frac{1}{1-q} when |q| < 1?




I don't understand how the \frac{1}{1-q} got calculated. I am not a math expert so I am looking for an easy to understand explanation.


Answer



By definition you have
\sum_{k=0}^{+\infty}q^k=\lim_{n\to+\infty}\underbrace{\sum_{k=0}^{n}q^k}_{=:S_n}
Notice now that (1-q)S_n=(1-q)(1+q+q^2+\dots+q^n)=1-q^{n+1}; so dividing both sides by 1-q (in order to do this, you must be careful only to have 1-q\neq0, i.e. q\neq1) we immediately get
S_n=\frac{1-q^{n+1}}{1-q}.

If you now pass to the limit in the above expression, when |q|<1, it's clear that
S_n\stackrel{n\to+\infty}{\longrightarrow}\frac1{1-q}\;\;,
as requested. To get this last result, you should be confident with limits, and know that \lim_{n\to+\infty}q^n=0 when |q|<1.
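The telescoping argument above is easy to watch numerically (a sketch):

```python
def geom_partial(q, n):
    """S_n = sum_{k=0}^{n} q^k, computed directly."""
    return sum(q ** k for k in range(n + 1))

q = 0.5
# S_n = (1 - q^(n+1)) / (1 - q), which tends to 1 / (1 - q) = 2 as n grows.
print(abs(geom_partial(q, 60) - 1 / (1 - q)) < 1e-12)  # True
```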


calculus - Maclaurin polynomial of tan(x)



The method used to find the Maclaurin polynomial of sin(x), cos(x), and e^x requires finding several derivatives of the function. However, you can only take a couple derivatives of tan(x) before it becomes unbearable to calculate.




Is there a relatively easy way to find the Maclaurin polynomial of tan(x)?



I considered using tan(x)=sin(x)/cos(x) somehow, but I couldn't figure out how.


Answer



Long division of series.



\matrix{ & x + \frac{x^3}{3} + \frac{2 x^5}{15} + \dots \cr 1 - \frac{x^2}{2} + \frac{x^4}{24} + \ldots & ) \overline{x - \frac{x^3}{6} + \frac{x^5}{120} + \dots}\cr & x - \frac{x^3}{2} + \frac{x^5}{24} + \dots\cr & --------\cr & \frac{x^3}{3} - \frac{x^5}{30} + \dots\cr & \frac{x^3}{3} - \frac{x^5}{6} + \dots\cr & ------\cr &\frac{2 x^5}{15} + \dots \cr &\frac{2 x^5}{15} + \dots \cr & ----}
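The quotient can be cross-checked against math.tan (a sketch; the next term of the series is 17x^7/315, which bounds the error for small x):

```python
import math

def tan_poly(x):
    """First three terms of the Maclaurin series of tan, from the long division."""
    return x + x**3 / 3 + 2 * x**5 / 15

x = 0.1
print(abs(tan_poly(x) - math.tan(x)) < 1e-7)  # True; the error is about 17*x**7/315
```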


Monday, July 25, 2016

elementary number theory - Prove summations are equal



Prove that:



\sum_{r=1}^{p^n} \frac{p^n}{gcd(p^n,r)} = \sum_{k=0}^{2n} (-1)^k p^{2n-k} = p^{2n} - p^ {2n-1} + p^{2n-2} - ... + p^{2n-2n}



I'm not exactly sure how to do this unless I can say:




Assume \sum_{r=1}^{p^n} \frac{p^n}{gcd(p^n,r)} = \sum_{k=0}^{2n} (-1)^k p^{2n-k}



and show by induction that the sums equal each other and show the inductive step that you can take the expanded sum part with n+1 and get to \sum_{k=0}^{2n+1} (-1)^k p^{2(n+1)-k}



However, I'm not even sure that I am allowed to do this.. Any suggestion in how to prove this? I'm not looking for a whole proof just a push in the direction of how to solve this.


Answer



First note that \gcd(p^n,r) for r=\overline{1,p^n} must be of the form p^k with k\in\overline{0,n}, and for each k there are exactly \phi(p^{n-k}) numbers with that property in \overline{1,p^n}, so:
\sum_{r=1}^{p^n} \frac{p^n}{gcd(p^n,r)} = \sum_{k=0}^{n} \frac{p^n}{p^k}\phi(p^{n-k}) = 1+\sum_{k=0}^{n-1} \frac{p^n}{p^k}p^{n-k}(1-\frac{1}{p}) = 1+\sum_{k=0}^{n-1} p^{2(n-k)}(1-\frac{1}{p}) = \sum_{k=0}^{2n} (-1)^k p^{2n-k} \square
For the last equality:
\sum_{k=0}^{n-1} p^{2(n-k)}(1-\frac{1}{p}) = \sum_{k=0}^{n-1} (p^{2(n-k)}-p^{2(n-k)-1})= \sum_{k=0}^{n-1} ((-1)^{2(n-k)}p^{2(n-k)}+(-1)^{2(n-k)-1}p^{2(n-k)-1}) = \sum_{i=0}^{2n-1} (-1)^i p^{2n-i}
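Both sides of the claimed equality can be compared directly for small prime powers (a sketch):

```python
from math import gcd

def lhs(p, n):
    pn = p ** n
    return sum(pn // gcd(pn, r) for r in range(1, pn + 1))

def rhs(p, n):
    return sum((-1) ** k * p ** (2 * n - k) for k in range(2 * n + 1))

print(all(lhs(p, n) == rhs(p, n) for p in (2, 3, 5) for n in (1, 2, 3)))  # True
```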



limits - Show that \lim\limits_{n\to\infty}\frac1{n}\sum\limits_{k=1}^{\infty}\left\lfloor\frac{n}{3^k}\right\rfloor=\frac{1}{2}




Show that

\lim_{n\to\infty}\frac1n\sum_{k=1}^{\infty}\left\lfloor\dfrac{n}{3^k}\right\rfloor=\frac{1}{2}




I can do the upper bound:
\sum_{k=1}^{\infty}\left\lfloor\dfrac{n}{3^k}\right\rfloor\le \sum_{k=1}^{\infty}\dfrac{n}{3^k}=\dfrac{n}{2}
But how do I get a matching lower bound?


Answer



0\leq \frac{n}{3^k}-\left\lfloor\frac{n}{3^k}\right\rfloor\leq 1
and the number of non-zero terms of the sum is bounded by 1+\log_3(n), hence:




\begin{eqnarray*} \sum_{k=1}^{+\infty}\left\lfloor\frac{n}{3^k}\right\rfloor=\sum_{k=1}^{\left\lceil\log_3(n)\right\rceil}\left\lfloor\frac{n}{3^k}\right\rfloor&\geq& -(1+\log_3(n))+\sum_{k=1}^{\left\lceil\log_3(n)\right\rceil}\frac{n}{3^k}\\&\geq&\frac{n}{2}-(1+\log_3(n))-\sum_{k>\left\lceil \log_3(n)\right\rceil}\frac{n}{3^k}\\&\geq&\frac{n}{2}-2\log(n)\end{eqnarray*}
for any n big enough.
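The squeeze can be watched numerically (a sketch; the sum is effectively finite since the terms vanish once 3^k > n):

```python
def ratio(n):
    """(1/n) * sum over k >= 1 of floor(n / 3^k)."""
    total, p = 0, 3
    while p <= n:
        total += n // p
        p *= 3
    return total / n

for n in (10**3, 10**6, 10**9):
    print(n, ratio(n))  # tends to 1/2 as n grows
```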


elementary number theory - Find the last two digits of 2^{2156789}

Find the last two digits of 2^{2156789}.




My try:



As 2156789 = 4 \times 539197 + 1,
the unit digit of 2^{2156789} is the same as the unit digit of 2^{4n+1}, which is equal to 2.
But I'm unable to find the tens digit. Please help me.
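Not the pencil-and-paper approach the question asks for, but modular exponentiation gives the target to aim at (a sketch): for n \ge 2 the residues 2^n \bmod 100 repeat with period 20, and 2156789 \equiv 9 \pmod{20}.

```python
# Last two digits of 2^2156789 via three-argument pow (modular exponentiation).
print(pow(2, 2156789, 100))  # 12
print(pow(2, 9, 100))        # 12, since 2^9 = 512 and the cycle has period 20
```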

Sunday, July 24, 2016

algebra precalculus - Arithmetic Progression. Find 1st term and common difference

The seventh term of an A.P. is 15 and the sum of the first seven terms is 42. Find the first term and common difference.



How do I find the first term and common difference with only 1 term given?

calculus - How can I solve this question with (

How can I simplify the top equation in the picture to the lower equation in the picture when the condition t≪v_t/g applies? I don't seem to understand the logic behind it :( Thank you for your help!



OK guys, I just noticed I sent the wrong picture, sorry. Here we go again:
photo

complex analysis - Evaluate P.V. \int^{\infty}_{0} \frac{x^\alpha }{x(x+1)}\,dx where 0 < \alpha < 1


Evaluate
P.V. \int^{\infty}_{0} \frac{x^\alpha }{x(x+1)} dx where 0 < \alpha <1



Thm. Let P and Q be polynomials of degree m and n, respectively, where n \geq m+2. Suppose Q(x)\neq 0 for x>0, Q has a zero of order at most 1 at the origin, and f(z)= \frac{z^\alpha P(z)}{Q(z)}, where 0 < \alpha <1. Then P.V. \int^{\infty}_0 \frac{x^ \alpha P(x)}{Q(x)} dx= \frac{2 \pi i}{1- e^{i \alpha 2 \pi }} \sum^{k}_{j=1} Res [f,z_j], where z_1,z_2 ,\dots , z_k are the nonzero poles of \frac{P}{Q}.



Attempt


Got that P(x)=1, whose degree is m=0, and Q(x)=x(x+1), whose degree is n=2; so the hypothesis n \geq m+2 does hold, since 2 \geq 0+2.


Answer



We assume 0<\alpha<1. We have




P.V. \int^{\infty}_{0} \frac{x^\alpha }{x(x+1)} dx =\frac{\pi}{\sin(\alpha \pi)}.



Hint. One may prove that \begin{align} & \int^{\infty}_{0} \frac{x^\alpha }{x(x+1)} dx \\\\&=\int^1_{0} \frac{x^\alpha }{x(x+1)} dx+\int^{\infty}_{1} \frac{x^\alpha }{x(x+1)} dx \\\\&=\int^1_{0} \frac{x^{\alpha-1} }{x+1} dx+\int^0_{1} \frac{x^{\alpha-1} }{1+\frac1x}\cdot \left(- \frac{dx}{x^2}\right) \\\\&=\int^1_{0} \frac{x^{\alpha-1} }{1+x} dx+\int^1_{0} \frac{x^{-\alpha} }{1+x}dx \\\\&=\int^1_{0} \frac{x^{\alpha-1} (1-x)}{1-x^2} dx+\int^1_{0} \frac{x^{-\alpha}(1-x)}{1-x^2}dx \\\\&=\frac12\psi\left(\frac{\alpha+1}2\right)-\frac12\psi\left(\frac{\alpha}2\right)+\frac12\psi\left(1-\frac{\alpha}2\right)-\frac12\psi\left(\frac{1-\alpha}2\right) \\\\&=\frac{\pi}{\sin(\alpha \pi)} \end{align} where we have used the classic integral representation of the digamma function \int^1_{0} \frac{1-t^{a-1}}{1-t} dt=\psi(a)+\gamma, \quad a>0,\tag 1 and the properties \psi(a+1)-\psi(a)=\frac1a,\qquad \psi(a)-\psi(1-a)=-\pi\cot(a\pi).
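A numerical cross-check (a sketch): expanding \frac1{1+x} as a geometric series in the two integrals over [0,1] gives \int_0^\infty \frac{x^{\alpha-1}}{1+x}\,dx=\sum_{k\ge0}(-1)^k\left(\frac1{\alpha+k}+\frac1{1-\alpha+k}\right), which should match \pi/\sin(\alpha\pi):

```python
import math

def series(a, N=100_000):
    # Alternating series; the truncation error is at most the first omitted term.
    return sum((-1) ** k * (1 / (a + k) + 1 / (1 - a + k)) for k in range(N))

a = 0.3
print(abs(series(a) - math.pi / math.sin(math.pi * a)) < 1e-4)  # True
```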


Saturday, July 23, 2016

elementary number theory - Exponent of p in the prime factorization of n!



Exponent of p in the prime factorization of n! is given by \large \sum \limits_{i=1}^{\lfloor\log_p n \rfloor } \left\lfloor \dfrac{n}{p^i}\right\rfloor .
Can this sum be simplified further to some direct expression so that the number of calculations are reduced?


Answer



yes:




\frac{N-\sigma_p(N)}{p-1}
where \sigma_p(N) is the sum of the digits in the base-p expansion of N
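This is easy to check in code; the sketch below (helper names are mine) compares Legendre's sum against the digit-sum formula for a range of small inputs:

```python
def legendre(n, p):
    """Exponent of the prime p in n!, via Legendre's formula."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def digit_sum(n, p):
    """Sum of the digits of n written in base p."""
    s = 0
    while n:
        s += n % p
        n //= p
    return s

# (N - sigma_p(N)) / (p - 1) agrees with Legendre's sum:
for p in (2, 3, 5, 7):
    for N in range(1, 500):
        assert legendre(N, p) == (N - digit_sum(N, p)) // (p - 1)
print("ok")
```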


abstract algebra - Functions over R such that f(xy) = f(x)f(y)







I can think of three functions that satisfy the condition f(xy) = f(x)f(y) for all x, y, namely





  • f(x) = x

  • f(x) = 0

  • f(x) = 1



Are there more?



And is there a good way to prove that such a list is exhaustive (once expanded to include any other examples that I haven't thought of)?

Friday, July 22, 2016

Bracket of Lie algebra-valued differential form



In this wikipedia article:
https://en.wikipedia.org/wiki/Lie_algebra-valued_differential_form
the bracket of Lie algebra-valued forms is defined. At one point it mentions that it is the bilinear product [\cdot , \cdot] on \Omega^* ( \mathfrak{g}) such that,

\begin{equation} [(g_1 \otimes \alpha) , (g_2 \otimes \beta)] = [g_1, g_2] \otimes (\alpha \wedge \beta) \end{equation}
for all g_1, g_2 \in \mathfrak{g} and all \alpha, \beta \in \Omega^*(M).
From this expression it looks like for an odd form \alpha, [\alpha , \alpha] = 0 since the second term is 0. But this is false according to the definition
\begin{equation} [\alpha, \beta](X_1,\dots,X_{p+q}) := \sum_{\sigma \in S_{p+q}} \text{sgn}(\sigma) \left[\alpha(X_{\sigma(1)},\dots,X_{\sigma(p)}),\beta(X_{\sigma(p+1)},\dots,X_{\sigma(p+q)})\right] \end{equation}
What am I missing?


Answer



It's true that the Lie bracket of things of the form g \otimes \alpha with themselves is always zero. (This is really no more than the statement that [g,g]=0, as you note.)




But most Lie algebra-valued forms do not look like this. They're sums of such terms. If g_i is a basis of the Lie algebra, then a Lie algebra valued form actually looks like \sum g_i \otimes \alpha_i. You can see how the brackets of such terms can fail to be zero. (Pick some nice 2-dimensional Lie algebra to play with.)



So things are only interesting when you're working with non-pure tensors.


integration - A Complex approach to sine integral

This integral:




\int_0^{+\infty}\frac{\sin x}{x}\text{d}x=\frac{\pi}{2}



is very famous and has been discussed on this forum in the past, and I have learned some elegant ways to compute it. For example: using the identity
\int_0^{+\infty}e^{-xy}\sin x\text{d}x=\frac{1}{1+y^2} and \int_0^{\infty}\int_0^{\infty}e^{-xy}\sin x\text{d}y\text{d}x and Fubini's theorem. The link is here: Post concern with sine integral



In this post, I want to discuss another way to compute it. Since \int_0^{+\infty}\frac{\sin x}{x}\text{d}x=\frac{1}{2i}\int_{-\infty}^{+\infty}\frac{e^{ix}-1}{x}\text{d}x
this fact inspires me to consider the complex integral: \int_{\Gamma}\frac{e^{iz}-1}{z}\text{d}z
Integral path

where \Gamma is the red path in the figure above, with counter-clockwise orientation. By Cauchy's theorem, we have

\int_{\Gamma}\frac{e^{iz}-1}{z}\text{d}z=0 the above integral can be written as:\int_{-R}^{-\epsilon}\frac{e^{ix}-1}{x}\text{d}x+\int_{\Gamma_{\epsilon}}\frac{e^{iz}-1}{z}\text{d}z+\int_{\epsilon}^{R}\frac{e^{ix}-1}{x}\text{d}x+\int_{\Gamma_{R}}\frac{e^{iz}-1}{z}\text{d}z
Letting R\rightarrow +\infty and \epsilon \rightarrow 0, we have:
\int_{-R}^{-\epsilon}\frac{e^{ix}-1}{x}\text{d}x+\int_{\epsilon}^{R}\frac{e^{ix}-1}{x}\text{d}x \rightarrow \int_{-\infty}^{+\infty}\frac{e^{ix}-1}{x}\text{d}x=2i\int_0^{+\infty}\frac{\sin x}{x}\text{d}x
and \int_{\Gamma_{\epsilon}}\frac{e^{iz}-1}{z}\text{d}z=\int_\pi^0\frac{e^{i\epsilon e^{i\theta}}-1}{\epsilon e^{i\theta}}i\epsilon e^{i\theta}\text{d}\theta=i\int_\pi^0(\cos(\epsilon e^{i\theta})+i\sin(\epsilon e^{i\theta})-1)\text{d}\theta \rightarrow 0 as \epsilon \rightarrow 0

so I am expecting that:\int_{\Gamma_{R}}\frac{e^{iz}-1}{z}\text{d}z=-i\pi when R \rightarrow +\infty
but I can't find it. Could you help me? Thanks very much.
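Independently of the contour argument, the claimed value \int_0^{+\infty}\frac{\sin x}{x}\text{d}x=\frac{\pi}{2} can be sanity-checked numerically (a sketch; the truncation point T is arbitrary and the oscillatory tail contributes O(1/T)):

```python
import math

# Midpoint rule for ∫_0^T sin(x)/x dx with T large.
T, n = 2000.0, 400_000
h = T / n
total = 0.0
for i in range(n):
    x = h * (i + 0.5)          # midpoint of each subinterval of (0, T]
    total += math.sin(x) / x
total *= h
print(total, math.pi / 2)
```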

elementary number theory - Compute largest integer power of 6 that divides 73!



I am looking to compute the largest integer power of 6 that divides 73!



If it was something smaller, like 6! or even 7!, I could just use trial division on powers of 6. However, 73! has 106 decimal digits, and thus trial division isn't optimal.



Is there a smarter way to approach this problem?


Answer




HINT: There are \lfloor73/3\rfloor=24 numbers divisible by 3, \lfloor73/9\rfloor=8, numbers divisible by 9, \lfloor73/27\rfloor=2 numbers divisible by 27 in the set [1,73]\cap\mathbb{N}. It should be easy now to obtain that the answer is 34 (with the value 6^{34}).
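Since 73! fits comfortably in exact integer arithmetic, the hint's conclusion can also be verified directly (a sketch in plain Python):

```python
import math

f = math.factorial(73)
# Exponent of 3 in 73! is 24 + 8 + 2 = 34; the exponent of 2 is larger
# (36 + 18 + 9 + 4 + 2 + 1 = 70), so 34 is the exponent of 6.
assert f % 6**34 == 0
assert f % 6**35 != 0
print("largest power:", 34)
```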


trigonometry - Proving trigonometric identity \cos^6A+\sin^6A=1-3\sin^2 A\cos^2A

Show that





\cos^6A+\sin^6A=1-3 \sin^2 A\cos^2A




Starting from the left hand side (LHS)



\begin{align} \text{LHS} &=(\cos^2A)^3+(\sin^2A)^3 \\ &=(\cos^2A+\sin^2A)(\cos^4A-\cos^2A\sin^2A+\sin^4A)\\ &=\cos^4A-\cos^2A\sin^2A+\sin^4A \end{align}




Can anyone help me to continue from here
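One way to finish (a sketch): use \cos^4A+\sin^4A=(\cos^2A+\sin^2A)^2-2\sin^2A\cos^2A. Then

\begin{align} \cos^4A-\cos^2A\sin^2A+\sin^4A &= (\cos^2A+\sin^2A)^2-2\sin^2A\cos^2A-\sin^2A\cos^2A \\ &= 1-3\sin^2A\cos^2A \end{align}

which is the right-hand side.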

elementary number theory - What is the largest power of 2 that divides 200!/100!.





What is the largest power of 2 that divides 200!/100!.



No use of calculator is allowed.
I had proceeded in a brute force method which I now regret.
I would like to know your methods.



Answer



Find highest power of 2 in 200! and 100!, using Legendre's formula



In 200!, highest power of 2



=\lfloor 200/2 \rfloor +\lfloor 200/4 \rfloor +\lfloor 200/8 \rfloor +\lfloor 200/16 \rfloor +\lfloor 200/32 \rfloor +\lfloor 200/64 \rfloor +\lfloor 200/128 \rfloor



=100+50+25+12+6+3+1=197



In 100!, highest power of 2




=\lfloor 100/2 \rfloor +\lfloor 100/4 \rfloor +\lfloor 100/8 \rfloor +\lfloor 100/16 \rfloor +\lfloor 100/32 \rfloor +\lfloor 100/64 \rfloor



= 50 + 25+12+6+3+1 =97



Now, just subtract the two, and we get 100 as the answer.
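The whole computation is easy to automate and cross-check against the exact integers (a sketch in plain Python; helper names are mine):

```python
import math

def v2_factorial(n):
    """Exponent of 2 in n!, via Legendre's formula."""
    e, q = 0, 2
    while q <= n:
        e += n // q
        q *= 2
    return e

assert v2_factorial(200) == 197
assert v2_factorial(100) == 97
answer = v2_factorial(200) - v2_factorial(100)

# Cross-check against the exact value of 200!/100!:
r = math.factorial(200) // math.factorial(100)
assert r % 2**answer == 0 and r % 2**(answer + 1) != 0
print(answer)  # 100
```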


Thursday, July 21, 2016

calculus - Evaluating the limit \lim_{n\to\infty} \left(\frac{1}{n^2}+\frac{2}{n^2}+\cdots+\frac{n-1}{n^2}\right)




Evaluate the limit \lim_{n\to\infty} \left(\frac{1}{n^2}+\frac{2}{n^2}+...+\frac{n-1}{n^2}\right) .





My work:



I started by computing the first 8 terms of the sequence x_n (0, 0.25, 0.222, 0.1875, 0.16, 0.139, 0.122, 0.1094). From this, I determined that the sequence x_n monotonically decreases to zero as n approaches infinity, which satisfies my first test for series convergence: if \sum_{n=2}^\infty x_n converges, then \lim_{n\to\infty}x_n=0.



Next, I rearranged the equation in an attempt to perform a comparison test. I re-wrote the equation as \sum_{n=2}^\infty (\frac1{n}-\frac1{n^2}). This was to no avail, as the only series I could think to compare it to was \frac1n, which is always greater than the original series and is divergent, which does not prove convergence to a limit.



The ratio test concluded with \lim_{n\to\infty} \frac{x_{n+1}}{x_n} being equal to 1, which is also inconclusive (I can show my work if need be, but that would be a little tedious). I never ran the root test, but I doubt that this would be any help in this case.




I see no other way to compute the limit, so any help would be appreciated!!


Answer



Hint: Use the formula
\sum_{k=1}^n k=\frac{n(n+1)}2.



So,
\lim_{n\to\infty}\frac 1{n^2}+\frac 2{n^2}+\cdots+\frac{n-1}{n^2}=\lim_{n\to\infty}\frac{n(n-1)}{2n^2}.



Can you get it from here?
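Following the hint, a quick numerical check (plain Python; the sample sizes are arbitrary) that the sums behave like (n-1)/(2n) and approach 1/2:

```python
def s(n):
    # (1 + 2 + ... + (n-1)) / n²
    return sum(range(1, n)) / n**2

for n in (10, 100, 1000):
    print(n, s(n))            # 0.45, 0.495, 0.4995: tending to 0.5
assert abs(s(10**6) - 0.5) < 1e-5
```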


trigonometry - How is it solved: \sin(x) + \sin(3x) + \sin(5x) + \dotsb + \sin((2n-1)x) =

The problem is to find the summary of this statement:
\sin(x) + \sin(3x) + \sin(5x) + \dotsb + \sin(2n - 1)x =




I've tried to rewrite all the sines as complex exponentials, but it was in vain. I suppose there is a more sophisticated method to do this. I think it may be solved somehow using complex numbers or progressions.



How is it solved, using complex numbers or even without them if possible?



Thanks a lot.
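For what it's worth, summing the geometric series \sum_{k=1}^n e^{i(2k-1)x} and taking imaginary parts suggests the closed form \frac{\sin^2(nx)}{\sin x} (valid when \sin x \neq 0). A quick numerical check of that candidate (a sketch, not a proof):

```python
import math

def lhs(n, x):
    # the sum sin(x) + sin(3x) + ... + sin((2n-1)x)
    return sum(math.sin((2 * k - 1) * x) for k in range(1, n + 1))

def rhs(n, x):
    # candidate closed form sin²(nx)/sin(x)
    return math.sin(n * x) ** 2 / math.sin(x)

for n in (1, 2, 5, 13):
    for x in (0.3, 1.0, 2.5):
        assert abs(lhs(n, x) - rhs(n, x)) < 1e-12
print("ok")
```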

real analysis - Find a one-to-one correspondence between [0,1] and (0,1).








Establish a one-to-one correspondence between the closed interval [0,1] and the open interval (0,1).



this is a problem in real analysis.



Thanks
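One standard construction (a sketch, one of many): fix every point except a chosen countable sequence, and shift that sequence to absorb the two endpoints, e.g. 0\mapsto\frac12, 1\mapsto\frac13, and \frac1n\mapsto\frac1{n+2} for n\ge2. A small Python model of the map on exact rationals:

```python
from fractions import Fraction

def f(x):
    """Bijection [0,1] -> (0,1): shift the sequence 0, 1, 1/2, 1/3, ..."""
    if x == 0:
        return Fraction(1, 2)
    if x == 1:
        return Fraction(1, 3)
    if x.numerator == 1:          # x = 1/n with n >= 2
        return Fraction(1, x.denominator + 2)
    return x                      # everything else is fixed

pts = [Fraction(0), Fraction(1), Fraction(1, 2), Fraction(1, 3),
       Fraction(1, 4), Fraction(2, 5), Fraction(3, 7)]
images = [f(p) for p in pts]
assert len(set(images)) == len(images)     # injective on this sample
assert all(0 < y < 1 for y in images)      # values land inside (0, 1)
print(images)
```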

sequences and series - How to prove \sum\limits_{k=0}^{N} \frac{(-1)^k {N \choose k}}{(k+1)^2} = \frac{1}{N+1} \sum\limits_{n=1}^{N+1} \frac{1}{n}

In the process of proving a more complicated relation, I've come across the following equality that I'm having trouble proving:



\sum\limits_{k=0}^{N} \frac{(-1)^k {N \choose k}}{(k+1)^2} = \frac{1}{N+1} \sum\limits_{n=1}^{N+1} \frac{1}{n}



I was already able to prove the following similar equality:



\sum\limits_{k=0}^N \frac{(-1)^k {N \choose k}}{k+1} = \frac{1}{N + 1}



but I'm unsure how to proceed with the first one. I assume it has something to do with the fact that every term in the left hand side of the first equality is \frac{1}{k+1} times a term in the left hand side of the second equality. Any help would be greatly appreciated.
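Before hunting for a proof, the identity itself is cheap to confirm in exact arithmetic (a sketch using Python's fractions):

```python
from fractions import Fraction
from math import comb

# Check the identity exactly for N = 0..29.
for N in range(0, 30):
    left = sum(Fraction((-1) ** k * comb(N, k), (k + 1) ** 2)
               for k in range(N + 1))
    right = Fraction(1, N + 1) * sum(Fraction(1, n) for n in range(1, N + 2))
    assert left == right
print("identity holds for N = 0..29")
```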

Modular arithmetic and linear congruences



Assuming a linear congruence:



ax\equiv b \pmod m



It's safe to say that one solution would be:



x\equiv ba^{-1} \pmod m




Now, the first condition i memorized for a number a to have an inverse mod(m) was:



\gcd(a,m) = 1



Which stems from the fact (and correct me here) that a linear congruence has solution if that gcd divides b. Since on an inverse calculation we have:



ax\equiv 1 \pmod m



The only number that divides one is one itself, so it makes sense.




Now comes the part that annoys me most:



"If the condition that tells us that there is an inverse mod (m) for a says that \gcd(a,m)=1, then how come linear congruences where the \gcd(a,m) \ne 1 have solution? Why do we say that a linear congruence where \gcd(a,m) = d > 1 has d solutions? If you can't invert a, then you can't do this:"



ax\equiv b \pmod m \implies x\equiv ba^{-1} \pmod m



Please help on this one. It's kinda tormenting me :(


Answer



The long and the short of it is: ax \equiv b \pmod m has solutions iff \gcd(a,m) divides b.




As you said, if \gcd(a,m) = 1, then we can multiply by the inverse of a to get our (unique!) solution.



But if \gcd(a,m) = d > 1, we still have a chance at finding solutions, even though there is no inverse of a mod m.






Assume d \mid b.



Then there are integers a', m', b' such that a = da', b = db', and m = dm'.
ax \equiv b \pmod m \iff a'x \equiv b' \pmod{m'}




But since d was the GCD of a and m, we know \gcd(a', m') = 1, and we can construct a solution mod m'. For notation's sake, let c be an integer such that ca' \equiv 1 \pmod {m'}.
a'x \equiv b' \pmod{m'} \iff x \equiv b' c \pmod {m'}



Now we can "lift" our solution mod m' to many solutions mod m:
x \equiv b' c \pmod {m'} \iff \exists k \quad x \equiv b'c + km' \pmod m






Say there is a solution to the congruence. Then ax \equiv b \pmod m implies that for some k, ax + km = b. But if d divides a and m, it must also divide b.







So it is a necessary and sufficient condition.
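The discussion above translates directly into code; the sketch below (helper names are mine) reduces by d = \gcd(a,m), inverts a' modulo m', and lifts the d solutions:

```python
from math import gcd

def solve_congruence(a, b, m):
    """All x in [0, m) with a*x ≡ b (mod m); empty list if gcd(a, m) ∤ b."""
    d = gcd(a, m)
    if b % d != 0:
        return []
    a1, b1, m1 = a // d, b // d, m // d
    x0 = (b1 * pow(a1, -1, m1)) % m1    # gcd(a1, m1) = 1, so a1 is invertible
    return [x0 + k * m1 for k in range(d)]

assert solve_congruence(6, 4, 10) == [4, 9]   # d = 2 gives two solutions
assert solve_congruence(6, 3, 10) == []       # 2 does not divide 3
assert all((6 * x) % 10 == 4 for x in solve_congruence(6, 4, 10))
print("ok")
```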


derivatives - Why is this incorrect (regarding differentiating the natural log)?



We must differentiate the following:



[f(x) = \ln (3x^2 +3)]\space '



Why is this incorrect? I am just using the product rule:



[f(x) = \ln (3x^2 +3)]\space ' = \dfrac{1}{x} \times (3x^2 + 3) + \ln(6x) = \dfrac{3x^2 +3}{x} + \ln(6x)




My book gives the following answer:



\dfrac{6x}{3x^2 +3}


Answer



There is no product here; you should be using the chain rule.



The start of your answer makes it look like you were differentiating \log(x) \cdot (3x^2 + 3) instead of the given function, but the latter part of your attempt clarifies that you are just getting tangled up.



(Also, it's a bit strange that your book didn't reduce its final answer, but it's still correct.)




More precisely:



[f(g(x))]' = f'(g(x)) \cdot g'(x).



In this case,
[\log(3x^2 + 3)]' = \frac{1}{3x^2 +3} \cdot (3x^2 + 3)' = \frac{6x}{3x^2 + 3} as your book suggests. Of course, we could divide top and bottom by 3 to simplify our answer to:



\frac{2x}{x^2 + 1}




Going back to the original function, note that \log(3x^2 + 3) = \log(3) + \log(x^2 + 1). If you now differentiate the function in this form, the derivative of the constant term \log(3) will be 0, and you will end up with the same answer as above (already in simplified form).
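A finite-difference check of the simplified derivative (a sketch; the step size and sample points are arbitrary):

```python
import math

def f(x):
    return math.log(3 * x**2 + 3)

def fprime(x):
    return 2 * x / (x**2 + 1)      # the simplified chain-rule answer

h = 1e-6
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    assert abs(numeric - fprime(x)) < 1e-6
print("ok")
```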


divisibility - Prove by induction that \forall n \in\mathbb N,\ 3\mid 4^{n}-1.



Prove by induction that \forall n \in\mathbb N, 3\mid4^{n}-1.




1) For n = 1 the statement is obviously true.



2) Now what about n+1? I was thinking of writing 4^{n}-1 as 2^{2n}-1 and then 4^{n+1}-1 = 2^{2n+2}-1 but that got me nowhere.


Answer



Hint: \;4^{n+1}-1=4 \cdot 4^n-1=(3+1)\cdot 4^n-1=3\cdot 4^n + 4^n-1\,.



The non-induction proofs are more direct, though:




  • \;4^n-1 =(4-1)(4^{n-1}+4^{n-2}+\ldots+1)



  • \;4^n-1=(3+1)^n - 1 = \ldots



Wednesday, July 20, 2016

integration - Compute an integral involving the error function: \int_{-\infty}^{\infty} \frac{e^{-k^2}}{1-k}\, \mathrm{d}k



There is an integral
\mathcal{P}\int_{-\infty}^{\infty} \frac{e^{-k^2}}{1-k} \mathrm{d}k
where \mathcal{P} means Cauchy principal value.



Mathematica gives the result (as the screent shot shows)
\mathcal{P}\int_{-\infty}^{\infty} \frac{e^{-k^2}}{1-k} \mathrm{d}k = \frac{\pi}{e}\mathrm{erfi}(1) = \frac{\pi}{e}\cdot \frac{2}{\sqrt{\pi}} \int_0^1 e^{u^2}\mathrm{d}u
Mathematica screen shot




where \mathrm{erfi}(x) is imaginary error function define as
\mathrm{erfi}(z) = -\mathrm{i}\cdot\mathrm{erf}(\mathrm{i}z)
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^{x} e^{-t^2}\mathrm{d}t
How can we get the right hand side from left hand side?


Answer




For a \in \mathbb{R} define
\begin{align} f(a) &\equiv \mathrm{e}^{a^2} \mathcal{P} \int \limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-k^2}}{a-k} \, \mathrm{d} k = \mathrm{e}^{a^2} \mathcal{P} \int \limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-(x-a)^2}}{x} \, \mathrm{d} x= \lim_{\varepsilon \to 0^+} \left[\int \limits_{-\infty}^{-\varepsilon} \frac{\mathrm{e}^{-x^2 + 2 a x}}{x} \, \mathrm{d} x + \int \limits_\varepsilon^\infty \frac{\mathrm{e}^{-x^2 + 2 a x}}{x} \, \mathrm{d} x\right] \\ &= \lim_{\varepsilon \to 0^+} \int \limits_\varepsilon^\infty \frac{\mathrm{e}^{-x^2 + 2 a x} - \mathrm{e}^{-x^2 - 2 a x}}{x} \, \mathrm{d} x = \int \limits_0^\infty \frac{\mathrm{e}^{-x^2 + 2 a x} - \mathrm{e}^{-x^2 - 2 a x}}{x} \, \mathrm{d} x \, . \end{align}
In the last step we have used that the integrand is in fact an analytic function (with value 4a at the origin). The usual arguments show that f is analytic as well and we can differentiate under the integral sign to obtain
f'(a) = 2 \int \limits_0^\infty \left[\mathrm{e}^{-x^2 + 2 a x} + \mathrm{e}^{-x^2 - 2 a x}\right]\, \mathrm{d} x = 2 \int \limits_{-\infty}^\infty \mathrm{e}^{-x^2 + 2 a x}\, \mathrm{d} x = 2 \sqrt{\pi} \, \mathrm{e}^{a^2} \, , \, a \in \mathbb{R} \, .
Since f(0) = 0,
f(a) = 2 \sqrt{\pi} \int \limits_0^a \mathrm{e}^{t^2} \, \mathrm{d} t = \pi \operatorname{erfi}(a)
follows for a \in \mathbb{R}. This implies

\mathcal{P} \int \limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-k^2}}{a-k} \, \mathrm{d} k = \pi \mathrm{e}^{-a^2} \operatorname{erfi}(a) \, , \, a \in \mathbb{R} \, .
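A numerical sanity check of this at a = 1 (a sketch; truncation points are arbitrary). Symmetrizing around the pole with k = 1 \mp u gives \mathcal{P}\int_{-\infty}^{\infty} \frac{e^{-k^2}}{1-k}\,\mathrm{d}k = \int_0^\infty \frac{e^{-(1-u)^2} - e^{-(1+u)^2}}{u}\,\mathrm{d}u, whose integrand extends continuously to u = 0:

```python
import math

def g(u):
    # symmetrized principal-value integrand; continuous at u = 0
    return (math.exp(-(1 - u) ** 2) - math.exp(-(1 + u) ** 2)) / u

# Left side: midpoint rule on (0, 10]; the tail beyond 10 is negligible.
T, n = 10.0, 100_000
h = T / n
pv = h * sum(g(h * (i + 0.5)) for i in range(n))

# Right side: pi * e^{-1} * erfi(1), with erfi(1) = (2/sqrt(pi)) ∫_0^1 e^{t²} dt.
m = 100_000
hm = 1.0 / m
erfi1 = (2 / math.sqrt(math.pi)) * hm * sum(math.exp((hm * (j + 0.5)) ** 2)
                                            for j in range(m))
rhs = math.pi * math.exp(-1.0) * erfi1
print(pv, rhs)
```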


linear algebra - Help with the determinant of a binary matrix




I was messing around with some matrices and found the following result.





Let A_n be the (2n) \times (2n) matrix consisting of elements a_{ij} = \begin{cases} 1 & \text{if } (i,j) \leq (n,n) \text{ and } i \neq j \\ 1 & \text{if } (i,j) > (n,n) \text{ and } i \neq j \\ 0 & \text{otherwise}. \end{cases}
Then, the determinant of A_n is given by \text{det}(A_n) = (n-1)^2.




Example: A_2 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}, A_3 = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ \end{pmatrix}, with det(A_2) and det(A_3) being 1 and 4 respectively. I was wondering if anybody could prove this statement for me.


Answer



Your matrix A_n has the block diagonal structure



A_n = \begin{pmatrix} B_n & 0 \\ 0 & B_n \end{pmatrix}




where B_n \in M_n(\mathbb{F}) is the matrix which has diagonal entries zero and all other entries 1. Hence, \det(A_n) = \det(B_n)^2 so it is enough to calculate \det(B_n). To do that, let C_n be the matrix in which all the entries are 1 (so B_n = C_n - I_n).



The matrix C_n is a rank-one matrix so we can find its eigenvalues easily. Let us assume for simplicity that n \neq 0 in \mathbb{F}. Then C_n has an n - 1 dimensional kernel and (1,\dots,1)^T is an eigenvector of C_n associated to the eigenvalue n. From here we see that the characteristic polynomial of C_n must be \det(\lambda I - C_n) = \lambda^{n-1}(\lambda - n) and hence
\det(B_n) = \det(C_n - I_n) = (-1)^n \det(I_n - C_n) = (-1)^{n} 1^{n-1}(1 - n) = (-1)^n(1 - n) = (-1)^{n-1}(n-1).



In fact this formula works even if n = 0 in \mathbb{F}, because in this case C_n^2 = n C_n = 0, so C_n is nilpotent and \det(\lambda I - C_n) = \lambda^n.
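Both formulas are easy to confirm for small n over the rationals (a sketch; the elimination routine is hand-rolled to avoid external dependencies):

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    size, sign, d = len(M), 1, Fraction(1)
    for i in range(size):
        piv = next((r for r in range(i, size) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]   # row swap flips the sign
            sign = -sign
        d *= M[i][i]
        for r in range(i + 1, size):
            factor = M[r][i] / M[i][i]
            for c in range(i, size):
                M[r][c] -= factor * M[i][c]
    return sign * d

def B(n):   # n x n: zero diagonal, ones elsewhere
    return [[0 if i == j else 1 for j in range(n)] for i in range(n)]

def A(n):   # (2n) x (2n): block diagonal with two copies of B(n)
    top = [row + [0] * n for row in B(n)]
    bot = [[0] * n + row for row in B(n)]
    return top + bot

for n in range(2, 7):
    assert det(B(n)) == (-1) ** (n - 1) * (n - 1)
    assert det(A(n)) == (n - 1) ** 2
print("ok")
```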


trigonometry - How is A\sin\theta + B\cos\theta = C\sin(\theta + \phi) derived?




I have come across this trig identity and I want to understand how it was derived. I have never seen it before, nor have I seen it in any of the online resources including the many trig identity cheat sheets that can be found on the internet.



A\cdot\sin(\theta) + B\cdot\cos(\theta) = C\cdot\sin(\theta + \Phi)



Where C = \pm \sqrt{A^2+B^2}



\Phi = \arctan(\frac{B}{A})



I can see that Pythagorean theorem is somehow used here because of the C equivalency, but I do not understand how the equation was derived.




I tried applying the sum of two angles identity of sine i.e. \sin(a \pm b) = \sin(a)\cdot\cos(b) + \cos(a)\cdot\sin(b)



But I am unsure what the next step is, in order to properly understand this identity.



Where does it come from? Is it a normal identity that mathematicians should have memorized?


Answer



The trigonometric angle identity \sin(\alpha + \beta) = \sin\alpha \cos\beta + \cos\alpha \sin\beta is exactly what you need. Note that A^2 + B^2 = C^2, or (A/C)^2 + (B/C)^2 = 1. Thus there exists an angle \phi such that \cos\phi = A/C and \sin\phi = B/C. Then we immediately obtain from the aforementioned identity \sin(\theta+\phi) = \frac{A}{C}\sin \theta + \frac{B}{C}\cos\theta, with the choice \alpha = \theta; after which multiplying both sides by C gives the claimed result.



Note that \phi = \tan^{-1} \frac{B}{A}, not \tan^{-1} \frac{B}{C}.
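A quick numerical check (a sketch with arbitrary test values; \operatorname{atan2} is used so that \phi lands in the correct quadrant, consistent with \cos\phi = A/C and \sin\phi = B/C):

```python
import math

A, B = 3.0, -4.0
C = math.hypot(A, B)        # the positive root of A² + B²
phi = math.atan2(B, A)      # quadrant-aware version of arctan(B/A)

for theta in (-2.0, -0.5, 0.0, 1.2, 3.0):
    left = A * math.sin(theta) + B * math.cos(theta)
    right = C * math.sin(theta + phi)
    assert abs(left - right) < 1e-12
print(C)  # 5.0
```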


Tuesday, July 19, 2016

calculus - How to evaluate \int\sin^3 x\cos^3 x\, dx without a reduction formula?




We have the integral \displaystyle\int \sin ^3 x \cos^3 x \:dx. You can do this using the reduction formula, but I wonder if there's another (perhaps simpler) way to do this, like for example with a substitution?


Answer



Hint. You may write
\sin^3 x \cos^3 x= \sin x(1 - \cos^2x)\cos^3 x=\sin x(\cos^3x - \cos^5x)
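Following the hint with the substitution u = \cos x, a candidate antiderivative is \frac{\cos^6x}{6} - \frac{\cos^4x}{4} + C; a finite-difference check of that candidate (a sketch):

```python
import math

def F(x):
    # From u = cos x: ∫ sin x (cos³x - cos⁵x) dx = cos⁶x/6 - cos⁴x/4 + C
    return math.cos(x) ** 6 / 6 - math.cos(x) ** 4 / 4

h = 1e-6
for x in (0.3, 1.0, 2.2, 4.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(numeric - math.sin(x) ** 3 * math.cos(x) ** 3) < 1e-8
print("ok")
```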


real analysis - Open maps which are not continuous

What is an example of an open map (0,1) \to \mathbb{R} which is not continuous? Is it even possible for one to exist? What about in higher dimensions? The simplest example I've been able to think of is the map e^{1/z} from \mathbb{C} to \mathbb{C} (filled in to be 0 at 0). There must be a simpler example, using the usual Euclidean topology, right?

algebra precalculus - How do I get from log F = log G + log m - log(1/M) - 2 log r to a solution without logs?



I've been self-studying from Stroud & Booth's excellent "Engineering Mathematics", and am currently on the "Algebra" section. I understand everything pretty well, except when it comes to the problems then I am asked to express an equation that uses logs, but without logs, as in:



\log{F} = \log{G} + \log{m} - \log\frac{1}{M} - 2\log{r}




They don't cover the mechanics of doing things like these very well, and only have an example or two, which I "kinda-sorta" barely understood.



Can anyone point me in the right direction with this and explain how these are solved?


Answer



Using some rules of logarithms you get \quad-\log\dfrac{1}{M}=+\log M and -2\log r=-\log r^2=+\log \dfrac{1}{r^2}



So you have



\begin{eqnarray} \log{F} &=& \log{G} + \log{m} + \log M + \log{\dfrac{1}{r^2}}\\ \log{F} &=& \log{\left(GmM\cdot\dfrac{1}{r^2}\right)}\\ \log{F} &=& \log{\frac{mMG}{r^2}}\\ F &=&\frac{mMG}{r^2} \end{eqnarray}



The last step hinges upon the fact that logarithm functions are one-to-one functions. If a function f is one-to-one, then f(a)=f(b) if and only if a=b. Since \log is a one-to-one function, it follows that \log A=\log B if and only if A=B.



ADDENDUM: Here are a few rules of logarithms which you may need to review





  1. \log(AB)=\log A+\log B

  2. \log\left(\dfrac{A}{B}\right)=\log A-\log B

  3. \log\left(A^n\right)=n\log A

  4. \log(1)=0



Notice that from (2) and (4) you get that \log\left(\dfrac{1}{B}\right)=\log 1-\log B=-\log B
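A quick numerical check that the log form and the solved form agree (the constants below are arbitrary positive test values, not physical data):

```python
import math

G, m, M, r = 6.674e-11, 5.0, 3.0e3, 7.0
F = G * m * M / r**2                     # the solved form
left = math.log(F)
right = math.log(G) + math.log(m) - math.log(1 / M) - 2 * math.log(r)
assert abs(left - right) < 1e-9          # both sides of the log equation match
print(F)
```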


limits - Prove that \lim\limits_{n \to \infty} \frac{x^n}{n!} = 0,\ x \in \Bbb R.

Why is



\lim_{n \to \infty} \frac{2^n}{n!}=0\text{ ?}




Can we generalize it to any exponent x \in \Bbb R? This is to say, is



\lim_{n \to \infty} \frac{x^n}{n!}=0\text{ ?}




This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.


and here: List of abstract duplicates.

real analysis - How discontinuous can a derivative be?



There is a well-known result in elementary analysis due to Darboux which says that if f is a differentiable function then f' satisfies the intermediate value property. To my knowledge, not many "highly" discontinuous Darboux functions are known--the only one I am aware of being the Conway base 13 function--and few (none?) of these are derivatives of differentiable functions. In fact they generally cannot be, since an application of Baire's theorem gives that the set of continuity points of the derivative is a dense G_\delta set.




Is it known how sharp that last result is? Are there known Darboux functions which are derivatives and are discontinuous on "large" sets in some appropriate sense?


Answer



What follows is taken (mostly) from more extensive discussions in the following sci.math posts:



http://groups.google.com/group/sci.math/msg/814be41b1ea8c024 [23 January 2000]



http://groups.google.com/group/sci.math/msg/3ea26975d010711f [6 November 2006]



http://groups.google.com/group/sci.math/msg/05dbc0ee4c69898e [20 December 2006]




Note: The term interval is restricted to nondegenerate intervals (i.e. intervals containing more than one point).



The continuity set of a derivative on an open interval J is dense in J. In fact, the continuity set has cardinality c in every subinterval of J. On the other hand, the discontinuity set D of a derivative can have the following properties:




  1. D can be dense in \mathbb R.


  2. D can have cardinality c in every interval.


  3. D can have positive measure. (Hence, the function can fail to be Riemann integrable.)


  4. D can have positive measure in every interval.



  5. D can have full measure in every interval (i.e. measure zero complement).


  6. D can have a Hausdorff dimension zero complement.


  7. D can have an h-Hausdorff measure zero complement for any specified Hausdorff measure function h.




More precisely, a subset D of \mathbb R can be the discontinuity set for some derivative if and only if D is an F_{\sigma} first category (i.e. an F_{\sigma} meager) subset of \mathbb R.



This characterization of the discontinuity set of a derivative can be found in the following references: Benedetto [1] (Chapter 1.3.2, Proposition, 1.10, p. 30); Bruckner [2] (Chapter 3, Section 2, Theorem 2.1, p. 34); Bruckner/Leonard [3] (Theorem at bottom of p. 27); Goffman [5] (Chapter 9, Exercise 2.3, p. 120 states the result); Klippert/Williams [7].



Regarding this characterization of the discontinuity set of a derivative, Bruckner and Leonard [3] (bottom of p. 27) wrote the following in 1966: Although we imagine that this theorem is known, we have been unable to find a reference. I have found the result stated in Goffman's 1953 text [5], but nowhere else prior to 1966 (including Goffman's Ph.D. Dissertation).




Interestingly, in a certain sense most derivatives have the property that D is large in all of the ways listed above (#1 through #7).



In 1977 Cliff Weil [8] published a proof that, in the space of derivatives with the sup norm, all but a first category set of such functions are discontinuous almost everywhere (in the sense of Lebesgue measure). When Weil's result is paired with the fact that derivatives (being Baire 1 functions) are continuous almost everywhere in the sense of Baire category, we get the following:



(A) Every derivative is continuous at the Baire-typical point.



(B) The Baire-typical derivative is not continuous at the Lebesgue-typical point.



Note that Weil's result is stronger than simply saying that the Baire-typical derivative fails to be Riemann integrable (i.e. D has positive Lebesgue measure), or even stronger than saying that the Baire-typical derivative fails to be Riemann integrable on every interval. Note also that, for each of these Baire-typical derivatives, \{D, \; {\mathbb R} - D\} gives a partition of \mathbb R into a first category set and a Lebesgue measure zero set.




In 1984 Bruckner/Petruska [4] (Theorem 2.4) strengthened Weil's result by proving the following: Given any finite Borel measure \mu, the Baire-typical derivative is such that the set D is the complement of a set that has \mu-measure zero.



In 1993 Kirchheim [6] strengthened Weil's result by proving the following: Given any Hausdorff measure function h, the Baire-typical derivative is such that the set D is the complement of a set that has Hausdorff h-measure zero.



[1] John J. Benedetto, Real Variable and Integration With Historical Notes, Mathematische Leitfäden. Stuttgart: B. G. Teubne, 1976, 278 pages. [MR 58 #28328; Zbl 336.26001]



[2] Andrew M. Bruckner, Differentiation of Real Functions, 2nd edition, CRM Monograph Series #5, American Mathematical Society, 1994, xii + 195 pages. [The first edition was published in 1978 as Springer-Verlag's Lecture Notes in Mathematics #659. The second edition is essentially unchanged from the first edition with the exception of a new chapter on recent developments (23 pages) and 94 additional bibliographic items.] [MR 94m:26001; Zbl 796.26001]



[3] Andrew M. Bruckner and John L. Leonard, Derivatives, American Mathematical Monthly 73 #4 (April 1966) [Part II: Papers in Analysis, Herbert Ellsworth Slaught Memorial Papers #11], 24-56. [MR 33 #5797; Zbl 138.27805]




[4] Andrew M. Bruckner and György Petruska, Some typical results on bounded Baire 1 functions, Acta Mathematica Hungarica 43 (1984), 325-333. [MR 85h:26004; Zbl 542.26004]



[5] Casper Goffman, Real Functions, Prindle, Weber & Schmidt, 1953/1967, x + 261 pages. [MR 14,855e; Zbl 53.22502]



[6] Bernd Kirchheim, Some further typical results on bounded Baire one functions, Acta Mathematica Hungarica 62 (1993), 119-129. [94k:26008; Zbl 786.26002]



[7] John Clayton Klippert and Geoffrey Williams, On the existence of a derivative continuous on a G_{\delta}, International Journal of Mathematical Education in Science and Technology 35 (2004), 91-99.



[8] Clifford Weil, The space of bounded derivatives, Real Analysis Exchange 3 (1977-78), 38-41. [Zbl 377.26005]



calculus - A Proof for a Sequence Convergence

Here's my question to prove:




Define a_n to be a sequence such that:




a_1=\frac{3}{2}
a_{n+1}=3-\frac{2}{a_n}





Prove that a_n is convergent and calculate its limit.




Solution




Prove by induction that a_n is monotonic increasing:






  • For n=2, a_2=3-\frac{4}{3}=\frac{5}{3}>\frac{3}{2}=a_1

  • Assume that a_n>a_{n-1}

  • For n=k+1: 3-\frac{2}{a_n} - (3-\frac{2}{a_{n-1}})=-\frac{2}{a_n}+\frac{2}{a_{n-1}}>0, since a_{n-1}<a_n implies \frac{2}{a_{n-1}}>\frac{2}{a_n}


  • Therefore, the sequence is monotonic increasing.





Prove (again by induction) that 2 is an upper bound of the sequence. Therefore, it is monotonic increasing and bounded, thus convergent.





Now I think that \lim\limits_{n\to\infty} a_n=2, and I want to prove it with the squeeze theorem.



Is my solution, correct?



Is there a way to find supremum here?



Thanks,



Alan
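For intuition (not a proof), iterating the recurrence in exact arithmetic shows the terms increasing toward 2 from below:

```python
from fractions import Fraction

a = Fraction(3, 2)
terms = [a]
for _ in range(60):
    a = 3 - Fraction(2) / a
    terms.append(a)

assert all(t < 2 for t in terms)                       # 2 is an upper bound
assert all(s < t for s, t in zip(terms, terms[1:]))    # strictly increasing
print(float(terms[-1]))  # very close to 2
```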

Monday, July 18, 2016

calculus - How can it be proven that the Gaussian function has no "trivial" primitives?







By trivial primitives I mean expressions involving logarithms, polynomials, and trigonometrical or exponential functions.

summation - Why does he need to first prove the sum of the n integers and then proceed to prove the concept of arithmetical progression?



I'm reading What is Mathematics, on page 12 (arithmetic progression), he gives one example of mathematical induction while trying to prove the concept of arithmetic progression. There's something weird here: he starts by proving that the sum of the first n integers is equal to \frac{n(n+1)}{2} by giving this equation:



1+2+3+\cdots+n=\frac{n(n+1)}{2}.




Then, assuming the formula holds for n=r, he adds (r+1) to both sides:



1+2+3+\cdots+r+(r+1)=\frac{r(r+1)}{2}+(r+1).



Then he solves it:



\frac{(r+1)(r+2)}{2}



Now it seems he's going to prove the arithmetical progression: He says that this can be ordinarily shown by writing the sum 1+2+3+\cdots+n in two forms:




S_n=1+2+\cdots+(n-1)+n



And:



S_n=n+(n-1)+\cdots+2+1



And he states that on adding, we see that each pair of numbers in the same column yields the sum n+1 and, since there are n columns in all, it follows that:



2S_n=n(n+1).




I can't understand why he needs to prove the sum of the first n integers first. Can you help me?



Thanks in advance.



EDIT: I've found a copy of the book on scribd, you can check it here. This link will get you in the page I'm in.



EDIT:
I kinda understand the proofs presented in the book now, but I can't see how they are connected to produce a proof about arithmetic progressions. I've read the Wikipedia article about arithmetic progressions, and this a_n = a_m + (n - m)d (or at least something similar) would be more plausible as a proof about arithmetic progressions - what do you think?


Answer



He is giving two different proofs, one by a formal induction, and the other a more intuitive one. Good idea, two proofs makes the result twice as true! More seriously, he is probably taking this opportunity to illustrate proof by induction.




It is important to know the structure of a proof by induction. In order to show that a result holds for all positive integers n one shows (i) that the result holds when n=1 and (ii) that for any r, if the result holds when n=r, then it holds when n=r+1.



(i) is called the base step and (ii) is called the induction step.



Almost always, the induction step is harder than the base step.



Here is how the logic works. By (i), the result holds when n=1. By (ii), because the result holds for n=1, it holds when n=2 (we have taken r=1). But because the result holds for n=2, it holds when n=3 (here we have taken r=2). But because the result holds when n=3, we can argue in the same way that the result holds when n=4. And so on.



In our example, suppose that we know that for a specific r, like r=47, we have

1+2+\cdots+r=\frac{r(r+1)}{2}.
We want to show that this forces the result to hold for the "next" number.
Add (r+1) to both sides. We get
1+2+\cdots +r+(r+1)=\frac{r(r+1)}{2}+(r+1).
Now we do some algebraic manipulation:
\frac{r(r+1)}{2}+(r+1)=\frac{r(r+1)+2(r+1)}{2}=\frac{(r+1)(r+2)}{2},
which is what the formula we are trying to prove predicts when n=r+1. We have taken care of the induction step. The base step is easy. So we have proved that 1+2+\cdots+n=\frac{n(n+1)}{2} for every positive integer n.



Remark: Here is another way to think about the induction, one that I prefer. Suppose that there are positive integers n for which the result is not correct. Call such integers n bad. If there are bad n, there is a smallest bad n. It is easy to see that 1 is not bad.




Let r+1 be the smallest bad n.



Then r is good, meaning that 1+2+\cdots+r=\frac{r(r+1)}{2}. Now we argue as in the main post that 1+2+\cdots +r+(r+1)=\frac{(r+1)(r+2)}{2}. That shows that r+1 is good, contradicting the assumption that it is bad.


Sunday, July 17, 2016

complex analysis - Determining Points of mathbb{C} where f(z) is differentiable


Sorry for the question, but I'm partially confused about the result and I'm hoping someone can help. Here we go:



Stating clearly any result you use, determine the set of points in \mathbb{C} where the following functions are differentiable. f(z) = \left\{ \begin{matrix} \frac{z^4}{|z|^2} ~ \text{if z}\neq 0\\ 0 ~\text{if z} = 0\end{matrix}\right.



So here's where I'm at so far. Let z\neq 0; then we use Euler's form, giving f(z) = \frac{|z|^4(\cos{4\theta}+i\sin{4\theta})}{|z|^2} via De Moivre's theorem \implies (subbing |z| = r) f(z) = r^2(\cos{4\theta}+i\sin{4\theta})


From here, we consider f as f: \mathbb{R}^2 \longrightarrow \mathbb{R}^2 where f(u(r,\theta),v(r,\theta)) = u(r,\theta)+i v(r,\theta)



for f to be holomorphic (this is one of the problems i've got already...the phrasing of the question says to determine the points in \mathbb{C} where f is differentiable, and not necessarily holomorphic?)


either way...crack on: for f to be holomorphic, u,v must be Real-differentiable (which they are) and the CR-Equations must be satisfied.


Using u(r,\theta)=r^2 \cos{4\theta}~\&~v(r,\theta) = r^2\sin{4\theta}


we need to satisfy u_{r}= \frac{1}{r}v_{\theta}, ~\&~ v_r = -\frac{1}{r}u_{\theta}


Running the calculations we have u_r = 2r \cos{4\theta} u_{\theta} = -4r^2 \sin{4\theta} v_{r} = 2r \sin{4\theta} v_{\theta} = 4r^2 \cos{4\theta}


Substituting the above values gives u_{r}= \frac{1}{r}v_{\theta} \implies 2r \cos{4\theta} = 4r \cos{4\theta} \implies 1 = 2 and v_{r}= -\frac{1}{r}u_{\theta} \implies 2r \sin{4\theta} = 4r \sin{4\theta} \implies 1 = 2


What have I done wrong? If it's nothing... does this mean that the function isn't differentiable at any point z \neq 0 in \mathbb{C}?


cheers for reading.



I think it may be a bit of a long day on my behalf, and I think I'm just doing something stupid at this point... but tbh I just can't see it right now.




If I subtract one side from the other and then compare the two, I get that (r,\theta) = (1/2,\theta) satisfies both equations; otherwise nothing else does, as cos and sin are never zero at the same time.


does this mean that f(z) is holomorphic on the disk of radius 1/2?


Cheers for reading.


Answer



The function is real differentiable, indeed smooth, for z\ne 0, being the quotient of two smooth functions. Now, note that for z\ne 0 we can rewrite f(z) = \frac{z^4}{|z|^2} = \frac{z^4}{z\bar z} = \frac{z^3}{\bar z}; thus, \dfrac{\partial f}{\partial\bar z} \ne 0, and so f is not holomorphic.


Now, what about at z=0? Write down the difference quotient f'(0) = \lim_{h\to 0}\frac{f(h)-f(0)}h = \lim_{h\to 0}\frac{\frac{h^3}{\bar h}}h = \lim_{h\to 0} \frac{h^2}{\bar h}. Note that \left|\frac{h^2}{\bar h}\right| = \left| h\cdot \frac{h}{\bar h}\right| = |h|\left|\frac h{\bar h}\right| = |h| \to 0, and so f'(0)=0 and f is complex differentiable at 0.


sequences and series - Why does sum_{n = 0}^infty frac{n}{2^n} converge to 2?

Apparently,


\sum_{n = 0}^\infty \frac{n}{2^n}


converges to 2. I'm trying to figure out why. I've tried viewing it as a geometric series, but it's not quite a geometric series since the numerator increases by 1 every term.
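Not part of the original post, but one standard route is to differentiate the geometric series: for |x|<1, \sum_{n\ge0}x^n = \frac{1}{1-x}, so \sum_{n\ge1}nx^{n-1} = \frac{1}{(1-x)^2}, hence \sum_{n\ge0}nx^n = \frac{x}{(1-x)^2}, which at x=\frac12 gives \frac{1/2}{(1/2)^2} = 2. A quick numeric check of the partial sums:

```python
# Partial sums of sum_{n>=0} n / 2^n; the closed form x/(1-x)^2 at x = 1/2 gives 2.
def partial_sum(N):
    return sum(n / 2**n for n in range(N + 1))

print(partial_sum(60))  # very close to 2
```

The tail beyond N terms is on the order of N/2^N, so the partial sums converge to 2 very quickly.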

elementary number theory - A contradiction proof of "If (n^q - 1) is divisible by p, then show that q \mid (p-1)".

Let \,p, q\, be prime numbers and \,n\,\in \mathbb N such that (n-1) is not divisible by p. If \,(n^q-1)\, is divisible by p then show that q \mid (p-1).




How can I prove it by contradiction? Suppose (p-1) is not divisible by q; how can I then reach a contradiction by showing that (n^q-1) is not divisible by p?



Please help me to solve it. Thanks in advance.
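For what it's worth, the usual (direct) route goes through the multiplicative order d of n modulo p: from p \mid n^q-1 we get d \mid q, and d \neq 1 because p \nmid n-1, so d = q since q is prime; Fermat's little theorem then gives d \mid p-1, i.e. q \mid p-1. A brute-force sanity check of the statement for small primes (my sketch, not from the post):

```python
def order_mod(n, p):
    """Multiplicative order of n modulo p (assumes p prime and p does not divide n)."""
    d, x = 1, n % p
    while x != 1:
        x = x * n % p
        d += 1
    return d

def check(limit=30):
    # Verify: p, q prime, p does not divide n-1, p divides n^q - 1  =>  q divides p-1.
    primes = [x for x in range(2, limit) if all(x % d for d in range(2, x))]
    for p in primes:
        for q in primes:
            for n in range(2, limit):
                if (n - 1) % p != 0 and (n**q - 1) % p == 0:
                    assert (p - 1) % q == 0, (p, q, n)
    return True

assert check()
```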

Saturday, July 16, 2016

find the square root of the complex number 1 + 2i?


Find the square root of the complex number 1 + 2i




I have tried this question many times, but I could not get it and I don't know where to start.



Actually, after squaring 1+2i I got -1+2i, and I also tried multiplying by 1+i. However, I don't know the exact answer.



Thanks in advance
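A sketch of the standard approach (not from the original post): write w = a+bi with w^2 = 1+2i; comparing real and imaginary parts gives a^2-b^2 = 1 and 2ab = 2, so b = 1/a and a^4-a^2-1 = 0, i.e. a^2 = \frac{1+\sqrt5}{2}. Numerically:

```python
import cmath
import math

# Solve (a + bi)^2 = 1 + 2i: a^2 - b^2 = 1 and 2ab = 2  =>  a^2 = (1 + sqrt(5)) / 2.
a = math.sqrt((1 + math.sqrt(5)) / 2)
w = complex(a, 1 / a)  # one of the two square roots; the other is -w
assert abs(w * w - (1 + 2j)) < 1e-12

# Cross-check against the library's principal square root.
print(cmath.sqrt(1 + 2j))
```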

functional equations - Real Analysis Proofs: Additive Functions

I'm new here and could really use some help please:




Let f be an additive function. So for all x,y \in \mathbb{R}, f(x+y) = f(x)+f(y).




  1. Prove that if there are M>0 and a>0 such that if x \in [-a,a], then |f(x)|\leq M, then f has a limit at every x\in \mathbb{R} and \lim_{t\rightarrow x} f(t) = f(x).


  2. Prove that if f has a limit at each x\in \mathbb{R}, then there are M>0 and a>0 such that if x\in [-a,a], then |f(x)| \leq M.




If necessary, the proofs should involve the \delta - \varepsilon definition of a limit.







The problem had two previous portions to it that I already know how to do. However, you can reference them to do the posted portions of the problem. Here they are:



(a) Show that for each positive integer n and each real number x, f(nx)=nf(x).



(b) Suppose f is such that there are M>0 and a>0 such that if x\in [−a,a], then |f(x)|\le M. Choose \varepsilon > 0. There is a positive integer N such that M/N < \varepsilon. Show that if $|x-y|

number theory - Consecutive last zeroes in expansion of 100!








In decimal form, the number 100! ends in how many consecutive zeroes? I am thinking of the factorization of 100!, but I am stuck. I tried to count them, and since there are 10, 20, 30, ..., 100, there are at least 11 zeros. How should I proceed?
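For reference, the standard count uses Legendre's formula on the prime 5 (factors of 2 are plentiful): \lfloor 100/5\rfloor + \lfloor 100/25\rfloor = 20+4 = 24 trailing zeros. A quick check (my sketch):

```python
from math import factorial

def trailing_zeros(n):
    """Trailing zeros of n!: count factors of 5 via Legendre's formula."""
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

s = str(factorial(100))
assert len(s) - len(s.rstrip('0')) == trailing_zeros(100) == 24
```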

Is there a way to introduce quaternions and octonions in a similar way to how we are typically introduced to complex numbers?

So I've been reading a little bit into ideas around quaternions and octonions. I just read the following explanation that introduces them as what happens when you have complex numbers and you then ask "but what if there was another square root of -1?":



http://www.askamathematician.com/2015/02/q-quaternions-and-octonions-what/



Now I've skim-read different things and seen a few different ways of introducing and explaining how complex numbers, quaternions and octonions relate to each other, however this particular explanation made me curious.




See, when we get introduced to complex numbers, it's not as a "oh what if there was an extra square root to -1" question, but a "there should be a square root to -1 but with only real numbers it's undefined". That is, there's an actual equation {x}^{2} = -1 that you're trying to solve and can't with only real numbers.



My main question then - is there a similar equation or problem where, with only real and complex numbers, you cannot solve it without introducing quaternions? And similarly again for octonions?



Finally, if instead the way the website above introduces these concepts is, in some sense, fundamental, then what exactly is special about the number -1? For instance, why not introduce new number systems based on defining new square roots of some completely different number(s)?

algebra precalculus - Easy way to solve w^2 = -15 + 8i


Solve w^2=−15+8i, where w is complex.





Normally, I would convert this into polar coordinates, but the problem is that it is too slow.



What is another alternative?
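One common fast alternative (a sketch, not from the post): write w = a+bi, so a^2-b^2 = -15 and 2ab = 8; then b = 4/a and a^4+15a^2-16 = 0, giving a^2 = 1, so w = \pm(1+4i). Checking:

```python
# w = a + bi with a^2 - b^2 = -15 and 2ab = 8 leads to a^2 = 1, b = 4/a.
for w in (1 + 4j, -1 - 4j):
    assert w * w == -15 + 8j
print("w = 1 + 4i or w = -(1 + 4i)")
```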

number theory - Find all integers such that 2n^2+1 divides n^3+9n-17



Find all integers such that 2n^2+1 divides n^3+9n-17.



Answer: n = 2 and n = 5




I did it.



Since 2n^2+1 divides n^3+9n-17, we need 2n^2+1 \leq n^3+9n-17 \implies n \geq 2



So n = 2 is a solution, and no solution exists when n < 2. What can I do now to find 5? Or better, how can this be solved with another good method?



Thanks


Answer



HINT:




If integer d divides n^3+9n-17,2n^2+1



d must divide 2(n^3+9n-17)-n(2n^2+1)=17n-34



d must divide 17(2n^2+1)-2n(17n-34)=68n+17



d must divide 68n+17-4(17n-34)=153



So the necessary condition is 2n^2+1 must divide 153




$\implies 2n^2+1\le153\iff n^2\le76\iff -9<n<9$
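A brute-force check over positive n (mine, not part of the answer; the hint bounds 2n^2+1 \le 153, so n \le 8 already suffices) confirms exactly n = 2 and n = 5:

```python
# Positive n with (2n^2 + 1) | (n^3 + 9n - 17); by the hint 2n^2 + 1 must divide 153,
# so n <= 8 suffices, but a wider range costs nothing.
sols = [n for n in range(1, 1000) if (n**3 + 9 * n - 17) % (2 * n**2 + 1) == 0]
print(sols)  # [2, 5]
```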

Modular Linear Congruences Puzzle

I'm learning about solving systems of modular linear congruences in my discrete mathematics class. Recently, my teacher posed a puzzle that I can't seem to solve:



These eight small-number triples are not random:


[[1 1 3] [1 1 4] [1 4 3] [1 4 4] [2 1 3] [2 1 4] [2 4 3] [2 4 4]]


They have something to do with the product of the first three odd primes and the fourth power of two.


Find the connection.



From what I can tell, the triples form the Cartesian product of [1 2], [1 4], and [3 4], and those pairs sum to the first three odd primes (3, 5, 7), as the teacher hinted. I still can't find a link between the triples and the fourth power of two, though. My teacher said it has something to do with modular linear congruences. What am I missing?


This is an example of modular linear congruences:



x \equiv_7 0


x \equiv_{11} 8


x \equiv_{13} 12


Solution: x \equiv_{1001} 987
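The example system can be solved mechanically with the Chinese Remainder Theorem; here is a small sketch (my code, not from the post):

```python
def crt(residues, moduli):
    """Combine congruences x ≡ r (mod m); moduli must be pairwise coprime."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # Solve x + M*t ≡ r (mod m) for t, using the inverse of M modulo m.
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x % M

print(crt([0, 8, 12], [7, 11, 13]))  # 987, modulo 7*11*13 = 1001
```

`pow(M, -1, m)` (Python 3.8+) computes the modular inverse that makes each step solvable.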

Friday, July 15, 2016

trigonometry - Alternative of finding theta when sin theta and cos theta are given




For example, we're given a problem in which \sin\theta = \sqrt3/2 and \cos\theta = -1/2. To find the angle \theta, I look at the unit circle and get the answer. However, I was just curious whether there's an alternative to this; any ideas? When I tried using \cos(-\theta) = \cos\theta, I got the wrong value of \theta, since we've been provided with the value of \sin\theta as well...


Answer



\sin\theta=\frac{\sqrt3}2=\sin\frac\pi3\implies\theta=n\pi+(-1)^n\frac\pi3
where n is any integer



Set n = 2s+1 (odd) and n = 2s (even) one by one



Again,
\cos\theta=-\frac12=-\cos\frac\pi3=\cos\left(\pi-\frac\pi3\right)

\implies\theta=2m\pi\pm\left(\pi-\frac\pi3\right)
where m is any integer



Check for '+','-' one by one



Observe that the intersection of the above two solutions is \theta=2r\pi+\frac{2\pi}3
where r is any integer
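As a purely computational alternative to reading the unit circle (my suggestion, not the answerer's): atan2 combines the signs of the sine and cosine values and returns the unique angle in (-\pi, \pi], which here is 2\pi/3, matching the intersection above:

```python
import math

# atan2(sin_value, cos_value) resolves the quadrant automatically.
theta = math.atan2(math.sqrt(3) / 2, -1 / 2)
assert abs(theta - 2 * math.pi / 3) < 1e-12
print(theta)  # about 2.0944, i.e. 2*pi/3
```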


divisibility - Prove that 5mid 8^n - 3^n for n ge 1


I have that 5\mid 8^n - 3^n


The first thing I tried is vía Induction:


It is true for n = 1; then I have to prove that it's true for n+1.


5 \mid 8(8^n -3^n)
5 \mid 8^{n+1} -8\cdot3^n
5 \mid 3(8^{n+1} -8\cdot3^n)
5 \mid 3\cdot8^{n+1} -8\cdot3^{n+1}


After this, I don't know how to continue. Then I saw an example using the property that (a+b)^n = am + b^n for some integer m (for instance, when n = 2, m = a + 2b).



5 \mid 8^n -3^n
5 \mid (5+3)^n -3^n
5 \mid 5m + 3^n - 3^n
5 \mid 5m


So, d \mid a only if a = kd. From this I get that 5 \mid 5 m.


My questions:


1) Is the exercise correct?


2) Could it have been resolved via method 1?


Thanks a lot.


Answer



For induction, you have


\begin{align}8^{n+1} - 3^{n+1} &= 8\cdot 8^n - 3\cdot3^n\\&= 3(8^n - 3^n) + 5\cdot8^n\end{align}


Note that the first term must be divisible by 5 because 8^n-3^n is divisible by 5.
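An even shorter route (an aside, not the answerer's): 8 \equiv 3 \pmod 5, so 8^n \equiv 3^n \pmod 5 for every n \ge 1. A quick check:

```python
# 8 ≡ 3 (mod 5) forces 8^n ≡ 3^n (mod 5), i.e. 5 | 8^n - 3^n.
assert all((8**n - 3**n) % 5 == 0 for n in range(1, 100))
assert all(pow(8, n, 5) == pow(3, n, 5) for n in range(1, 100))
```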



real analysis - If fcolon mathbb{R} to mathbb{R} is such that f (x + y) = f (x) f (y) and continuous at 0, then continuous everywhere




Prove that if f\colon\mathbb{R}\to\mathbb{R} is such that f(x+y)=f(x)f(y) for all x,y, and f is continuous at 0, then it is continuous everywhere.




If there exists c \in \mathbb{R} such that f(c) = 0, then
f(x + c) = f(x)f(c) = 0.
As every real number y can be written as y = x + c for some real x, this function is either everywhere zero or nowhere zero. The latter case is the interesting one. So let's consider the case that f is not the constant function f = 0.




To prove continuity in this case, note that for any x \in \mathbb{R}
f(x) = f(x + 0) = f(x)f(0) \implies f(0) = 1.



Continuity at 0 tells us that given any \varepsilon_0 > 0, we can find \delta_0 > 0 such that |x| < \delta_0 implies
|f(x) - 1| < \varepsilon_0.



Okay, so let c \in \mathbb{R} be fixed arbitrarily (recall that f(c) is nonzero). Let \varepsilon > 0. By continuity of f at 0, we can choose \delta > 0 such that
|x - c| < \delta\implies |f(x - c) - 1| < \frac{\varepsilon}{|f(c)|}.



Now notice that for all x such that |x - c| < \delta, we have

\begin{align*} |f(x) - f(c)| &= |f(x - c + c) - f(c)|\\ &= |f(x - c)f(c) - f(c)|\\ &= |f(c)| |f(x - c) - 1|\\ &\lt |f(c)| \frac{\varepsilon}{|f(c)|}\\ &= \varepsilon. \end{align*}
Hence f is continuous at c. Since c was arbitrary, f is continuous on all of \mathbb{R}.



Is my procedure correct?



Answer



One easier thing to do is to notice that f(x)=(f(x/2))^2, so f is nonnegative; assume it is never zero, since if f vanishes anywhere then the functional equation makes it identically zero, and hence f is strictly positive. Then you can define g(x)=\ln f(x), and this function g will satisfy the Cauchy functional equation
g(x+y)=g(x)+g(y)
and the theory for this functional equation is well known, and it is easy to see that g is continuous if and only if it is continuous at 0.


Limit of sequence. with Factorial


Can't find the limit of this sequence: \frac{3^n(2n)!}{n!(2n)^n} I tried to solve this using the ratio test but failed... I need a little help.


Answer



What's the problem with the ratio test?:


\frac{a_{n+1}}{a_n}=\frac{(2n+2)!\color{red}{3^{n+1}}}{(n+1)!(\color{green}{2}(n+1))^{n+1}}\frac{n!(\color{green}{2}n)^n}{(2n)!\color{red}{3^n}}=\frac{(2n+1)\cdot3}{n+1}\left(\frac{n}{n+1}\right)^n\xrightarrow[n\to\infty]{}\frac6e>1



and thus a_{n+1} > a_n eventually, so a_n \to \infty and the sequence has no finite limit.
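A numeric sanity check (mine) that the simplified ratio really matches a_{n+1}/a_n and tends to 6/e \approx 2.207:

```python
from fractions import Fraction
from math import factorial, e

def a(n):
    """a_n = 3^n (2n)! / (n! (2n)^n), computed exactly as a Fraction."""
    return Fraction(3**n * factorial(2 * n), factorial(n) * (2 * n)**n)

def ratio(n):
    # The simplified ratio from the answer: a_{n+1}/a_n = 3(2n+1)/(n+1) * (n/(n+1))^n.
    return 3 * (2 * n + 1) / (n + 1) * (n / (n + 1))**n

assert abs(a(21) / a(20) - ratio(20)) < 1e-9
print(ratio(100000))  # tends to 6/e, about 2.2073
```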


Proving some identities in the set of natural numbers without using induction...

I'm not sure how to prove some of the identities without using induction, for example:
1+2+3+...+n=\frac{n(n+1)}{2}
1^2+2^2+...+n^2=\frac{n(n+1)(2n+1)}{6}
1^3+2^3+...+n^3=(\frac{n(n+1)}{2})^2

What my teacher suggested and did for the first example is: take the difference of the squares of two successive integers, sum it over k, and after some transformations get \frac{n(n+1)}{2}. Here's what the teacher did:
(k+1)^2-k^2=2k+1
we sum k-s from 1 to n:
\sum_{k=1}^{n}((k+1)^2-k^2)=2\sum_{k=1}^{n}k+\sum_{k=1}^{n}1
2^2-1^2+3^2-2^2+4^2-3^2+...+(n+1)^2-n^2=2\sum_{k=1}^{n}k+n
(n+1)^2-1=2\sum_{k=1}^{n}k+n
n^2+2n-n=2\sum_{k=1}^{n}k
2\sum_{k=1}^{n}k=n^2+n
\sum_{k=1}^{n}k=\frac{n(n+1)}{2}




What is the method my teacher used here? The teacher also suggested that, for example, if we have the sum of squares of successive integers (like in the second example), we should take two successive numbers and sum their cubes, or if we have the sum of cubes, then we take two successive members of the sum and sum their 4th degrees. Is there a name for this method of solving so I could google it and examine it a bit more?
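This technique is usually called telescoping, or the method of differences; for the sum of squares one sums the identity (k+1)^3-k^3 = 3k^2+3k+1 over k = 1,\dots,n, exactly as the teacher suggested. A quick verification of all three closed forms (note the cube sum is \left(\frac{n(n+1)}{2}\right)^2):

```python
# Verify the three closed forms for small n (telescoping proves them in general).
for n in range(1, 200):
    ks = range(1, n + 1)
    assert sum(ks) == n * (n + 1) // 2
    assert sum(k * k for k in ks) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(k**3 for k in ks) == (n * (n + 1) // 2) ** 2
```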

Wednesday, July 13, 2016

number theory - Prove x = sqrt[100]{sqrt{3} + sqrt{2}} + sqrt[100]{sqrt{3} - sqrt{2}} is irrational


Prove x = \sqrt[100]{\sqrt{3} + \sqrt{2}} + \sqrt[100]{\sqrt{3} - \sqrt{2}} is irrational.


I can prove that x is irrational by showing that it's a root of a polynomial with integer coefficients and use rational root theorem to deduce it must be either irrational or an integer and then show it's not an integer, therefore, it must be irrational.


I was wondering what are the other methods to prove x is irrational. I'd be very interested in seeing alternative proofs.


Answer



Let y = \sqrt[100]{\sqrt{3} + \sqrt{2}}. Then x = y + {1 \over y}. Suppose x were some rational number q. Then y^2 - qy + 1 = 0. This means {\mathbb Q}(y) is a field extension of {\mathbb Q} of degree two, and every rational function of y is in this field extension. This includes y^{100} = \sqrt{3} + \sqrt{2}, and y^{-100} = \sqrt{3} - \sqrt{2}. So their sum and difference are also in {\mathbb Q}(y). Hence {\mathbb Q}(\sqrt{2},\sqrt{3}) \subset {\mathbb Q}(y). But {\mathbb Q}(\sqrt{2},\sqrt{3}) is an extension of {\mathbb Q} of degree 4, a contradiction.


You can make the above more elementary by showing successive powers of y are always of the form q_1y + q_2 with q_1 and q_2 rational and eventually showing some q_3\sqrt{2} + q_4\sqrt{3} must be rational, a contradiction.


analysis - Injection, making bijection

I have an injection f \colon A \rightarrow B and I want to get a bijection. Can I just restrict the codomain to f(A)? I know that every function i...