Thursday, November 30, 2017

How does trigonometry in a Galois field work?

This is a follow-up to this question. I'm interested in doing trigonometry in finite fields on a computer. I do not understand precisely how trigonometric functions are supposed to work in a finite (Galois) field. I've read the Wikipedia article but I'm having trouble understanding what sorts of angles and numbers are representable in finite fields.




Here is what I do understand:



Starting with the 2D Cartesian plane with coordinates x, y, we can represent discrete angles that are multiples of 90^\circ=\frac{\pi}{2}. These are the fourth roots of unity x=\cos\frac{2k\pi}{4} and y=\sin\frac{2k\pi}{4}, or alternatively:
z=\cos\frac{k\pi}{2}+i\sin\frac{k\pi}{2}, where k is a positive integer less than 4. These numbers can be represented solely with the integers. If we want to add discrete angles that are multiples of 30^\circ=\frac{2\pi}{12}, we need a quadratic extension of the integers so that we have quadratic (algebraic) integers of the form a+b\sqrt{3}. This allows us to represent the twelfth roots of unity as x and y coordinates. If we wish to double the number of angles, to multiples of 15^\circ=\frac{2\pi}{24}, we must extend our field again, forming a tower of quadratic extensions with numbers of the form (a+b\sqrt{3})+(c+d\sqrt{3})\sqrt{2}. Numbers of this form allow us to represent the 24th roots of unity.



How does this work in a finite field? Can I choose a finite field such that I can exactly represent the nth roots of unity in a manner analogous to the above? I'm particularly interested in constructible numbers, which feature only quadratic extensions (and multiquadratic extensions like \sqrt{5+\sqrt{5}}). In particular this means that n is restricted to having factors of 2 and Fermat primes. I restricted myself to powers of 2 and the Fermat prime 3 in my example above. Both 12 and 24 have factors of only 2 and 3.



- Edit -




To try to clarify what I'm struggling with: I do not see how to find or use a finite field that has been extended twice or more (e.g. for angles of \frac{\pi}{12} as described above), as the relationship to the complex plane in a finite field setting seems to blur as the tower of extensions grows.



This is a new subject for me, so I'd really appreciate an example or two to go along with any explanations.

sequences and series - Finding the n^{\text{th}} term of \frac14+\frac{1\cdot3}{4\cdot6}+\frac{1\cdot3\cdot5}{4\cdot6\cdot8}+\ldots



I need help finding the n^{\text{th}} term of this infinite series:
s=\frac14+\frac{1\cdot3}{4\cdot6}+\frac{1\cdot3\cdot5}{4\cdot6\cdot8}+\cdots

Could you help me in writing the general term/solving?


Answer



The first thing you can do is start with a_1=\frac14 and then realize that a_{n+1}=a_n\cdot\frac{2n+1}{2n+4};
that doesn't seem to get you anywhere, however.



As commentator Tenali notes, you can write the numerator as 1\cdot3\cdots(2n-1)=\frac{1\cdot2\cdot3\cdots(2n)}{2\cdot4\cdots(2n)}=\frac{(2n)!}{2^n n!}



The denominator, on the other hand, is 4\cdot6\cdots(2(n+1))=2^n(2\cdot3\cdots(n+1))=2^n(n+1)!



So this gives the result:




a_n = \frac{(2n)!}{2^{2n} n!(n+1)!} = \frac{1}{2^{2n}}\frac{1}{n+1}\binom{2n}{n} = \frac{1}{2^{2n}}C_n



where C_n is the n^{\text{th}} Catalan number.



If all you want is the n^{\text{th}} term, that might be enough - you can even skip the part about Catalan numbers and just write it as a_n=\frac{1}{4^n(n+1)}\binom{2n}n.



As it turns out, the Catalan numbers have a generating function (see the link above):



\frac{2}{1+\sqrt{1-4x}} = \sum_{n=0}^\infty C_nx^n




So, if the series converges when x=\frac{1}{4}, it converges to 2.



(It does converge, since C_n \sim \frac{2^{2n}}{n^{3/2}\sqrt{\pi}}.)
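As a quick numerical sanity check (a minimal Python sketch, not from the original answer), the partial sums of \sum_{n\ge0} C_n x^n at x=\frac14 do creep up to 2; the term ratio used below follows from C_{n+1}=C_n\cdot\frac{2(2n+1)}{n+2}.

    # Partial sums of sum_{n>=0} C_n / 4^n, where C_n is the n-th Catalan number.
    # Terms t_n = C_n / 4^n satisfy t_0 = 1 and t_{n+1} = t_n * (2n+1)/(2n+4).
    t, total = 1.0, 0.0
    for n in range(100000):
        total += t
        t *= (2 * n + 1) / (2 * n + 4)
    print(total)  # ~1.996; the tail decays like n**(-1/2), so convergence is slow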


Wednesday, November 29, 2017

real analysis - \text{Find } \lim_{n\rightarrow \infty} \frac{n}{a^n}



\text{Find } \\ \lim_{n\rightarrow \infty} \frac{n}{a^n} \\ \text{ where a > 0 is a number.}






\text{To solve this problem we first realize that we need to use l'hopitals rule} \\ \text{because if we evaluate at infinity we get } \\



\lim_{n\rightarrow \infty} \frac{n}{a^n} = \frac{\infty}{\infty} \\ \text{using l'hopitals rule we get } \\ \lim_{n \rightarrow \infty}\frac{1}{n \cdot x^{n-1}} = 0 \\ \text{If we take lhopitals rule again we get } \\ \lim_{n \rightarrow \infty} \frac{0}{(n-1) \cdot n \cdot x^{n-2}} =0 \\ \text{Thus } \ \ \lim_{n\rightarrow \infty} \frac{n}{a^n} = 0 \\ \text{Is this the correct procedure that one uses in order to find the limit for this problem?} \\ \text{Is there any mistakes in my solution?}



Answer



L'Hospital's rule is only applicable when you have a limit of the form



\frac{\infty}{\infty}



or



\frac{0}{0}



Thus we need to consider 3 cases: (a) 0<a<1, (b) a=1, and (c) a>1.




(a) For a<1 we have



\lim_{n\to\infty}\frac{n}{a^n}=\infty



because \lim_{n\to\infty}a^n=0 and \lim_{n\to\infty}n=\infty, so we have



\frac{\infty}{0}=\infty, \quad 0<a<1



(b) When a=1, we have




\lim_{n\to\infty}\frac{n}{a^n}=\lim_{n\to\infty}\frac{n}{1^n}=\lim_{n\to\infty}n=\infty, \quad a=1



(c) For a>1 we have



\lim_{n\to\infty}\frac{n}{a^n}=\frac{\infty}{\infty}



This is the only case where L'Hospital's rule is applicable. Here, write



\begin{align}\lim_{n\to\infty}\frac{n}{a^n}&=\lim_{n\to\infty}\frac{\frac{\text{d}}{\text{d}n}\,n}{\frac{\text{d}}{\text{d}n}\,a^n}\\ &=\lim_{n\to\infty}\frac{\frac{\text{d}}{\text{d}n}\,n}{\frac{\text{d}}{\text{d}n}\exp(\ln(a^n))}\\ &=\lim_{n\to\infty}\frac{\frac{\text{d}}{\text{d}n}\,n}{\frac{\text{d}}{\text{d}n}\exp(n\ln(a))}\\ &=\lim_{n\to\infty}\frac{1}{\ln(a)\exp(n\ln(a))}\\ &=\lim_{n\to\infty}\frac{1}{\ln(a)a^n}\\ &=\frac{1}{\infty}\\ \lim_{n\to\infty}\frac{n}{a^n}&=0, \quad a>1 \end{align}



I added extra steps to demonstrate how to perform the derivative of a^n.


calculus - Convergence of the following series



I'm trying to determine the convergence of this series:
\sum \limits_{n=1}^\infty\left(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n}\right)^a
I've tried using d'Alembert's criterion (the ratio test) to solve it.



\lim_{n->\infty}\frac{(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n}\frac{2n}{2n+1})^a}{(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n})^a} = \lim_{n->\infty}\left(\frac{(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n}·\frac{2n}{2n+1})}{(\frac12·\frac34·\frac56·...\frac{2n-3}{2n-2}·\frac{2n-1}{2n})}\right)^a
Which becomes:
\lim_{n->\infty}\left(\frac{2n}{2n+1}\right)^a
But after that, the limit is 1, so the test is inconclusive and the convergence is still unknown.
Any idea?


Answer



Let a_n={1\over2}\cdot{3\over4}\cdot{5\over6}\cdot\ \cdots\ \cdot {{2n-3}\over{2n-2}}\cdot{{2n-1}\over{2n}}.



Note that a_{n+1}=a_n\cdot {2n +1\over2(n+1)}.




We will show that for all n:
{1\over\sqrt{ 4n}} \le a_n\le {1\over\sqrt{ 2n+1}}.
Having established this, it will follow by comparison with p-series that
\sum\limits_{n=1}^\infty a_n^a converges if and only if a>2.



We establish the inequalities by induction.



Towards showing that a_n\le {1\over\sqrt{ 2n+1}}, first

note that a_1\le {1\over \sqrt{2\cdot 1+1}}.



Now
suppose a_n \le {1\over\sqrt{ 2n+1}}.
Then a_{n+1}^2\le \biggl[\,{2n +1\over 2(n+1)}\biggr]^2\cdot{1\over 2n+1} ={2n+1\over 4(n+1)^2}={2n+1\over 4n^2+8n+4}\le{2n+1\over 4n^2+8n+3 }={1\over 2n+3}={1\over 2(n+1)+1}.



So, a_n\le {1\over\sqrt {2n+1}} for all n.



Towards showing that {1\over\sqrt{ 4n}} \le a_n, first

note that a_1\ge {1\over \sqrt{4\cdot 1 }}.



Now
suppose a_n \ge {1\over\sqrt{ 4n}}.
Then a_{n+1}^2\ge \Bigl[\,{2n +1\over 2(n+1)}\,\Bigr]^2\cdot{1\over 4n}.
Now
\biggl[\,{2n +1\over 2(n+1)}\,\biggr]^2\cdot{1\over 4n}-{1\over 4(n+1)} ={1\over 16 n(n+1)^2}>0;
thus

a_{n+1}^2\ge{1\over 4(n+1)}.



So, a_n\ge {1\over\sqrt {4n}} for all n.
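A quick exact check of the two inequalities with Python rationals (a sketch added here, not part of the original answer):

    from fractions import Fraction

    # Verify 1/(4n) <= a_n^2 <= 1/(2n+1) exactly, where
    # a_n = (1/2)(3/4)...((2n-1)/(2n)).
    a = Fraction(1)
    for n in range(1, 501):
        a *= Fraction(2 * n - 1, 2 * n)
        assert Fraction(1, 4 * n) <= a * a <= Fraction(1, 2 * n + 1)
    print("bounds hold for n = 1 .. 500")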


elementary number theory - Question about the Euclidean algorithm with multiple integers



I have been trying to do this proof all day, but I still have no idea how to prove it.




Here's the question:



Let a_1, a_2, \dots , a_k be integers with \gcd(a_1, a_2, \dots , a_k) = 1, i.e., the largest
positive integer dividing all of a_1, \dots , a_k is 1.



Prove that the equation
a_1u_1 + a_2u_2 + \dots + a_ku_k = 1
has a solution in integers u_1, u_2, \dots , u_k.



(Hint. Repeatedly apply the extended Euclidean algorithm. You may find it easier to prove a more general statement in which \gcd(a_1, \dots , a_k) is allowed to be larger than 1.)




Thanks a lot.


Answer



Let d be the smallest positive integer representable as a_1u_1+a_2u_2+\cdots +a_nu_n. Suppose that d\gt1, and let p be a prime dividing d. Let's write d=p\delta. What we have so far is



a_1u_1+a_2u_2+\cdots+ a_nu_n=p\delta



By the assumption that \gcd(a_1,a_2,\ldots a_n)=1, there is an a_i not divisible by p. Without loss of generality (or by relabeling the a's), let's assume that it's a_1 that's not divisible by p. This means a_1x+py=1 has a solution (from the usual Euclidean algorithm). Multiplying both sides of this by \delta makes this a_1x\delta+p\delta y=\delta. But we can write this out as



a_1(x\delta+u_1y)+a_2(u_2y)+\cdots+ a_n(u_ny)=\delta




where \delta=d/p\lt d, which contradicts the assumption that d is the smallest positive integer representable as a linear combination of the a_i's. Thus we must have d=1.



Added later: I wasn't particularly satisfied with assuming the n=2 case of what was to be proved. It finally dawned on me that the proof is just as easy if you don't. Instead of writing a_1x+py=1, note simply that if p doesn't divide a_1, then we can certainly write



a_1-pk=r\text{ with } 0\lt r\lt p



Both inequalities are important: We need the remainder r to be positive as well as less than p. Multiplying both sides of this by \delta gives something that can be written out as



a_1(\delta-ku_1)-a_2(ku_2)-\cdots-a_n(ku_n)=r\delta\lt p\delta=d




which gives the same contradiction as before.
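The hint's strategy (repeatedly apply the extended Euclidean algorithm) is easy to carry out mechanically; here is a minimal Python sketch that produces the integers u_1,\dots,u_k for a concrete input (the helper names are mine, not from the problem):

    def ext_gcd(a, b):
        # Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def bezout(nums):
        # Fold the two-number identity over the list:
        # gcd(a_1, ..., a_k) = gcd(gcd(a_1, ..., a_{k-1}), a_k).
        g, coeffs = nums[0], [1]
        for a in nums[1:]:
            g, x, y = ext_gcd(g, a)
            coeffs = [c * x for c in coeffs] + [y]
        return g, coeffs

    nums = [6, 10, 15]  # gcd is 1 even though no pair is coprime
    g, u = bezout(nums)
    print(g, u, sum(a * c for a, c in zip(nums, u)))  # prints: 1 [-14, 7, 1] 1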


Tuesday, November 28, 2017

induction - Every natural number n greater than or equal to 6 can be written in the form n = 3k +4t for some k,t in N



Prove by strong induction that every natural number n \geq6 can be written in the form n=3k+4t for some k,t \in \mathbb{N}.




I'm not entirely comfortable with the concept of strong induction. I believe you assume that P(1), P(2), P(3),... P(n) are true to prove P(n+1) is. Now, how does that fit into this problem? Would the base case be where n=6?


Answer



P(1) through P(5) are vacuously true because 1 through 5 are not greater than or equal to 6. Your base cases are 6, 7, and 8; show by example that you can do each of them (6=3\cdot2, 7=3+4, 8=4\cdot2; see also the sketch below). Intuitively, you can just add enough 3s to get to any number. So assume the claim is true up to n; then, to do n+1, note that you can do n-2 and add a 3.
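A brute-force check of the statement (a small Python sketch, just to illustrate the base cases and the claim):

    def rep(n):
        # Search for k, t >= 0 with n = 3k + 4t.
        for t in range(n // 4 + 1):
            if (n - 4 * t) % 3 == 0:
                return (n - 4 * t) // 3, t
        return None

    print(rep(6), rep(7), rep(8))  # the base cases: (2, 0) (1, 1) (0, 2)
    assert all(rep(n) is not None for n in range(6, 10001))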


decimal expansion - Is 0.9999... an integer?




Just out of curiosity, since
\sum_{i>0}\frac{9}{10^i}=0.999\ldots=1,
does that mean that 0.999\ldots is an integer, by applying the transitive property?



Ty.


Answer



0.999999\ldots is indeed 1, which indeed is a natural number and therefore an integer.



sequences and series - Infinite sum of floor functions



I need to compute this (convergent) sum
\sum_{j=0}^\infty\left(j-2^k\left\lfloor\frac{j}{2^k}\right\rfloor\right)(1-\alpha)^j\alpha
But I have no idea how to get rid of the floor thing. I thought about some variable substitution, but it didn't take me anywhere.



Answer



We'll let M=2^k throughout.



Note that f(j)=j-M\left\lfloor\frac{j}{M}\right\rfloor



is just the modulus operator - it is equal to the smallest nonnegative n such that j\equiv n\pmod {M}



So that means f(0)=0, f(1)=1,...f(M-1)=M-1, and f(j+M)=f(j).



This means that we can write:




F(z)=\sum_{j=0}^{\infty} f(j)z^{j}= \left(\sum_{j=0}^{M-1} f(j)z^{j}\right)\left(\sum_{i=0}^\infty z^{Mi}\right)



But \sum_{i=0}^\infty z^{Mi} = \frac{1}{1-z^{M}}



and f(j)=j for j=0,...,2^k-1, so this simplifies to:



F(z)=\frac{1}{1-z^{M}}\sum_{j=0}^{M-1} jz^j



Finally, \sum_{j=0}^{M-1} jz^j = z\sum_{j=1}^{M-1} jz^{j-1} =z\frac{d}{dz}\frac{z^M-1}{z-1}=\frac{(M-1)z^{M+1}-Mz^{M}+z}{(z-1)^2}




So:



F(z)=\frac{(M-1)z^{M+1}-Mz^{M}+z}{(z-1)^2(1-z^{M})}



Your final sum is \alpha F(1-\alpha), bearing in mind, again, that M=2^k.
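A numerical cross-check of the closed form (a Python sketch added for illustration; the parameter values are arbitrary):

    # Compare alpha * F(1 - alpha) with a direct evaluation of the sum.
    k, alpha = 3, 0.1
    M = 2 ** k
    z = 1 - alpha

    F = ((M - 1) * z ** (M + 1) - M * z ** M + z) / ((z - 1) ** 2 * (1 - z ** M))
    closed = alpha * F

    direct = sum((j - M * (j // M)) * (1 - alpha) ** j * alpha
                 for j in range(5000))
    print(closed, direct)  # the two values agree to many decimal places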


elementary set theory - Bijective function with different domain and co-domain element count

To be bijective is to be both injective and surjective.




In other words, there has to be a one-to-one match, right?



Then how am I supposed to come up with a bijective function if the domain has an even number of naturals and the co-domain has an odd number of naturals?



For example, take the domain \{0,1\} and the co-domain \{3,4,5\}. In this situation it will not be surjective, because not all of the co-domain is hit; and it can't be that an element in the domain hits two or more elements in the co-domain, because then it won't be injective!



Please help me find one, if it's even possible.

real analysis - How many times can these two polynomial intersect with each other?

Suppose g(x)=a_1x+a_2x^2+...+a_kx^k and f(x)=b_jx^j where a_1,a_2...a_k>0 , j\in \{1,2.....,k-1\}, b_j >0 and x\geq0.



Intuitively, I think they can have at most two intersections. Is that correct?



Well, the answer is that g-f has at most two positive roots, by Descartes' rule of signs. Thanks for your help!

Monday, November 27, 2017

algebra precalculus - How to calculate \sqrt{\frac{-3}{4} - i}





I know that the answer to \sqrt{\dfrac{-3}{4} - i} is \dfrac12 - i. But how do I calculate it mathematically if I don't have access to a calculator?


Answer



One of the standard strategies (the other strategy is to do what JM suggested in the comment to the question) is to complete the square and use the fact that i^2 = -1.


\sqrt{\frac{-3}{4}-i}



Add and subtract 1 to get:


\sqrt{\frac{-3}{4}+1-i-1}


Use i^2 = -1 to get:


\sqrt{\frac{-3}{4}+1-i+i^2}


Simplify \frac{-3}{4}+1 to get:


\sqrt{\frac{1}{4}-i+i^2}


Rewrite -i as -2\cdot\frac{1}{2}i to get:


\sqrt{\frac{1}{2^2}-2\cdot\frac{1}{2}i+i^2}


Complete the square to get:


\sqrt{(\frac{1}{2}-i)^2}



Get rid of the square root to get:


\frac{1}{2}-i
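For what it's worth, Python's principal complex square root agrees (a throwaway check, not part of the derivation):

    import cmath

    print(cmath.sqrt(-0.75 - 1j))  # (0.5-1j)
    print((0.5 - 1j) ** 2)         # (-0.75-1j), confirming the square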


abstract algebra - Is the square root of a complex number over a field always well-defined?

A complex number over a field F is defined as a+b\text{i} where a,b\in F and \text{i} is a square root of the additive inverse of the multiplicative identity of F, i.e. \text{i}^2 = -1. I have several questions about this definition.





  1. Is there any restriction on F beyond requiring that square roots of -1 exist? Should F be a quadratically closed field, i.e. a field in which every number has a square root?


  2. How is the square root of a complex number defined? The square root of a complex number over \Bbb R can be defined by De Moivre's formula, but I don't see how this is extended to an arbitrary field.






For question 2, I can imagine a simple definition. For a complex number c=\alpha+\beta\text{i}, we can define the square root of c as a complex number a+b\text{i} such that a^2-b^2=\alpha and 2ab=\beta. But I don't know if this definition is appropriate.



Any help and reference is appreciated. Thank you!

algebra precalculus - How to find the summation of the series 1 + \frac{1}{3} + \frac{1\cdot 3}{3\cdot 6} + \cdots

How can I find the summation of the series 1 + \dfrac {1}{3} + \dfrac {1\cdot 3}{3\cdot 6} + \dfrac {1\cdot 3\cdot 5}{3\cdot 6\cdot 9} + \dfrac {1\cdot 3\cdot 5\cdot 7}{3\cdot 6\cdot 9\cdot 12} + \cdots?



I can't find any particular sequence. Please help!

gcd and lcm - GCD of two big numbers

How to find gcd(5^{100}-1, 5^{120}-1)?
The problem is that the numbers are really big (5^{100} has 70 digits). Neither of the numbers is prime.
I ran Euclid's algorithm in Python and found the answer in less than a second; however, it seems that there is no on-paper approach to this problem, is there?
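There is, in fact, an on-paper route via the classical identity \gcd(a^m-1,a^n-1)=a^{\gcd(m,n)}-1, which here predicts 5^{\gcd(100,120)}-1=5^{20}-1; a one-line Python check (mirroring the Euclid run mentioned above) confirms it:

    import math

    # Euclid on the full 70- and 84-digit integers, checked against 5^20 - 1.
    print(math.gcd(5 ** 100 - 1, 5 ** 120 - 1) == 5 ** 20 - 1)  # True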

real analysis - Proving differentiability implies continuity using sequential definition of derivatives



I have seen many proofs using this approach:



Let us suppose that f is differentiable at x_0. Then
\lim_{x\to x_0} \frac{f(x) - f(x_0)}{x-x_0} = f^{\prime}(x_0)



and hence




\lim_{x\to x_0} \left[f(x) - f(x_0)\right] = \lim_{x\to x_0} \left[ \frac{f(x) - f(x_0)}{x-x_0} \right] \cdot \lim_{x\to x_0} (x-x_0) = f^{\prime}(x_0)\cdot 0 = 0



We have therefore shown that, using the definition of continuous, if the function is differentiable at x_0, it must also be continuous.



However, I was wondering if you can use this same proof using the sequential definition of differentiability that states:




If f is a function that has derivative f'(c) at the point c in the domain of f, this means that if (a_n)_{n=1}^{\infty} is any sequence converging to c such that a_n \not= c is in the domain of f for all n \in \mathbb{N}, then \left[ \frac{f(x_n)-f(c)}{x_n-c}\right]_{n=1}^{\infty} converges to f'(c)





My attempt using this definition:



\left(\frac{f(x_n)-f(c)}{x_n-c}\right)_{n=1}^{\infty}. Let \epsilon >0. Then \left|\frac{f(x_n)-f(c)}{x_n-c}-f'(c)\right| < \epsilon \implies |f(a_n)-f(c)|<(\epsilon + |f'(c)|)|a_n-c|



I thought this could be the start to a proof similar to the one above, but I am stuck after this point. I'm not sure if I have to use the delta-epsilon or sequential definition of continuity to continue with this proof, or if there is another way. Any suggestions would be appreciated.


Answer



I presume x_n is the same as a_n.



If \left|\frac{f(x_n)-f(c)}{x_n-c}-f'(c)\right| < \epsilon for all large n,

then the fact that \left|\frac{f(x_n)-f(c)}{x_n-c}\right|-|f'(c)| \le \left|\frac{f(x_n)-f(c)}{x_n-c}-f'(c)\right|
implies
|f(x_n)-f(c)| \le (|f'(c)|+\epsilon)|x_n-c| for all large n.
Then taking n \to \infty, we have |x_n-c| \to 0 so |f(x_n)-f(c)| \to 0.



If you must use \epsilon-\delta notation, then note that for sufficiently large n we have |x_n-c| < \frac{\epsilon'}{|f'(c)|+\epsilon} so that |f(x_n)-f(c)| < \epsilon'.


Sunday, November 26, 2017

n algebraically independent elements in a field of fractions implies n algebraically independent elements in the k-algebra



Let A be a k-algebra and B be its field of fractions. Suppose \{\frac{f_{1}}{g_{1}},...,\frac{f_{n}}{g_{n}}\} is an algebraically independent set in B. Question: Are there n algebraically independent elements in A?



I can show that either f_{i} or g_{i} is algebraically independent for each of the fractions, using facts about field extensions. So suppose each f_{i} is algebraically independent in A. Does this imply that \{f_{1},...,f_{n}\} is an algebraically independent set in A? If not, how can I show that there are at least n algebraically independent elements in A? My two ideas are (1) use linear combinations of the f_{i}, and (2) successively choose different fractions in B that guarantee that \{f_{1},...,f_{n}\} is algebraically independent. I am not sure how to go about proving that either of these two ways actually works.



I believe some of the comments of the following are relevant:
Extension of residue fields and algebraic independence



Answer



Let T=\{f_1,\dots,f_n,g_1,\dots,g_n\}. Replacing A by k[T] and B by k(T), we may assume A is generated by T.



Now let S\subseteq T be a maximal algebraically independent subset of T. By maximality of S, every element of T is algebraic over k(S), and hence so is every element of B, since B is generated by T as a field extension of k(S). So S is a transcendence basis for B over k. But since \{\frac{f_{1}}{g_{1}},...,\frac{f_{n}}{g_{n}}\} is algebraically independent, B has transcendence degree at least n over k. It follows that S has at least n elements.


integration - Integrating \int^{\infty}_0 e^{-x^2}\,dx using Feynman's parametrization trick



I stumbled upon this short article last weekend; it introduces an integral trick that exploits differentiation under the integral sign. On its last page, the author, Mr. Anonymous, left several exercises without any hints; one of them is to evaluate the Gaussian integral
\int^\infty_0 e^{-x^2} \,dx= \frac{\sqrt{\pi}}{2}
using this parametrization trick. I have been evaluating it through trial and error using different parametrizations, but no luck so far.







Here are what I have tried so far:




  • A first instinct would be to do something like: I(b) = \int^\infty_0 e^{-f(b)x^2}\,dx
    for some permissible function f(\cdot), differentiating it will lead to a simple solvable ode:

    \frac{I'(b)}{I(b)} = -\frac{f'(b)}{2f(b)}
    which gives:
    I(b) = \frac{C}{\sqrt{f(b)}}.
    However, finding this constant C is basically equivalent to evaluating the original integral; we are stuck here without leaving this parametrization-trick framework.


  • A second try involves an exercise on the same page:
    I(b) = \int^\infty_0 e^{-\frac{b^2}{x^2}-x^2}dx.
    Taking derivative and rescaling the integral using change of variable we have:
    I'(b) = -2I(b).
    This gives us another impossible-to-solve constant C in:
    I(b) = C e^{-2b}

    without leaving this framework yet again.


  • The third try is trying modify Américo Tavares's answer in this MSE question:
    I(b) = \int^\infty_0 be^{-b^2x^2}\,dx.
    It is easy to show that:
    I'(b) = \int^\infty_0 e^{-b^2x^2}\,dx - \int^\infty_0 2b^2 x^2 e^{-b^2x^2}\,dx = 0
    by an integration by parts identity:

    \int^\infty_0 x^2 e^{- c x^2}\,dx = \frac{1}{2c}\int^\infty_0 e^{- c x^2}\,dx .
    Then I(b) = C, ouch, stuck again at this constant.







Notice that in the Proving \displaystyle\int_{0}^{\infty} e^{-x^2} dx = \frac{\sqrt \pi}{2} question, Bryan Yock's answer is somewhat similar to the idea of parametrization; however, he has to introduce another parametric integration to produce a definite integral leading to \arctan.




Is there a one-shot parametrization-trick solution like the author Anonymous claimed, namely "creative parameterizations and a dose of differentiation under the integral"?


Answer



Just basically independently reinvented Bryan Yock's solution as a more 'pure' version of Feynman.



Let I(b) = \int_0^\infty \frac {e^{-x^2}}{1+(x/b)^2} \mathrm d x = \int_0^\infty \frac{e^{-b^2y^2}}{1+y^2} b\,\mathrm dy so that I(0)=0, I'(0)= \pi/2 and I(\infty) is the thing we want to evaluate.



Now note that rather than differentiating directly, it's convenient to multiply by some stuff first to save ourselves some trouble. Specifically, note



\left(\frac 1 b e^{-b^2}I\right)' = -2b \int_0^\infty e^{-b^2(1+y^2)} \mathrm d y = -2 e^{-b^2} I(\infty)




Then usually at this point we would solve the differential equation for all b, and use the known information at the origin to infer the information at infinity. Not so easy here because the indefinite integral of e^{-x^2} isn't known. But we don't actually need the solution in between; we only need to relate information at the origin and infinity. Therefore, we can connect these points by simply integrating the equation definitely; applying \int_0^\infty \mathrm d b we obtain



-I'(0)= -2 I(\infty)^2 \quad \implies \quad I(\infty) = \frac{\sqrt \pi} 2
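A crude numerical check of the end result (my addition; simple trapezoidal integration on a truncated range):

    import math

    # Trapezoidal estimate of the Gaussian integral on [0, 10] vs sqrt(pi)/2.
    N, b = 2000, 10.0
    h = b / N
    s = 0.5 * (1.0 + math.exp(-b * b))  # endpoint terms; exp(0) = 1
    s += sum(math.exp(-(i * h) ** 2) for i in range(1, N))
    print(s * h, math.sqrt(math.pi) / 2)  # both ~0.8862269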


abstract algebra - How to divide a number from both sides of a congruence, going from 79^{80}\equiv 1 \pmod{100} to 79^{79}\equiv x \pmod{100}?



This problem is to solve 79^{79} \equiv x \pmod{100}. I'm aware this may be solved by binomial expansion or other methods. But when we apply Euler's theorem we obtain 79^{80} \equiv 1 \pmod{100}, which seems to be very close to our goal. I just need to divide 79 from both sides.



Now I can do this using a stupid method: by subtracting 100 from the LHS to obtain -99, -199, -299, \ldots until "X99" is divisible by 79. I then find that 79 \times(-81)=-6399. So we obtain 79^{80} \equiv -6399 \pmod{100} and divide both sides by 79, as 79 is coprime to 100. This gives me 79^{79}\equiv-81\equiv19 \pmod{100}.



My question is if there is a more systematic/standard way of carrying out a division on both sides, perhaps something related to "inverse" etc. A group theory/ring theory approach is welcome as well.


Answer



Generally this form of the extended Euclidean algorithm is easiest, but here the route below is quicker.




\!\bmod 100\!:\ (\color{#c00}{80\!-\!1})(80\!+\!1)\equiv -1,\ \text{because}\ \color{#0a0}{80^2\equiv 0}



therefore: \ \ \ \color{#c00}{79}^{-1}\equiv -81\equiv \bbox[4px,border:1px solid #c00]{19}.\ Generally, if \,\color{#0a0}{a^n\!\equiv 0}\,, this inverts 1\!-\!a\, [unit + nilpotent] by using a terminating geometric series: \ \dfrac{1}{1\!-\!a} \equiv \dfrac{1-\color{#0a0}{a^n}^{\phantom{|^|}}\!\!\!\!\!}{1-a}\equiv 1\!+\!a\!+\cdots + a^{n-1}






Or using a fractional form of the Extended Euclidean Algorithm, and \,79\equiv \color{#90f}{-21}\!:



{\rm mod}\ 100\!:\,\ \dfrac{0}{100} \overset{\large\frown}\equiv \dfrac{1}{\color{#90f}{-21}} \overset{\large\frown}\equiv \dfrac{\color{#c00}5}{\color{#0a0}{-5}} \overset{\large\frown}\equiv \dfrac{19}1\, or, in equational form




\ \ \ \ \ \ \begin{array}{rrl} [\![1]\!]\!:\!\!\!& 100\,x\!\!\!&\equiv\ \ 0\\ [\![2]\!]\!:\!\!\!& \color{#90f}{-21}\,x\!\!\!&\equiv\ \ 1\\ [\![1]\!]+5[\![2]\!]=:[\![3]\!]\!:\!\!\!& \color{#0a0}{{-}5}\,x\!\!\!&\equiv\ \ \color{#c00}5\\ -[\![2]\!]+4[\![3]\!]=:[\![4]\!]\!:\!\!\!& x\!\!\! &\equiv \bbox[4px,border:1px solid #c00]{19}\\ \end{array}







Or \bmod 100\!:\,\ { \dfrac{-1}{-79}\equiv\dfrac{99}{21}\equiv \dfrac{33}7\,\overset{\rm\color{#c00}{R}_{\phantom{|}}}\equiv\, \dfrac{133}7}\equiv \bbox[4px,border:1px solid #c00]{19}\,\ by \,\small\rm\color{#c00}R = inverse Reciprocity.






Or by CRT: \bmod \color{#0a0}{25}\!:\ x\equiv {\large \frac{1}{79}\equiv \frac{1}4\equiv \,\frac{\!\!-24}4}\equiv \color{#0a0}{-6}.\ \!\bmod\color{#c00} 4\!:\ x\equiv {\large \frac{1}{79}\equiv \frac{1}{-1}}\equiv -1,\ so -1^{\phantom{|^|}}\!\!\!\equiv x \equiv \color{#0a0}{-6\!+\!25}j\equiv 2\!+\!j\iff \color{#c00}{j\equiv 1} \iff x = -6\!+\!25(\color{#c00}{1\!+\!4n}) = \bbox[4px,border:1px solid #c00]{19}^{\phantom{|}}\!+\!100n



Beware: modular fraction arithmetic is valid only for fractions with denominator coprime to the modulus. In particular, it is valid to cancel \,3\, in \,99/21\, above. See here for further discussion.
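All of the routes above can be cross-checked with Python's built-in modular arithmetic (pow with exponent -1 computes a modular inverse, Python 3.8+); this is an aside, not part of the argument:

    print(pow(79, -1, 100))  # 19, the inverse derived above
    print((-81) % 100)       # 19, matching -81 = 19 (mod 100)
    print(pow(79, 79, 100))  # 19, i.e. 79^79 = 19 (mod 100)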


Saturday, November 25, 2017

real analysis - Whether there is a continuous bijection from (0,1) to closed interval [0,1].

Is there a continuous bijection from the open interval (0,1) to [0,1]? The answer is no. How can one prove this?



I think the proof may proceed by contradiction, applying the open mapping theorem. However, (0,1) is not complete. I get stuck.

functional equations - Do there exist functions satisfying f(x+y)=f(x)+f(y) that aren't linear?




Do there exist functions f : \mathbb{R} \rightarrow \mathbb{R} such that f(x+y)=f(x)+f(y), but which aren't linear? I bet you they exist, but I can't think of any examples.



Furthermore, what hypotheses do we need to put on f before no such functions exist? I feel continuity should be enough.


Answer



Yes, continuity is enough: you can quickly show that f(x)=x\cdot f(1) for x\in\mathbb N, then for x\in\mathbb Z and then for x\in\mathbb Q; assuming continuity, this implies validity for all x\in\mathbb R.



Any other functions only exist per Axiom of Choice: View \mathbb R as a vector space over \mathbb Q and take any \mathbb Q-linear map (which need not be \mathbb R-linear).


Multiples of 2 numbers that differ by 1

I have 2 known positive integers, a and b. Is there a 'standard' formula to find the lowest (if possible) positive integers x and y, so that the following is true?




xa = yb + 1


calculus - What is the practical difference between a differential and a derivative?

I ask because, as a first-year calculus student, I am running into the fact that I didn't quite get this down when understanding the derivative:




So, a derivative is the rate of change of a function with respect to changes in its variable, this much I get.



Thing is, definitions of 'differential' tend to be in the form of defining the derivative and calling the differential 'an infinitesimally small change in x', which is fine as far it goes, but then why bother even defining it formally outside of needing it for derivatives?



And THEN, the bloody differential starts showing up as a function in integrals, where it appears to be ignored part of the time, then functioning as a variable the rest.



Why do I say 'practical'? Because when I asked for an explanation from other mathematician parties, I got one involving the graph of the function and how, given a right-angle triangle, a derivative is one of the other angles, where the differential is the line opposite the angle.



I'm sure that explanation is correct as far it goes, but it doesn't tell me what the differential DOES, or why it's useful, which are the two facts I need in order to really understand it.




Any assistance?

Functions-Set Theory Proof that f(C \cup D) = f(C) \cup f(D)











I'm revisiting set theory and am troubled by this question.



Let f:A \rightarrow B, and C \subset A, D \subset A.



Prove that f(C \cup D) = f(C) \cup f(D).



Any thoughts?


Answer



I'll show \subseteq. Let y\in f(C\cup D). Then there exists an x\in C\cup D such that f(x)=y. This means x\in C or x\in D, hence f(x)\in f(C) or f(x)\in f(D). This implies f(x)\in f(C)\cup f(D) and we've established f(C\cup D)\subseteq f(C)\cup f(D). Approach the other containment in a similar manner.



Find the limits without L'Hôpital: \lim_{ x \to 0 }\frac{x-\sin x}{x-\tan x}=?

Find the limits without L'Hôpital's rule
\lim_{ x \to 0 }\frac{x-\sin x}{x-\tan x}=?
My Try:
\lim_{ x \to 0 }\frac{\sin(\pi-x)-\sin x}{\tan(\pi+x)-\tan x}=?\\\lim_{ x \to 0 }\frac{2\sin(\frac{\pi}{2}-x)\cos(\frac{\pi}{2})}{\frac{\sin(\frac{\pi}{2}-x)}{\cos(\pi+x)\cos(x)}}=\lim_{ x \to 0 }\frac{(2\cos x)(-\cos x)(\cos(\frac{\pi}{2}))}{\cos x}=0
but:
\lim_{ x \to 0 }\frac{x-\sin x}{x-\tan x}=-1/2
Where is my mistake?

Friday, November 24, 2017

calculus - Why does the Mean Value Theorem require a closed interval for continuity and an open interval for differentiability?



Why does the Mean Value Theorem assume a closed interval for continuity and an open interval for differentiability?



The MVT says: Let f be a continuous function on [a,b] that is differentiable on (a,b), then....




Is there any example where one of them isn't true so that the MVT is not true?


Answer



Relax the first constraint: Let f:[0,1] \to \mathbb R so that f(0) = 1,f(x) = 0 for x \in \left]0,1\right]. Then (f(1) - f(0))/(1-0) = -1 but f'(x) = 0 on ]0,1[.



Relax the second constraint: Let f(x) = |x| on [-1,1]; then (f(1)-f(-1))/(1-(-1)) = 0 but f'(x) = 0 nowhere.


number theory - Prove that among any 12 consecutive positive integers there is at least one which is smaller than the sum of its proper divisors



Prove that among any 12 consecutive positive integers there is at least one which is smaller than the sum of its proper divisors. (The proper divisors of a positive integer n are all positive integers other than 1 and n which divide n. For example, the proper divisors of 14 are 2 and 7.)



Answer



Hint: Among any 12 consecutive positive integers, there is one that is a multiple of 12.



Can you show that 12n is smaller than the sum of its divisors for any positive integer n?
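The hint is easy to test numerically; a small Python sketch (using the problem's definition of proper divisors, which excludes both 1 and n):

    def proper_divisor_sum(n):
        # Divisors of n other than 1 and n, per the problem statement.
        return sum(d for d in range(2, n // 2 + 1) if n % d == 0)

    # 2n + 3n + 4n + 6n = 15n > 12n already, and the check agrees:
    for n in range(1, 200):
        assert 12 * n < proper_divisor_sum(12 * n)
    print("12n is smaller than its proper-divisor sum for n = 1 .. 199")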


reference request - Overview of basic results on cardinal arithmetic

Are there some good overviews of basic formulas about addition, multiplication and exponentiation of cardinals (preferably available online)?

Thursday, November 23, 2017

linear algebra - Formula for the determinant of a block matrix with diagonal blocks



Is there any formula for determinant of a matrix A that looks like:



A = \begin{pmatrix} \textbf{B}_{11} & \cdots & \textbf{B}_{1L} \\ \vdots & & \vdots \\ \textbf{B}_{L1} & \cdots & \textbf{B}_{LL} \end{pmatrix}




where each \textbf{B}_{ij} is an n \times n diagonal matrix?



Edit: I have found something in a paper. If we write each \textbf{B}_{ij} = \begin{pmatrix} {b}^{(ij)}_1 & \ & 0 \\ \ & \ddots & \ \\ 0 & \ & {b}^{(ij)}_n \end{pmatrix}
it seems that
\det(A) = \displaystyle\prod_{k=1}^{n}\det \begin{pmatrix} b^{(11)}_k & \cdots & b^{(1L)}_k \\ \vdots & & \vdots \\ b^{(L1)}_k & \cdots & b^{(LL)}_k \end{pmatrix}
Any hint on proving it?



Answer



Yes. Let \mathbf C_{ij} be the L \times L matrix defined by
\mathbf C_{ij}(p,q) = \mathbf B_{pq}(i,j)
Notably, we find that \mathbf C_{ij} = 0 when i \neq j. By rearranging the rows and columns of A appropriately, we see that there is a permutation matrix P such that
PAP^{-1} = \pmatrix{\mathbf C_{11} & & \cdots & \mathbf C_{1n} \\ \\ \vdots && \ddots & \vdots\\ \mathbf C_{n1} & & \cdots & \mathbf C_{nn}} = \pmatrix{\mathbf C_{11} & 0& \cdots & 0 \\ 0 & \mathbf C_{22}& \ddots & \vdots \\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & \mathbf C_{nn}}
Conclude that
\det(A) = \det(\mathbf C_{11}) \det(\mathbf C_{22}) \cdots \det(\mathbf C_{nn})







In particular, P is the permutation matrix corresponding to \tau:\{1,\dots,nL\} \to \{1,\dots, nL\} defined by
\tau(1 + (i-1) + n(j-1)) = 1 + (j-1) + L(i-1) \qquad 1 \leq i \leq n, \quad 1 \leq j \leq L
Notably: if x \in \Bbb R^L, y \in \Bbb R^n, and \otimes denotes the Kronecker product, then
P(x \otimes y) = y \otimes x
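A randomized numerical check of the identity (a NumPy sketch I'm adding; the block sizes are arbitrary):

    import numpy as np

    # det(A) vs prod_k det(C_k) for random diagonal blocks B_ij.
    rng = np.random.default_rng(0)
    n, L = 4, 3
    B = rng.standard_normal((L, L, n))  # B[i, j] holds the diagonal of block (i, j)
    A = np.zeros((n * L, n * L))
    for i in range(L):
        for j in range(L):
            A[i * n:(i + 1) * n, j * n:(j + 1) * n] = np.diag(B[i, j])
    lhs = np.linalg.det(A)
    rhs = np.prod([np.linalg.det(B[:, :, k]) for k in range(n)])
    print(lhs, rhs)  # equal up to floating-point error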



sequences and series - Does \sum_{n=1}^\infty \frac{1}{\phi(n)^s} have an Euler product?




Does \sum_{n=1}^\infty \frac{1}{\phi(n)^s}



have an Euler product and a functional equation? \phi(n) is the Euler phi function.



Since \phi(n) is multiplicative, I think the series could have an Euler product and a functional equation.



\sum_{n=1}^\infty \frac{1}{\phi(n)^s}= \sum_{n=1}^\infty \frac{a(n)}{n^s}



where a(n) is sequence https://oeis.org/A058277 in the OEIS.







I did some research and found many relations between the zeta function and other special functions such as:



\sum_{n=1}^\infty \frac{\phi(n)}{n^s}=\frac{\zeta(s-1)}{\zeta(s)}.


Answer



Firstly, \phi(p^k)=p^k\left(1-\frac1p\right)\qquad{\text{$k\ge1$, prime $p$}}



Define f(k)=\phi(k)^{-s}.




For the moment, consider the set of integers S_N=\{k\,|\,k=p_1^{a_1}p_2^{a_2}\cdots p_N^{a_N},a_{(\cdot)}\ge1\}.



Then,
\begin{align} \sum_{k\in S_N}f(k) &=\sum^\infty_{a_N=1}\cdots\sum^\infty_{a_1=1} \left[p_1^{a_1}p_2^{a_2}\cdots p_N^{a_N} \left(1-\frac1{p_1}\right)\cdots\left(1-\frac1{p_N}\right)\right]^{-s} \\ &=\left(\prod^N_{i=1}\frac1{1-1/p_i}\right)^s\cdot\prod^N_{j=1}\sum^\infty_{a_j=1}p_j^{-a_js} \\ &=\prod^N_{i=1}\frac{(1-1/p_i)^{-s}}{p_i^s-1} \end{align}



Define f_i:=\frac{(1-1/p_i)^{-s}}{p_i^s-1}



Now we want to find
\displaystyle{\sum_{k\in S^*_N}f(k)} where S^*_N=\{k\,|\,k=p_1^{a_1}\cdots p_N^{a_N},a_{(\cdot)}\color{red}{\ge0}\}



Summing f(k) over all the elements in S_N^* with a_\alpha=0 and other indexes non-zero gives \displaystyle{\frac1{f_\alpha}\prod^N_{i=1}f_i}.




How about having two zero indexes a_\alpha,a_\beta? \displaystyle{\frac1{f_\alpha f_\beta}\prod^N_{i=1}f_i}.
Three?
\displaystyle{\frac1{f_\alpha f_\beta f_\gamma}\prod^N_{i=1}f_i}.



Summing all these and factorizing naturally give
\sum_{k\in S^*_N}f(k)=\left(1+\frac1{f_1}\right)\left(1+\frac1{f_2}\right)\cdots\left(1+\frac1{f_N}\right)\cdot\prod^N_{i=1}f_i=\prod^N_{i=1}(1+f_i)



Taking the limit N\to\infty, we find that
\sum^\infty_{n=1}\frac1{\phi(n)^s}=\prod_{\text{prime }p}\left(1+\frac{(1-1/p)^{-s}}{p^s-1}\right)




I am still working on the functional equation. It is clear that the function is holomorphic on \text{Re }s>1, and is likely to have a pole at s=1, as plugging in s=1 gives \prod_p\left(1+\frac1p\right)=\infty.



A few more words on analytic continuation:



Obviously,
\begin{align} F(s):=\sum^\infty_{n=1}\frac1{\phi(n)^s} &=\prod_{\text{prime }p}\left(1+\frac{(1-1/p)^{-s}}{p^s-1}\right)\\ &=\prod_{p}\frac1{1-p^{-s}}\cdot\prod_p[1-p^{-s}+(p-1)^{-s}] \\ &=\zeta(s)\cdot \underbrace{\prod_p[1-p^{-s}+(p-1)^{-s}]}_{G(s)} \end{align}




  1. G(s) converges for \text{Re }s>0, as 1-p^{-s}+(p-1)^{-s}=1+sp^{-s-1}+O(p^{-s-2}).

  2. Therefore, F(s) has a simple pole at s=1 due to zeta function. The residue there equals \operatorname*{Res}_{s=1}F(s)=\prod_p\left(1+\frac1{p(p-1)}\right)=\frac{315\zeta(3)}{2\pi^4}
    (See Landau’s totient constant.)

  3. \lim_{s\to0^+}G(s)=1. This can be seen by plugging s=0 into the above expression.

  4. Let's look at G'(0).

    \frac{G'(s)}{G(s)}=\sum_p\frac{p^{-s}\ln p-(p-1)^{-s}\ln(p-1)}{1-p^{-s}+(p-1)^{-s}}
    G'(0)=-G(0)\sum_p\ln\left(1-\frac1p\right)=\infty
    As G(0) is finite and G'(0) is not, this suggests that 0 is a branch point of F(s). Thus, meromorphic continuation is not possible.






Further analysis shows that \text{Re }s=0 is the natural boundary of F(s).



Firstly, by means of successive series expansion, we obtain

\begin{align} \ln(1-p^{-s}+(p-1)^{-s}) &=-\sum^\infty_{n=1}\frac{\left[p^{-s}-(p-1)^{-s}\right]^n}{n} \\ &=-\sum^\infty_{n=1}\frac{1}{n}\sum^n_{r=0}\binom nr p^{-s(n-r)}(-1)^r(p-1)^{-sr} \\ &=-\sum^\infty_{n=1}\frac{p^{-ns}}{n}\sum^n_{r=0}\binom nr (-1)^r\left(1-\frac1p\right)^{-sr} \\ &=-\sum^\infty_{n=1}\frac{p^{-ns}}{n}\sum^n_{r=0}\binom nr (-1)^r\sum^\infty_{k=0}\binom{-sr}{k}\frac{(-1)^k}{p^k} \\ &=\sum^\infty_{n=1}\sum^\infty_{k=0}\alpha_{n,k}(s)\frac1{p^{k+ns}} \end{align}




where \alpha_{n,k}(s)=\frac{(-1)^{k+1}}{n}\sum^n_{r=0}(-1)^r\binom nr \binom{-sr}{k}



Furthermore, notice that
\alpha_{n,0}(s)=-\frac1n\sum^n_{r=0}(-1)^r\binom nr=0



Therefore,
\ln(1-p^{-s}+(p-1)^{-s})=\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\frac1{p^{k+ns}}
\begin{align} \implies \ln G(s) &=\sum_p \ln(1-p^{-s}+(p-1)^{-s}) \\ &=\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\zeta_{\mathbb P}(k+ns) \\ \end{align}
where \zeta_{\mathbb P} is the prime zeta function.



It is well known that \zeta_{\mathbb P} has the natural boundary \text{Re }s=0, because \mathcal S (the set of singularities of \zeta_{\mathbb P}) clusters on the imaginary axis. Hence, obviously G(s) cannot be analytically continued across \text{Re }s=0. A functional equation does not exist.



Meanwhile, we obtained a representation of F(s) in terms of well-known functions:





\sum^\infty_{n=1}\frac1{\phi(n)^s} =\zeta(s)\exp\left[\sum_{(n,k)\in\mathbb N^2}\alpha_{n,k}(s)\zeta_{\mathbb P}(k+ns)\right]
\alpha_{n,k}(s)=\frac{(-1)^{k+1}}{n}\sum^n_{r=0}(-1)^r\binom nr \binom{-sr}{k}
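Truncating both sides at a finite cutoff gives a numerical sanity check of the Euler-product identity at, say, s=2 (a Python sketch; the totient sieve is standard):

    def totients(limit):
        # Sieve of Euler phi values for 1..limit.
        phi = list(range(limit + 1))
        for p in range(2, limit + 1):
            if phi[p] == p:  # p is prime, still untouched
                for m in range(p, limit + 1, p):
                    phi[m] -= phi[m] // p
        return phi

    N, s = 10 ** 5, 2.0
    phi = totients(N)
    lhs = sum(1.0 / phi[n] ** s for n in range(1, N + 1))

    rhs = 1.0
    for p in range(2, N + 1):
        if phi[p] == p - 1:  # p is prime
            rhs *= 1 + (1 - 1 / p) ** (-s) / (p ** s - 1)
    print(lhs, rhs)  # the two truncations agree to a few decimal places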



Complicated problem of Real analysis and set theory from NBHM 2006




Let f be a real-valued function on \Bbb{R}. Consider the functions


w_j(x) = \sup \left\{ \left|f(u)-f(v)\right| : u,v \in \left[x-\frac{1}{j},x+\frac{1}{j}\right] \right\}


where j is a positive integer and x \in\Bbb{R}. Define next


A_{j,n} = \left\{x \in \Bbb{R}: w_j(x)\lt \frac{1}{n}\right\}, \quad n=1,2,3,...


and


A_n = \underset{j=1}{\overset{\infty}\cup}A_{j,n}, n=1,2,3,...


Now let C= \left\{x \in \Bbb{R} : f \text{ is continuous at } x \right\}.


Express C in terms of the sets A_n.


Answer given in solution set as C = \underset{n=1}{\overset{\infty}\cap}A_n




So this question was asked in the 2006 NBHM PhD scholarship exam (India). I have tried to understand it but failed;


then I tried using trivial functions like constant function and Identity function ( which are continuous on \Bbb{R} ).


When I took f equal to a constant function, I got w_j(x) = 0 for each j


and then A_{j,n} = \Bbb{R}, and hence A_n=\Bbb{R} for each n.


And hence C (here\Bbb{R}) can be written as an intersection of A_n's.


When I tried f as the Identity function, calculations became more complicated and eventually, I gave up.


I know that this problem should not be solved by using examples,


I have to find a solution which will work for every real-valued function (Continuous or Discontinuous).


But I'm unable to do so. Please help.


Answer




First let's write down a definition of continuity. A function f is continuous at x iff for every n \in \mathbb{N} there exists a j \in \mathbb{N} such that \sup \{ |f(u) - f(v)| : u,v \in [x- \frac{1}{j}, x + \frac{1}{j}] \} < \frac{1}{n}. (This is worded a little differently from the usual \varepsilon-\delta definition, but it's fairly easy to see it's equivalent.)


Now we simply reword this in terms of the sets in your question. First note that for fixed n \in \mathbb{N} there exists a suitable j iff x \in A_{j,n} for some j \in \mathbb{N}, or equivalently iff x \in A_n. So f is continuous at x iff there is a suitable j for each choice of n, which happens exactly when x \in \bigcap_{n \in \mathbb{N}} A_n, which is the desired result.


vector spaces - Explicit example of a non-constant linear functional f: \mathbb R \to \mathbb Q?

Recall that V=\mathbb R is an uncountable-dimensional vector space over \mathbb Q, since a countable-dimensional vector space over \mathbb Q is itself countable.



Is there any explicit example of a non-constant linear functional f: \mathbb R \to \mathbb Q?



Existence of such a linear functional is almost trivial, but can we give an explicit example of such a 1-form? Also, it is clear that under the usual topology such a map f cannot be continuous, as \mathbb Q is totally disconnected.

Wednesday, November 22, 2017

induction - Is this backwards reasoning?



Yesterday I was answering a question on induction:
Certain step in the induction proof \sum\limits_{i=0}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} unclear



Basically, I was proving a certain formula using induction.




\sum\limits_{i=0}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}



The base case is okay. Now let's assume the formula is valid for N; we want to demonstrate the following, that is,



\sum\limits_{i=0}^{N+1} i^2 = \frac{(N+1)(N+2)(2(N+1)+1)}{6} \ \ (1)



that is to say



\sum\limits_{i=0}^{N} i^2 + (N + 1)^2 = \frac{(N+1)(N+2)(2(N+1)+1)}{6}\ \ (2)




\Rightarrow (thanks to induction hypothesis)



\frac{N(N+1)(2N+1)}{6} + (N+1)^2 = \frac{(N+1)(N+2)(2(N+1)+1)}{6}\ \ (3)



Then I concluded that if one shows that (3) is true (by simplifying, and getting 0 = 0), then the proof is valid and complete.



Some argue this is backwards reasoning; but I can't understand why.



The equalities that I use to go from (1) to (2) to (3) can be used for going from (3) to (2) to (1)




My argument is: if (3) simplifies to 0=0, then it is equivalent to a true statement and therefore true; hence (2) is true, and hence (1) is true, which was what I wanted to prove.



Is this backwards reasoning, and if so, can someone please explain why?


Answer



EDIT: The question has been updated, so the answer is mostly irrelevant (but it still shows how one word here or there can change quite a lot :-) )






I don't think your reasoning was backwards, but its presentation and wording might have been understood so. Specifically, stating "we have" implies it's something we either assume or have already proven rather than something we're trying to prove. Presenting the same reasoning slightly differently could avoid the ambiguity:




Let's evaluate the sum for N+1:
\sum_{i=1}^{N+1} i^2 = \sum_{i=1}^{N} i^2 + (N+1)^2
The induction hypothesis tells us that \sum_{i=1}^{N} i^2 = \frac{1}{6}N(N+1)(2N+1) so
\sum_{i=1}^{N} i^2 + (N+1)^2 = \frac{1}{6}N(N+1)(2N+1) + (N+1)^2
Simplifying the right-hand side yields
\frac{1}{6}N(N+1)(2N+1) + (N+1)^2 = \frac{1}{6}(N+1)\left((N+1)+1\right)\left(2(N+1)+1\right)



Finally, combining the equalities shows that the statement holds for N+1 as well, thus completing the inductive step.


real analysis - Finding \lim\limits_{n \rightarrow \infty}\left(\int_0^1(f(x))^n\,\mathrm dx\right)^\frac{1}{n} for continuous f:[0,1]\to[0,\infty)





Find \lim_{n \rightarrow \infty}\left(\int_0^1(f(x))^n\,\mathrm dx\right)^\frac{1}{n}if f:[0,1]\rightarrow(0,\infty) is a continuous function.





My attempt:



Say f(x) has a maximum value M. Then \left(\int_0^1(f(x))^ndx\right)^\frac{1}{n}\leq\left(\int_0^1M^ndx\right)^\frac{1}{n} =M



I cannot figure out what to do next.


Answer



Your guess that it should be the maximum is a good guess. You have shown that the limit must be \leq M. We will now show that the limit must be greater than or equal to M-\epsilon for any \epsilon, from which you can conclude that the limit is indeed M.




Since f(x) is continuous on [0,1], it attains its maximum M at some point x_0\in[0,1]. Given \epsilon > 0, there exists a \delta > 0 such that
f(x) > M - \epsilon for all x \in (x_0 -\delta, x_0 + \delta). Hence, we have
\int_0^1 f(x)^n dx > \int_{x_0 - \delta}^{x_0 + \delta} f(x)^n dx > \int_{x_0 - \delta}^{x_0 + \delta} (M - \epsilon)^n dx = (M-\epsilon)^n 2\delta
Hence for any \epsilon > 0,
\left(\int_0^1 f(x)^n dx\right)^{1/n} > (M-\epsilon)(2 \delta)^{1/n}
Letting n \to \infty, we get what we want, i.e.,
\lim_{n \to \infty}\left(\int_0^1 f(x)^n dx\right)^{1/n} \geq (M-\epsilon)
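A numerical illustration of the result (a sketch with a sample f; the function choice is mine):

    import numpy as np

    f = lambda x: 1.0 + np.exp(-(x - 0.3) ** 2 / 0.01)  # max value 2 at x = 0.3
    x = np.linspace(0.0, 1.0, 200001)
    for n in (1, 10, 100, 1000):
        val = np.mean(f(x) ** n) ** (1.0 / n)  # mean = integral, since the interval has length 1
        print(n, val)  # climbs toward 2.0 as n grows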


Tuesday, November 21, 2017

logarithms - What, if anything, can be said about \log(f(g(x)))?

Given that you can restrict f and g to any form (convex, monotonic, etc.), what can be said about \log(f(g(x))) (if anything)?



For context:




I am looking to consider replacing weight updates in neural network backpropagation with \log weight updates as a way to deal with vanishing gradients in long chains of partial derivatives. The form for a neural network looks like:



g(W_2g(W_1x)) = \hat{y}



With g and f as any arbitrary non-linear functions. During backpropagation you compute \Delta W_i = \frac{\partial L}{\partial W_i}, which ends up looking like a large chain of partial derivatives \Delta W_i = \frac{\partial L}{\partial h}\frac{\partial h}{\partial a}\frac{\partial a}{\partial W_i}. Taking \log{\Delta W_i} allows you to add the logs of those partial derivatives together instead of multiplying them, but you are left with \log{\Delta W_i} instead of \Delta W_i (see the sketch below).
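To make the motivation concrete, here is a tiny sketch (hypothetical numbers, not an actual network) of why the log-space form is attractive: a long product of small chain-rule factors underflows, while the corresponding sum of logs stays representable.

    import math

    factors = [0.01] * 200  # stand-ins for 200 chain-rule factors

    direct = 1.0
    for g in factors:
        direct *= g          # underflows to exactly 0.0

    log_sum = sum(math.log(g) for g in factors)
    print(direct)            # 0.0
    print(log_sum)           # ~ -921.03, the log of the true product 1e-400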



I think the question ultimately is about whether it is possible to constrain the forward model in such a way (perhaps limiting its expressiveness) that we might use \log{\Delta W_i} to update weights without needing to take e^{\log{\Delta W_i}}. One of my first thoughts was to take \log{\hat{y}} and sort of see what happens, but I realized I didn't know much about what I might be able to do with \log(f(g(x))).



I'm thinking there might be concepts like Jensen's inequality but for composite functions, and then we would seek to minimize our loss function L via an upper or lower bound.

Monday, November 20, 2017

calculus - Evaluating \lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}


I'm supposed to calculate:


\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}


By using W|A, I may guess that the limit is \frac{1}{2}, which is a pretty interesting and nice result. I wonder in which ways we may approach it.



Answer



Edited. I justified the application of the dominated convergence theorem.


By a simple calculation,


\begin{align*} e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!} &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\ (1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\ &= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\ (2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\ &= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\ (3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du. \end{align*}


We remark that


  1. In \text{(1)}, we utilized the famous formula n! = \int_{0}^{\infty} t^n e^{-t} \, dt.

  2. In \text{(2)}, the substitution t + n \mapsto t is used.

  3. In \text{(3)}, the substitution t = n - \sqrt{n}u is used.

Then, in view of Stirling's formula, it suffices to show that


\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.



The idea is to introduce the function


g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u)


and apply pointwise limit to the integrand as n \to \infty. This is justified once we find a dominating function for the sequence (g_n). But notice that if 0 < u < \sqrt{n}, then


\log g_n (u) = n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u = -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}.


From this we have g_n (u) \leq e^{-u^2 /2} for all n and g_n (u) \to e^{-u^2 / 2} as n \to \infty. Therefore by dominated convergence theorem and Gaussian integral,


\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}.
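A quick numerical check of the value 1/2 (my addition; the terms are accumulated iteratively so n^k/k! never overflows):

    import math

    for n in (10, 100, 500):
        term = math.exp(-n)  # k = 0 term
        total = term
        for k in range(1, n + 1):
            term *= n / k
            total += term
        print(n, total)  # ~0.583, ~0.527, ~0.512: drifting toward 1/2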


complex analysis - Can we use analytic continuation to obtain \sum_{n=1}^\infty n = b, \; b\neq -\frac{1}{12}?

Intuitive question



It is a popular math fact that the sum definition of the Riemann zeta function:
\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}
can be extended to the whole complex plane (except s=1) to obtain \zeta(-1)=-\frac{1}{12}. The right-hand side of the above equation at s=-1 formally becomes the sum of the natural numbers, so in some sense we have obtained a value for it. My question is: does this value depend on the choice of the Riemann zeta function as the function to be analytically continued, or do we always get -\frac{1}{12}?




Proper formulation



let (f_n:D\subset \mathbb{C}\rightarrow \mathbb{C})_{n\in \mathbb{N}} be a sequence of functions and a \in \mathbb{C} with \forall n\in \mathbb{N}: f_n(a) = n and
f(z):=\sum_{n=0}^\infty f_n(z)
convergent on a part of the complex plane, such that it can be analytically continued to a part of the plane that contains a. Does it then follow that, under this continuation, f(a)=-\frac{1}{12} and why (or can you give a counterexample)?



Examples




  • The case of the Riemann zeta function is the case where \forall n \in \mathbb{N}: f_n(s) = \frac{1}{n^s} and a=-1

  • The case where \forall n \in \mathbb{N}: f_n(z) = \frac{n}{z^n} and a=1 does yield the sum of all natural numbers, but its continuation \frac{z}{(z-1)^2} has a pole at a.

Sunday, November 19, 2017

elementary set theory - proof: The set N of natural numbers is an infinite set



DEFINITION 1: A set S is finite with cardinality n \in\mathbb N if there is a bijection from the set \left\{0, 1, ..., n-1 \right\} to S. A set is infinite if its not finite.



THEOREM 1: The set \mathbb N of natural numbers is an infinite set.



Proof: Consider the injection f :\mathbb N \rightarrow \mathbb N defined as f(x) = 3x. The range of f is a subset of the domain of f.



I understand that f(x) = 3x is not surjective and thus not bijective, since for example the range does not contain the number 2. But what would happen if we were to define f: \mathbb N\rightarrow \mathbb N as f(x) = x? It is a bijective function. Doesn't that make the set of natural numbers finite according to the definition? What am I missing? Can somebody please tell me?



Answer



No. The definition of finite is that some f:\{0,1,...,n-1\}\to S is bijective.



We know f:\mathbb N\to\mathbb N via f(n) = n is bijective, but this maps \mathbb N onto \mathbb N. It does not map \{0,1,...,n-1\} onto \mathbb N.



Basically, this proves \mathbb N is finite if \mathbb N is finite.


Saturday, November 18, 2017

Solve the complex equation 5|z|^3+2+3\bar{z}^6=0




I'm stuck in trying to solve this complex equation



5|z|^3+2+3 (\bar z)^6=0



where \bar z is the complex conjugate.
Here's my reasoning: using z= \rho e^{i \theta} I would write



5\rho^3+ 2 + 3 \rho^6 e^{-i6\theta} = 0 \\ 5\rho^3+ 2 + 3 \rho^6 (\cos(6 \theta) - i \cdot \sin(6 \theta)) = 0 \\



from where I would write the system




\begin{cases} 5\rho^3+ 2 + 3 \rho^6 \cos(6 \theta) = 0 \\ 3 \rho^6 \sin(6 \theta) = 0\end{cases}



But here I get an error: from the second equation, I would claim \theta = \frac{k \pi}{6} for k=0,\ldots,5, but \theta = 0 would mean the solution is real, and the above equation doesn't have real solutions… Where am I mistaken?


Answer



Let w = \overline{z}^3. Then we have



5|w|+2+3w^2 = 0




As you point out, this constrains w = k or w = ki for real k.



Case 1. w = k



3k^2+5|k|+2 = 0



which yields no solutions, since the left-hand side is always positive.




Case 2. w = ki



-3k^2+5|k|+2 = 0



which yields k = \pm 2, so w = \pm 2i.



The rest is left as an exercise.
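Finishing the exercise numerically (a sketch I'm appending): z is the conjugate of a cube root of w=\pm 2i, giving six solutions, and each one satisfies the original equation up to rounding.

    import cmath

    roots = []
    for w in (2j, -2j):
        r, phi = cmath.polar(w)
        for k in range(3):
            zbar = r ** (1 / 3) * cmath.exp(1j * (phi + 2 * cmath.pi * k) / 3)
            roots.append(zbar.conjugate())  # z with conj(z)^3 = w

    for z in roots:
        residual = 5 * abs(z) ** 3 + 2 + 3 * z.conjugate() ** 6
        print(z, abs(residual))  # residuals ~1e-14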


real analysis - If f(x+y)=f(x)+f(y)\ \forall\, x,y\in\Bbb{R}, then if f is continuous at 0, it is continuous on \Bbb{R}.




I know that this question has been asked here before but I want to use a different approach. Here is the question.




A function f:\Bbb{R}\to\Bbb{R} is such that
\begin{align} f(x+y)=f(x)+f(y) ,\;\;\forall\;x,y\in\Bbb{R}\qquad\qquad\qquad(1)\end{align}
I want to show that if f is continuous at 0, it is continuous on \Bbb{R}.



MY WORK



Since (1) holds for all x\in \Bbb{R}, we let \begin{align} x=x-y+y\end{align}
Then,
\begin{align} f(x-y+y)=f(x-y)+f(y)\end{align}

\begin{align} f(x-y)=f(x)-f(y)\end{align}
Let x_0\in \Bbb{R}, \;\epsilon>0 and y=x-x_0,\;\;\forall\,x\in\Bbb{R}. Then,
\begin{align} f(x-(x-x_0))=f(x)-f(x-x_0)\end{align}
\begin{align} f(x_0)=f(x)-f(x-x_0)\end{align}
\begin{align} f(y)=f(x)-f(x_0)\end{align}



HINTS BY MY PDF:



Let x_0\in \Bbb{R}, \;\epsilon>0 and y=x-x_0,\;\;\forall\,x\in\Bbb{R}. Then, show that \begin{align} \left|f(x_0)-f(x)\right|=\left|f(y)-f(0)\right|\end{align}
Using this equation and the continuity of f at 0, establish properly that

\begin{align}\left|f(y)-f(0)\right|<\epsilon,\end{align}
in some neighbourhood of 0.



My problem is how to put this hint together to complete the proof. Please, I need assistance, thanks!


Answer



We want to show that



\forall \epsilon>0, \exists r>0:|x-y|<r \implies |f(x)-f(y)|<\epsilon

But f(x)-f(y)=f(x-y) because f(y)+f(x-y)=f(y+(x-y))=f(x) as you have noticed.




Now, take u=x-y. By continuity at 0, we can write:



\forall \epsilon>0, \exists r>0:|u-0|<r \implies |f(u)-f(0)|<\epsilon

It's easy to see that f(0)=0, because f(0)=f(0+0)=f(0)+f(0). Hence



\forall \epsilon>0, \exists r>0:|(x-y)-0|<r \implies |f(x-y)-f(0)|<\epsilon, that is, \forall \epsilon>0, \exists r>0:|x-y|<r \implies |f(x)-f(y)|<\epsilon. Hence, f is continuous at any y \in \mathbb{R}.



independence - roots of prime numbers are linearly independent over \mathbb{Q}

How can we prove by mathematical induction that 1,\sqrt{2}, \sqrt{3}, \sqrt{5},\ldots, \sqrt{p_n} (p_n is the n^{\rm th} prime number) are linearly independent over the rational numbers ?



\underline{\text{base case }(n=1)}: 1,\sqrt{2} are linearly independent over the field \mathbb{Q}; otherwise a\cdot 1+b\sqrt{2}=0 with b\neq 0 \Leftrightarrow \sqrt{2}=-\frac{a}{b}\in\mathbb{Q}, which is absurd.




Then I am stuck.

Confused with imaginary numbers

In 9th grade I had an argument with my teacher that



{i}^{3}=i



where i=\sqrt{-1}



But my teacher insisted (as is the accepted case) that:




{i}^{3}=-i



My Solution:



{i}^3=(\sqrt{-1})^3



{i}^3=\sqrt{(-1)^3}



{i}^3=\sqrt{-1\times-1\times-1}




{i}^3=\sqrt{-1}



{i}^3=i



Generally accepted solution:



{i}^3=(\sqrt{-1})^3



{i}^3=\sqrt{-1}\times\sqrt{-1}\times\sqrt{-1}




{i}^3=-\sqrt{-1}



{i}^3=-i



What is so wrong with my approach? Is it not logical?



I am using the positive square root. There seems to be something about the order in which the power should be raised? There must be a logical reason, and I need help understanding it.

number theory - How to simplify the calculation of (a^b \bmod c) \bmod d?



Right now I know the way to calculate a^b \bmod c which is in the answer of the following question.




Consider the case where c is much larger than d. So, the result of a^b \bmod c will be large compared with the final result.



Is there any way to get the final result without getting directly the result of a^b \bmod c which is large?


Answer



There is no known solution to this problem. If there were, it would be in use all over the world in implementations of the RSA algorithm for smart cards.



If d and c have a factor in common, then you can use this to speed up the computation -- and in fact this is routinely done in smart cards when the RSA private key is known. But otherwise, you just have to compute a^b \mod c and reduce the result modulo d.
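In code this is just the built-in modular exponentiation followed by one extra reduction (a minimal Python sketch with arbitrary sample values):

    # pow(a, b, c) uses square-and-multiply and never forms a**b in full.
    a, b, c, d = 12345, 67891, 2 ** 127 - 1, 97

    inner = pow(a, b, c)  # a^b mod c
    print(inner % d)      # then reduce the (at most c-sized) result mod d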


Friday, November 17, 2017

complex analysis - Evaluating the improper integral \int_{0}^{\infty}{\frac{x^2}{x^{10} + 1}\,\mathrm dx}

I am trying to solve the following integral, but I don't have a solution, and the answer I am getting doesn't seem correct.



So I am trying to integrate this:



\int_{0}^{\infty}{\frac{x^2}{x^{10} + 1}\,\mathrm dx}



To integrate this, I want to use a contour that looks like a pizza slice, out of a pie of radius R. One edge of this pizza slice is along the positive x-axis, if that makes sense. Since z^{10} + 1 has 10 zeroes, the slice should only be one tenth of a whole circle. So let's call this contour C. Then:




\int_{C}{\frac{z^2}{z^{10} + 1}\,\mathrm dz} = 2 \pi i\,\operatorname{Res}(\frac{x^2}{x^{10} + 1}, e^{i \pi/10}) This is because this slice contains only one singularity. Furthermore:



\int_{C}{\frac{z^2}{z^{10} + 1}\,\mathrm dz} = \int_0^R{\frac{z^2}{z^{10} + 1}\,\mathrm dz} + \int_\Gamma{\frac{z^2}{z^{10} + 1}\,\mathrm dz}



And then, by the M-L Formula, we can say that \int_\Gamma{\frac{z^2}{z^{10} + 1}\,\mathrm dz} goes to 0 as R goes to infinity. Evaluating 2 \pi i\ \operatorname{Res}(\frac{x^2}{x^{10} + 1}, e^{i \pi/10}) I get \dfrac{\pi}{e^{i \pi/5}} . Since this answer isn't real, I don't think this could be correct. What did I do wrong?

functional equations - "Foldable" functions



Suppose f:2^X\to X satisfies f(x_1,\dots)=f(f(x_1,x_2),x_3,\dots). Min, max and sum are three such examples.




  1. I've been calling these functions "foldable" because they bear some similarity to that concept from programming, but is there a real name for them?


  2. Can anything interesting be said about them?



My motivation is driven by ethics and economics: if u is some utility function, we might regard u(x,y)=z as meaning that the basket \{x,y\} is equivalent to the basket \{z\}, so u would be foldable.


Answer



Any associative operator gives rise to (a family of) such functions, like \sum_{1 \le i \le n} x_i, \prod_{1 \le i \le n} x_i, \bigcap_{1 \le i \le n} x_i. Even the greatest common divisor and the least common multiple qualify. The cases \min and \max are just the natural extensions of those binary operations to several arguments.
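
A minimal Python illustration of how an associative binary operation extends to such a function via a left fold (names are illustrative):

from functools import reduce
from math import gcd

def fold(op, xs):
    # f(x_1, ..., x_n) = f(f(x_1, x_2), x_3, ...) for an associative op
    return reduce(op, xs)

print(fold(min, [3, 1, 4, 1, 5]))  # 1
print(fold(gcd, [12, 18, 30]))     # 6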


sequences and series - How can I find a scaling factor for n cylinders so that the total volume, area and height converge at specific values?




I was working on a project in which I had to make a binary tree of cylinders. Essentially I just had to make a cylinder, then two smaller ones, then four even smaller ones, and so on. The challenge was to model the lungs, so that the cylinders combined had a total volume V of 6 L, a lateral surface area L of 70 m^2, and a total height h of 2400 km.



I made an approximation in Matlab through trial and error, playing around with multiple dividers in a for-loop, and got close (V = 6.0002 L, L = 70.133 m^2, h = 2398 km). I've since been obsessing over it because I'm convinced there's a more elegant solution: a scaling factor that will make each sum converge to exactly the right value.



I've been messing with it for a few days but I can't get it to work.



In most of my attempts, I can get two correct parameters and one that's off. So if V = 6 L and h = 2400 km, L will be off.



I made some sketches that explain the concept http://imgur.com/a/JbTuX. There's also a bit of math because my logic while drawing this was that the solution could be found using series, since I want each parameter to converge at a specific value.




The sketches don't show the scaling factor. I was hoping that in writing the series out I would spot something useful... I didn't.


Answer



Introduce a scaling factor \alpha<1 per generation for the radius and a scaling factor \beta<{1\over2} per generation for the height of the elementary cylinders. Let V_0 be the volume, L_0 the lateral surface, and H_0 the height of the starting cylinder. Denote by V_n, L_n, H_n the sum of the volumes, lateral surfaces, and heights in the n^{\rm th} generation, and finally by V, L, and H the overall sum of these quantities. Then one has the recursions
V_{n+1}=2\alpha^2\beta\> V_n\>,\qquad L_{n+1}=2\alpha\beta\> L_n\>,\qquad H_{n+1}=2\beta\> H_n\qquad(n\geq0)\ .
By the formula for the sum of geometric series it follows that
\eqalign{V&=\sum_{n=0}^\infty V_n={1\over 1-2\alpha^2\beta} V_0\>,\cr L&=\sum_{n=0}^\infty L_n={1\over 1-2\alpha\beta} L_0\>,\cr H&=\sum_{n=0}^\infty H_n={1\over 1-2\beta} H_0\>.\cr}\tag{1}
Now choose \alpha, \beta, V_0, L_0, and H_0 suitably in such a way that your requirements are met. Note that you cannot choose \alpha and \beta independently and arbitrarily, since the initial variables V_0=\pi R_0^2H_0, L_0=2\pi R_0H_0, and H_0 have to satisfy the identity L_0^2=4\pi V_0H_0.




If you envisage only N generations then the formulas (1) have to be replaced by
\eqalign{V&=\sum_{n=0}^{N-1} V_n={1-(2\alpha^2\beta)^N\over 1-2\alpha^2\beta} V_0\>,\cr L&=\sum_{n=0}^{N-1} L_n={1-(2\alpha\beta)^N\over 1-2\alpha\beta} L_0\>,\cr H&=\sum_{n=0}^{N-1} H_n={1-(2\beta)^N\over 1-2\beta} H_0\>.\cr}
In this case \alpha, \beta, and N had better not be unknowns in the design process, but parameters fixed in advance.
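
A numeric sketch of the infinite-generation design in Python, assuming the targets from the question (V = 6 L = 0.006 m^3, L = 70 m^2, H = 2400 km = 2.4e6 m); all names are illustrative:

import math

V, L, H = 0.006, 70.0, 2.4e6  # targets in SI units

def residual(a, b):
    # Formulas (1) give the required start cylinder for ratios a, b;
    # a genuine cylinder must satisfy L0^2 = 4*pi*V0*H0.
    V0 = (1 - 2*a*a*b) * V
    L0 = (1 - 2*a*b) * L
    H0 = (1 - 2*b) * H
    return L0*L0 - 4*math.pi*V0*H0

# Fix the radius ratio a and bisect on b in (0, 1/2): the residual is
# negative near b = 0 and positive as b -> 1/2, so a root exists.
a, lo, hi = 0.9, 1e-9, 0.5 - 1e-9
for _ in range(200):
    mid = 0.5*(lo + hi)
    if residual(a, lo) * residual(a, mid) <= 0:
        hi = mid
    else:
        lo = mid
b = 0.5*(lo + hi)
print(b, (1 - 2*a*a*b)*V, (1 - 2*a*b)*L, (1 - 2*b)*H)  # b, V0, L0, H0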


limits - How to compute \lim_{x\to 0}\frac{\sin(x^2+x)-x}{x^2} without L'Hospital's Rule or series?

I came across this problem in an examination for beginners in Calculus:




Compute \displaystyle \lim_{x\to 0}\frac{\sin(x^2+x)-x}{x^2}.




It is easy to show that the limit is 1 by using series or L'Hospital's Rule. But the examination is aimed at students who know only the basics of Calculus (derivatives of power functions and trigonometric functions, Chain Rule, Mean Value Theorem, basically).



How to compute this limit by elementary means?
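
One elementary route, sketched for reference: with u=x^2+x,
\frac{\sin(x^2+x)-x}{x^2}=\frac{\sin u-u}{x^2}+\frac{u-x}{x^2}=\frac{\sin u-u}{x^2}+1,
and the standard bound |\sin u-u|\le|u|^3/6 gives \left|\frac{\sin u-u}{x^2}\right|\le\frac{|x|\,|1+x|^3}{6}\to 0, so the limit is 1.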

elementary number theory - Why do some divisibility rules work only in base 10?



Aside from the zero rule (a number ending with zero means it is divisible by the base number and its factors), why is it that other rules cannot apply in different bases?



In particular for 3 one can just sum all digits and see if it is divisible. What relation exists between 10 and 3 to have this property? Does this property exist in another base?



Also:
Are there divisibility rules for base 2?
Are there general divisibility formulae for all bases?



Answer



Two rules that work for any base b.



If n divides b-1, then if the sum of the digits is a multiple of n, the number is a multiple of n. (That's why the 3 and 9 rules work.)



If the sum of the even-place digits differs from the sum of the odd-place digits by a multiple of b+1, then the number is a multiple of b+1.



Proof outline:



Let x = \sum c_i b^i be a number in base b. Then the sum of the digits is N=\sum c_i. So x - N = \sum c_i(b^i - 1).




Each b^i - 1 = (b-1)(b^{i-1}+b^{i-2} + \cdots + b + 1), so x - N is a multiple of b - 1. So if x is a multiple of b-1, the sum of the digits is also a multiple of b-1.



The same argument works if x is a multiple of any n\mid b-1.
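
A small Python sketch checking both rules in an arbitrary base (helper names are ours):

def digits(x, b):
    # Digits of x in base b, least significant first.
    ds = []
    while x:
        ds.append(x % b)
        x //= b
    return ds

def rule_digit_sum(x, b, n):
    # For n | b-1: the digit sum is a multiple of n iff x is.
    assert (b - 1) % n == 0
    return sum(digits(x, b)) % n == 0

def rule_alternating(x, b):
    # The alternating digit sum is a multiple of b+1 iff x is.
    s = sum(d if i % 2 == 0 else -d for i, d in enumerate(digits(x, b)))
    return s % (b + 1) == 0

print(rule_digit_sum(12345, 10, 3), 12345 % 3 == 0)  # True True
print(rule_alternating(121, 10), 121 % 11 == 0)      # True True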


calculus - Evaluating the limit: \lim\limits_{x \to \infty} \sum\limits_{n=1}^{\infty} (-1)^{n-1}\frac{x^{2n-1}}{(2n)!\log(2n)}



How to evaluate the limit: \lim\limits_{x \to \infty} \sum\limits_{n=1}^{\infty} (-1)^{n-1}\frac{x^{2n-1}}{(2n)!\log (2n)}



I have tried to break the sum into even and odd n and estimate each sum to be \sim \frac{e^{x}}{\log (4x)} by estimating the summations in the range \sum\limits_{x/c \le n \le cx} \frac{x^{4n}}{(4n+s)!\log (4n+s)} with the trivial bounds while e^{-x}\sum\limits_{n > cx} \frac{x^{4n}}{(4n+s)!\log (4n+s)} and e^{-x}\sum\limits_{0 \le n < x/c} \frac{x^{4n}}{(4n+s)!\log (4n+s)} both decay exponentially for arbitrary c > 1 as x \to +\infty (s=2,4). The precise estimates I have used are similar to here.


However, I suppose stronger asymptotics will be required for calculating the limit of their difference. Any help/hint is appreciated. Thanks!


I am not much acquainted with the Laplace method or probabilistic interpretations (I'd appreciate it if some references were mentioned, should the answer involve advanced tools like these).



Machinato suggests that f(z)=\sum_{n=2}^{\infty} \frac{(-1)^n z^{n-1}}{n! \log n} approaches \log z. This seems to be true! As evidence, here are plots of the real (red) and imaginary (blue) parts of f(e^{2+i \theta}), for -\pi<\theta<\pi. For comparison, the red and blue dashed lines are the real and imaginary parts of \log z = 2+i \theta. It's not clear from this image whether the limit holds for all \theta, but for -\pi/2 < \theta < \pi/2 the fit is very good.



[plot omitted]


I'm offering a bounty for a proof of this bizarre behavior. I imagine the precise formulation is that, for \theta fixed in (-\pi,\pi), we have \lim_{R \to \infty} f(R e^{i \theta}) - \log R = i \theta, but I'll certainly accept other asymptotic results of a similar flavor. (For \theta = \pi, it is easy to see that f(-R) grows faster than any power of R, so it can't mimic \log R.)


After some more experimentation, I suspect the limit only holds for -\pi/2<\theta<\pi/2. Here are plots of \mathrm{Im}(f(r e^{i \theta})) for \theta = \pi/3 (first plot) and 2 \pi/3 (second plot), with 0 < r < 10. In each case, the dashed line is at \theta. Convergence looks great in the first one, terrible in the second.


[plots omitted]


Answer



EDITED. It suffices to prove the following claim.



Proposition. Assume that f(z) = \sum_{n=2}^{\infty} a_n z^n has radius of convergence R and define


F(z) = \sum_{n=2}^{\infty} \frac{n a_n}{n-1} z^n = z \int_{0}^{z} \frac{f'(w)}{w} \, dw.



Then for all |z| < R, we have


\sum_{n=2}^{\infty} \frac{a_n}{\log n} z^n = \int_{0}^{1} \int_{0}^{\infty} \frac{t^{s-1}}{\Gamma(s)} F(z e^{-t}) \, dtds.



Proof. The proof is a straightforward computation:


\begin{align*} \int_{0}^{1} \int_{0}^{\infty} \frac{t^{s-1}}{\Gamma(s)} F(z e^{-t}) \, dtds &= \sum_{n=2}^{\infty} \frac{n a_n}{n-1} z^n \int_{0}^{1} \int_{0}^{\infty} \frac{t^{s-1}}{\Gamma(s)} e^{-nt} \, dtds \\ &= \sum_{n=2}^{\infty} \frac{n a_n}{n-1} z^n \int_{0}^{1} \frac{ds}{n^s} \\ &= \sum_{n=2}^{\infty} \frac{a_n}{\log n} z^n. \end{align*}



In our case, we can set f(z) = 1 - \cos z and consequently F(z) = z \operatorname{Si}(z), where \operatorname{Si}(z) is the sine integral. Then for real x, we have


\begin{align*} \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{(2n)!\log(2n)} x^{2n-1} &= \int_{0}^{1} \int_{0}^{\infty} \frac{t^{s-1}}{\Gamma(s)} e^{-t} \operatorname{Si}(x e^{-t}) \, dtds \\ &\hspace{3em} \xrightarrow[x\to\infty]{} \quad \frac{\pi}{2} \int_{0}^{1}\int_{0}^{\infty} \frac{t^{s-1}}{\Gamma(s)} e^{-t} \, dtds = \frac{\pi}{2} \end{align*}


where the limiting procedure can be easily justified by the dominated convergence theorem.
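
For reference, a quick numeric check of the \pi/2 limit, sketched in Python with the mpmath library:

from mpmath import mp, mpf, factorial, log, pi

mp.dps = 50  # high precision; the partial sums cancel heavily

def f(x, N=300):
    return sum((-1)**(n - 1) * mpf(x)**(2*n - 1) / (factorial(2*n) * log(2*n))
               for n in range(1, N))

for x in (10, 20, 40):
    print(x, f(x))
print(pi/2)  # the values above drift toward pi/2 = 1.5707963...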


Similarly, setting f(z) = e^{-z} - 1 + z gives F(z) = z (\log z + \gamma + \Gamma(0,z)) for \operatorname{Re}(z) > 0, where \Gamma(s, z) is the upper incomplete gamma function. Plugging this back, we obtain


\sum_{n=2}^{\infty} \frac{(-1)^{n}}{n!\log n} z^{n-1} = \log z + \gamma - \frac{1}{2} + \int_{0}^{1} \int_{0}^{\infty} \frac{t^{s-1}}{\Gamma(s)} e^{-t} \Gamma(0, z e^{-t}) \, dtds.



Note that the last integral vanishes if we let z \to \infty along the cone |\arg(z)| \leq \frac{\pi}{2} - \delta. So this again confirms the numerical observation made by Machinato.


Thursday, November 16, 2017

linear algebra - Can two matrices with the same characteristic polynomial have different eigenvalues?


The polynomial is -\lambda^3+3\lambda -2


which factorizes into (\lambda-1)(\lambda +1)(\lambda -2)


A matrix A has the above characteristic polynomial, and so its eigenvalues are 1, -1, and 2.


However, another matrix B, already in diagonal form, has the same characteristic polynomial, but with eigenvalues 1,1,-2, i.e., diagonal entries 1,1,-2.


Is this possible? Or have I gone wrong in my computations?


The problem statement does ask to show that the characteristic polynomials are the same but that the matrices A and B are not similar. So, perhaps I have found exactly what I needed, but it just seems weird...


Thanks,


Answer




-\lambda^3+3\lambda - 2 = -(\lambda-1)^2(\lambda+2) \neq -(\lambda-1)(\lambda+1)(\lambda-2).
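
Expanding, as a quick check:
-(\lambda-1)^2(\lambda+2)=-(\lambda^2-2\lambda+1)(\lambda+2)=-(\lambda^3-3\lambda+2)=-\lambda^3+3\lambda-2,
so the factorization in the question was the error: both matrices have eigenvalues 1, 1, -2.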


real analysis - Proving continuity of a function using epsilon and delta

I've just got a real quick question about proving the continuity of a function using \epsilon and \delta definition of continuity. The question is this:



Let f\colon X\to R be continuous where X is some subset of R. Prove that the function 1/f\colon x\mapsto 1/f(x) is continuous at p in X, provided that f(p)\neq0.



The definition states that "A function f(x) is continuous at p iff for every \epsilon>0 there exists some \delta>0 such that
|x-p|<\delta implies |f(x) -f(p)|<\epsilon."



After that, I am super stuck...any help would be greatly appreciated.

Thanks!
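
A sketch of the standard argument, as a hint: since f(p)\neq0, continuity gives \delta_1>0 with |f(x)|>\frac{|f(p)|}{2} whenever |x-p|<\delta_1. For such x,
\left|\frac{1}{f(x)}-\frac{1}{f(p)}\right|=\frac{|f(p)-f(x)|}{|f(x)|\,|f(p)|}\le\frac{2}{|f(p)|^2}\,|f(x)-f(p)|,
so given \epsilon>0, choose \delta_2 so that |x-p|<\delta_2 implies |f(x)-f(p)|<\frac{|f(p)|^2}{2}\epsilon, and take \delta=\min(\delta_1,\delta_2).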

Wednesday, November 15, 2017

integration - Evaluate \int_0^{\frac{\pi}{2}}\tan(x)\ln(\sin x)\ln(\cos x)\,dx


Evaluate: I=\int_0^{\frac{π}{2}}\tan (x)\ln (\sin x)\ln (\cos x)dx



My idea is to use the Fourier series of \log\sin and \log\cos:


\ln (2\sin x)=-\sum_{k=1}^{\infty}\frac{\cos (2kx)}{k},\qquad \ln (2\cos x)=-\sum_{k=1}^{\infty}\frac{(-1)^{k}\cos (2kx)}{k}


But my problem is that I find difficult integrals like:



\int\tan (x)\cos (2kx)dx


My another idea is:


Use the substitution y=\tan x; then dx=\frac{dy}{1+y^2}.


Then x=0 \Rightarrow y=0, and x=\frac{\pi}{2} \Rightarrow y=\infty.


So:


I=-\frac{1}{2}\int_0^{\infty}\frac{y\ln \left(\frac{y}{\sqrt{1+y^2}}\right)\ln (1+y^2)}{1+y^2}dy


But now I don't know how to proceed.
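
One workable route, sketched for reference: substitute u=\cos^2 x, so that \tan x\,dx=-\frac{du}{2u}, \ln\sin x=\frac12\ln(1-u), \ln\cos x=\frac12\ln u. Then
I=\frac18\int_0^1\frac{\ln u\,\ln(1-u)}{u}\,du=\frac18\sum_{n=1}^{\infty}\frac1n\int_0^1 u^{n-1}(-\ln u)\,du=\frac18\sum_{n=1}^{\infty}\frac{1}{n^3}=\frac{\zeta(3)}{8}.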

definition - Not defining the imaginary number i as the principal square root of -1.




Background



I learned early on that it's important that we define the imaginary number i such that i^2 = -1, rather than i = \sqrt{-1}.



Question



I can't fully remember the reasoning for this important note, so I was wondering if someone could elaborate?



Own efforts




Any explanation I find eventually boils down to the same argument.




If we define i as the principal square root of -1, then we get



-1 = i^2 = \sqrt{-1}\sqrt{-1} \overbrace{=}^{\text{fallacy}} \sqrt{(-1)(-1)} = \sqrt1 = 1




But to me, this seems like wrongful use of the \sqrt{ab} = \sqrt a \sqrt b rule, since this rule comes with certain restrictions on a, b. So I don't see how this is a misuse of the definition of i.




Are there other reasons why we should be careful not to define i as the principal square root of -1?


Answer



If you define i as \sqrt{-1} then there is an obvious question: how do you know that -1 has some square root? Besides, writing i=\sqrt{-1} seems to imply that i is the square root of -1. But, in \mathbb C, -1 has two square roots: \pm i. Assuming that i is the square root of -1 leads to fallacies, such as the one you mentioned.


summation - Solving the sum \sum_{n=0}^k (-1)^n (1/2)^n




How to solve the following sum?



\sum_{n=0}^k (-1)^n (1/2)^n


Answer



use the finite geometric series (valid for any x\neq1)
\sum_{n=0}^{k }x^n=1+x+x^2+\cdots+x^k=\frac{1-x^{k+1}}{1-x}



then use x=-1/2
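
Carrying out the substitution, for reference:
\sum_{n=0}^{k}\left(-\frac12\right)^n=\frac{1-(-1/2)^{k+1}}{1-(-1/2)}=\frac23\left(1-\left(-\frac12\right)^{k+1}\right).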


Tuesday, November 14, 2017

real analysis - If f is continuous with \int_0^{\infty}f(t)\,dt<\infty


Let f:[0,\infty)\to [0,\infty) be a continuous function such that \displaystyle \int_0^{\infty}f(t)\,dt<\infty. Which of the following statements are true?




(A) The sequence \{f(n)\} is bounded.




(B) f(n)\to 0 as n \to \infty.



(C) The series \displaystyle \sum f(n) is convergent.



I am unable to prove the statements directly, but I am thinking about the function f(x)=\frac{1}{1+x^2}. For this function all options are correct. Is that enough? I think not, as I have no proof in general.



Please help by giving a proof or disprove the statements.
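
A hint toward a counterexample, for reference: let f vanish except for a narrow triangular spike at each integer n\ge1, of height n and base width 2/n^3. Then \int_0^\infty f(t)\,dt=\sum_n\frac{1}{n^2}<\infty, yet f(n)=n, so (A), (B), and (C) all fail.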

real analysis - Proof square root of 4 is not irrational.



I was recently working on a question essentially worded in the following way:





Where does a proof of \sqrt{4} being irrational fall apart when we try to apply the same principles used for proving that \sqrt{2} is irrational?




I attempted by making the same (in this case, intuitively correct) assumptions that led to a contradiction in the case of \sqrt{2}:




  1. \sqrt{4} is a rational number and can be written as \dfrac{m}{n} where n\neq0


  2. \dfrac{m}{n} is in lowest terms; i.e., m and n are coprime, by the definition of rational numbers





Then I took the following steps:



m^2 = 4n^2



m^2 = 2(2n^2)



Thus, m^2 is even \implies m is even and can be written as 2k.



m^2 = 4k^2 = 4n^2




k = n



Thus, k is a factor of both m and n, contradicting the second assumption that I made (m and n are co-prime).



Although I understand intuitively that this is not the case, doesn't this show that \sqrt{4} is irrational?


Answer



You have proven that n = k and m = 2k. In the case that m and n are coprime, set k = 1.
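
Spelled out: with k=1 we get m=2 and n=1, i.e. \sqrt{4}=\frac21, which is consistent with m and n being coprime, so no contradiction arises. In the \sqrt{2} proof, by contrast, m^2=2n^2 and m=2k force n to be even as well, which does contradict coprimality.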


divisibility - How come the number N! can terminate in exactly 1, 2, 3, 4, or 6 zeroes but never 5 zeroes?



How come the number N! can terminate in exactly 1, 2, 3, 4, or 6 zeroes but never 5 zeroes?


Answer



The number of zeros at the end of N! is given by \left \lfloor \frac{N}{5} \right \rfloor + \left \lfloor \frac{N}{5^2} \right \rfloor + \left \lfloor \frac{N}{5^3} \right \rfloor + \cdots where \left \lfloor \frac{x}{y} \right \rfloor is the greatest integer \leq \frac{x}{y}.



To make it clear, write N! as a product of primes N! = 2^{\alpha_2} 3^{\alpha_3} 5^{\alpha_5} 7^{\alpha_7} 11^{\alpha_{11}} \ldots where \alpha_i \in \mathbb{N}.



Note that \alpha_5 < \alpha_2, \forall N. (Why?)



The number of zeros at the end of N! is the highest power of 10 dividing N!




If 10^{\alpha} divides N! and since 10 = 2 \times 5, 2^{\alpha} | N! and 5^{\alpha} | N!. Further since \alpha_5 < \alpha_2, the highest power of 10 dividing N! is the highest power of 5 dividing N! which is \alpha_5.



So you will find that for N \leq 24, the number of zeros will be less than or equal to 4. However, when N hits 25 you will get 2 additional zeros courtesy of 25, since 25 \times 2^2 = 100. Hence, there will be a jump when you go from 24 to 25.



EDIT:



Note that there will be





  1. A jump of 1 zero going from (N-1)! to N! if 5 || N


  2. A jump of 2 zero going from (N-1)! to N! if 5^2 || N


  3. A jump of 3 zero going from (N-1)! to N! if 5^3 || N and in general


  4. A jump of k zero going from (N-1)! to N! if 5^k || N




where a || b means a divides b and gcd(a,\frac{b}{a}) = 1



EDIT




Largest power of a prime dividing N!



In general, the highest power of a prime p dividing N! is given by



s_p(N!) = \left \lfloor \frac{N}{p} \right \rfloor + \left \lfloor \frac{N}{p^2} \right \rfloor + \left \lfloor \frac{N}{p^3} \right \rfloor + \cdots



The first term appears since you want to count the numbers up to N that are multiples of p, each of which contributes one p to N!. But multiples of p^2 contribute not just one p but two, so you count the multiples of p^2 up to N and add them; this is captured by the second term \displaystyle \left \lfloor \frac{N}{p^2} \right \rfloor. Repeat this to account for higher powers of p up to N.



In the case of the current example, the largest prime dividing 10 is 5. Hence, the largest power of 10 dividing N! is the same as the largest power of 5 dividing N!.




Largest power of a prime dividing other related products



In general, if we want to find the highest power of a prime p dividing numbers like \displaystyle 1 \times 3 \times 5 \times \cdots (2N-1), \displaystyle P(N,r), \displaystyle \binom{N}{r}, the key is to write them in terms of factorials.



For instance, \displaystyle 1 \times 3 \times 5 \times \cdots (2N-1) = \frac{(2N)!}{2^N N!}. Hence, the largest power of a prime, p>2, dividing \displaystyle 1 \times 3 \times 5 \times \cdots (2N-1) is given by s_p((2N)!) - s_p(N!), where s_p(N!) is defined above. If p = 2, then the answer is s_p((2N)!) - s_p(N!) - N.



Similarly, \displaystyle P(N,r) = \frac{N!}{(N-r)!}. Hence, the largest power of a prime p dividing \displaystyle P(N,r) is given by s_p(N!) - s_p((N-r)!), where s_p(N!) is defined above.



Similarly, \displaystyle C(N,r) = \binom{N}{r} = \frac{N!}{r!(N-r)!}. Hence, the largest power of a prime p dividing \displaystyle C(N,r) is given by s_p(N!) - s_p(r!) - s_p((N-r)!), where s_p(N!) is defined above.
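
A short Python sketch of the trailing-zeros count s_5(N!), illustrating the jump at N=25:

def trailing_zeros(n):
    # s_5(n!) = floor(n/5) + floor(n/25) + floor(n/125) + ...
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

print([trailing_zeros(n) for n in range(20, 31)])
# [4, 4, 4, 4, 4, 6, 6, 6, 6, 6, 7] -- the count skips 5 entirely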


analysis - Injection, making bijection

I have an injection f \colon A \rightarrow B and I want to get a bijection. Can I just restrict the codomain to f(A)? I know that every function i...