Tuesday, October 31, 2017

linear algebra - Intuition behind symmetric and antisymmetric tensors



I've been studying multilinear algebra from Kostrikin's "Linear Algebra and Geometry", and he says the following. If $V$ is a linear space, write $T^q_0(V)=V^{\otimes q}$, and for $\sigma \in S_q$ let $f_\sigma :T^{q}_0(V)\to T^q_0(V)$ be given by:



$$f_\sigma(v_1\otimes\cdots\otimes v_q)=v_{\sigma(1)}\otimes\cdots \otimes v_{\sigma(q)}$$




Then the subspace of symmetric tensors is the image of the map



$$\frac{1}{q!}\sum_{\sigma\in S_q}f_\sigma:T^q_0(V) \to T^q_0(V)$$



I understood his proof of this. However, I can't see the intuition behind it: what is the motivation for summing all those maps, and why must we include the factor $1/q!$? The same happens with the alternation operator, which he defines as:



$$\frac{1}{q!}\sum_{\sigma \in S_q}\operatorname{sgn}(\sigma)f_\sigma : T^q_0(V) \to T^q_0(V)$$



Again, he proves that the image of this map really is the space of antisymmetric tensors. But once more, what's the intuition behind it? How can we see intuitively that every symmetric tensor arises as the image of some tensor under this map? And again, I can't understand where the factor $1/q!$ comes from.
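To make the question concrete: for $q=2$ these operators act on a decomposable tensor as

$$\frac{1}{2!}\sum_{\sigma\in S_2}f_\sigma(v_1\otimes v_2)=\frac{1}{2}(v_1\otimes v_2+v_2\otimes v_1),\qquad \frac{1}{2!}\sum_{\sigma\in S_2}\operatorname{sgn}(\sigma)f_\sigma(v_1\otimes v_2)=\frac{1}{2}(v_1\otimes v_2-v_2\otimes v_1),$$

and applying either operator a second time changes nothing, so each is a projection onto its image.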




I believe there is some intuition, although the author doesn't spell it out. It's like the tensor product: there the intuition is that we want to construct a pair $(S,T)$ of a vector space $S$ and a multilinear map $T$ such that the pair has the universal property. So we do all that work with the free vector space, and there are good motivations, good reasons, to construct things the way they are. After reading about those motivations, we can think: "those are good reasons, I can see why someone thought of this!" I believe there must be equally good reasons for introducing these operators as they are.



How can we see intuitively that it must be like that? Can someone help with this?



Thanks very much in advance!


Answer



To start, let's have a look at matrices.



Let $A \in \mathbb{R}^{n\times n}$ be any matrix.




Then $\text{sym}(A)=\frac{1}{2}(A+A^T)$ is its symmetric part and $\text{skew}(A)=\frac{1}{2}(A-A^T)$ is its antisymmetric part. Note that any matrix can be written as the sum of its symmetric and antisymmetric parts: $A=\text{sym}(A) + \text{skew}(A)$.



Now let me explain why there is the factor $\frac{1}{2}$.
If the matrix $A$ is already symmetric, you want $\text{sym}(A) = A$. Without the factor you would get $\text{sym}(A) = 2A$. The same goes for the antisymmetric part.
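A minimal numerical sketch of this decomposition (assuming numpy; the names `sym_A` and `skew_A` are just illustrative):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

sym_A = 0.5 * (A + A.T)    # symmetric part
skew_A = 0.5 * (A - A.T)   # antisymmetric part

assert np.allclose(A, sym_A + skew_A)    # A = sym(A) + skew(A)
assert np.allclose(sym_A, sym_A.T)       # sym(A) is symmetric
assert np.allclose(skew_A, -skew_A.T)    # skew(A) is antisymmetric

# The factor 1/2 makes sym a projection: applied to an
# already-symmetric matrix, it returns the matrix unchanged.
assert np.allclose(0.5 * (sym_A + sym_A.T), sym_A)
```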



You can think of matrices as bilinear forms, $x^T A y = A(x,y)$. The matrix $A$ is symmetric iff $A(x,y)=A(y,x)$ for all $x,y$. This suggests what a symmetric tensor should be: a multilinear form (or tensor) $T$ is symmetric iff $T(\ldots,x,\ldots,y,\ldots) = T(\ldots,y,\ldots,x,\ldots)$ for all $x,y$ and for every pair of positions where you perform the swap. As with matrices, you can take the symmetric part of a tensor $T$:



$$\text{sym}(T)(x_1,\ldots,x_n) = \frac{1}{n!} \sum_{\sigma\in S_n} T(x_{\sigma(1)},\ldots,x_{\sigma(n)})$$



Again, the natural requirement is that for an already symmetric tensor $T$ we have $\text{sym}(T)=T$; this explains the factor $\frac{1}{n!}$, since for symmetric $T$ all $n!$ terms of the sum are equal.
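The same check works for higher-order tensors, representing a multilinear form by its coefficient array; here is a sketch assuming numpy, with `symmetrize` as a hypothetical helper:

```python
import itertools
import numpy as np

def symmetrize(T):
    """Average T over all q! permutations of its q axes (the 1/q! symmetrizer)."""
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3, 3))    # an arbitrary order-3 tensor

S = symmetrize(T)
# symmetrize is idempotent, so its image consists of fixed points: sym(S) = S.
assert np.allclose(symmetrize(S), S)
# S is invariant under swapping any two axes.
assert np.allclose(S, np.transpose(S, (1, 0, 2)))
```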




And why bother with symmetric and antisymmetric tensors? Let me give a few remarks from my studies.



In physics you encounter tensors quite a lot, and they are often symmetric or antisymmetric; the electromagnetic tensor, for example, is antisymmetric.
In general relativity you often calculate something like $\sum_{ij} A_{ij}B_{ij}$. When $A$ is symmetric, $\sum_{ij} A_{ij}B_{ij}=\sum_{ij} A_{ij}\text{sym}(B)_{ij}$, which is useful: only the symmetric part of $B$ contributes to the contraction.
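A quick numerical check of this identity, again a sketch assuming numpy and random test matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))
A = 0.5 * (A + A.T)              # make A symmetric
B = rng.normal(size=(n, n))      # B is arbitrary

sym_B = 0.5 * (B + B.T)
lhs = np.sum(A * B)              # sum_ij A_ij B_ij
rhs = np.sum(A * sym_B)          # sum_ij A_ij sym(B)_ij
assert np.isclose(lhs, rhs)      # only sym(B) contributes
```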



And in differential geometry you define differential forms, which are antisymmetric and of great importance.



I hope I shed a little bit of light on this topic.

