This is a brief tutorial covering the basics of index notation, which is useful when handling complicated expressions involving cross and dot products.
I will skip over a lot of technicalities (such as covariant and contravariant vectors) and focus on 3 dimensions - but all of what I say here can easily be generalised and extended, and I encourage anyone with the background to do so.
Conventions and notation:
I use bold symbols to indicate vectors (invariably 3 dimensional) and use $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$ as the unit vectors in the x, y and z directions.
The problem:
Suppose we have some complicated expression, for example $\nabla\times(\nabla V\times\mathbf{A})$, where $\mathbf{A}$ is some 3 dimensional vector and $V$ is some scalar function, and we want to write it in a simpler form.
There are formulas for this sort of thing, such as the BAC CAB rule:
$$\mathbf{A}\times(\mathbf{B}\times\mathbf{C}) = \mathbf{B}(\mathbf{A}\cdot\mathbf{C}) - \mathbf{C}(\mathbf{A}\cdot\mathbf{B}),$$
but these are derived using commuting vectors, and so they cannot simply be reused if one of the vectors is replaced by $\nabla$, since $\nabla\cdot\mathbf{A}$ does not equal $\mathbf{A}\cdot\nabla$.
However index notation provides a quick and easy way to derive these types of expressions.
Indices and the summation convention
Indices allow us to rewrite an expression component by component. For example, a vector $\mathbf{A}$ has components $A_1$, $A_2$, $A_3$, which we refer to collectively as $A_i$, where the index $i$ runs over 1, 2, 3. So
$$\mathbf{A}\cdot\mathbf{B} = \sum_{i=1}^{3}A_iB_i = A_1B_1 + A_2B_2 + A_3B_3.$$
(Clearly this can be generalised to any number of components)
Now for compactness we introduce the (Einstein) summation convention: if an index is repeated we sum over it. So
$$\mathbf{A}\cdot\mathbf{B} = A_iB_i.$$
This cuts down on a lot of writing. Note that there must be conservation of unpaired indices. For example,
$$A_i = B_jC_j\,D_i$$
is a fine expression - it says the ith component of $\mathbf{A}$ is the ith component of $\mathbf{D}$ (pre)multiplied by the dot product of $\mathbf{B}$ and $\mathbf{C}$, that is
$$\mathbf{A} = (\mathbf{B}\cdot\mathbf{C})\,\mathbf{D}.$$
However,
$$A_i = B_jC_j$$
only makes sense if it means that all components of $\mathbf{A}$ are the same. Even then this is bad notation, and it is much better to write $A_i = \lambda$ for each $i$, where $\lambda = B_jC_j$ is just a scalar.
If we stick to this kind of convention we always get the same unpaired indices on either side of an expression, in the above case $i$.
An expression like
$$A_iB_iC_i$$
makes no sense with this convention - the index i appears three times. If you are evaluating an expression such as $(\mathbf{A}\cdot\mathbf{B})(\mathbf{C}\cdot\mathbf{D})$ you must use different indices for the two sums, so
$$(\mathbf{A}\cdot\mathbf{B})(\mathbf{C}\cdot\mathbf{D}) = A_iB_i\,C_jD_j.$$
Finally, note that paired indices are dummy indices. We can change them (as long as we change both of them) to whatever we want (provided what we change them to is not already being used) without altering the result, because they are summed over. Unpaired indices are not dummy indices.
So we can write:
$$A_i = B_jC_j(\text{cats})_i = B_kC_k(\text{cats})_i$$
(where I take (cats) to represent a single variable), but NOT
$$A_i = B_jC_j(\text{cats})_j$$
or
$$A_i = B_kC_k(\text{cats})_j.$$
(Note that in the first of the two wrong expressions the right hand side has the index j three times, so it must be wrong, and in the second the unpaired index is not conserved - i is on the left hand side but not the right - so it too must be wrong.)
So that's a lot of boring detail without much gain so far, but stick with it - we'll get there.
Multiple indices and Symmetry
It's often useful to have expressions with multiple indices (these represent tensors, in general). If we stick to indices only taking the values 1, 2, 3, then a two index object $M_{ij}$ represents the elements of a 3x3 matrix ($M_{ij}$ being the entry in the ith row and the jth column).
If we have two matrices A and B, then their product is (by definition)
$$(AB)_{ij} = A_{ik}B_{kj}.$$
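If you like, you can check this numerically; here is a minimal sketch (my own illustration, assuming you have numpy available - its einsum function implements exactly this repeated-index summation):

```python
import numpy as np

# Check that the index-notation product C_ij = A_ik B_kj (sum over the
# repeated index k) is the ordinary matrix product. Illustrative sketch only.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

C_index = np.einsum('ik,kj->ij', A, B)  # repeated k is summed over
C_matrix = A @ B

assert np.allclose(C_index, C_matrix)
print("A_ik B_kj agrees with the matrix product A @ B")
```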
Objects with more than 2 indices are not as easy to interpret, so I won't try to interpret them - I'll just use them.
An object with 2 or more indices is symmetric if it is unchanged under interchange of two indices, e.g. $S_{ij}$ with $S_{ij} = S_{ji}$ is symmetric, as is the product $B_iB_j$ of the components of any vector $\mathbf{B}$ with themselves.
Note that, if we view $M_{ij}$ as the i-jth matrix element, then $M_{ji}$ is the i-jth element of the transpose. So a 2 index object is symmetric iff it corresponds to a symmetric matrix.
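As a small aside (an extra observation, not needed for what follows), any 2 index object $M_{ij}$ can be split into a symmetric and an antisymmetric part:
$$M_{ij} = \tfrac{1}{2}\left(M_{ij} + M_{ji}\right) + \tfrac{1}{2}\left(M_{ij} - M_{ji}\right),$$
where the first bracket is symmetric and the second antisymmetric under $i \leftrightarrow j$.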
An antisymmetric object is one that changes sign every time two indices are interchanged, e.g. $A_{ij} = -A_{ji}$, and
$$A_{ijk} = -A_{jik} = A_{jki}$$
(note that the 2nd and 3rd terms in the latter expression differ by interchanging two indices, so the two negative signs cancel).
Finally, if a symmetric object is contracted (i.e. summed over 2 or more indices) with an antisymmetric object, the result is zero. By this I mean that if S is symmetric and A is antisymmetric then
$$S_{ij}A_{ij} = -S_{ij}A_{ji} = -S_{ji}A_{ji} = -S_{ij}A_{ij},$$
where in the last step I have renamed the dummy indices - switching i and j. So
$$S_{ij}A_{ij} = 0.$$
(This also implies, for example, $S_{ij}A_{ijk} = 0$, and similarly for objects with more indices; all we need is two indices summed over for the argument to work.)
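For a concrete instance of this argument, let $T_{jk}$ be any antisymmetric 2 index object (the name $T$ is just for illustration) and $\mathbf{A}$ any vector. Then
$$T_{jk}A_jA_k = -T_{kj}A_jA_k = -T_{jk}A_kA_j = -T_{jk}A_jA_k \quad\Longrightarrow\quad T_{jk}A_jA_k = 0,$$
using the antisymmetry of $T$, then renaming the dummy indices, then the fact that $A_kA_j = A_jA_k$. This exact pattern reappears in the examples at the end.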
Kronecker Delta and Levi-Civita Symbols
It is handy to use the following two symbols.
Kronecker delta: $\delta_{ij}$, which is 1 if i=j and 0 otherwise.
The Kronecker delta is symmetric, $\delta_{ij} = \delta_{ji}$, and corresponds to the matrix elements of the identity matrix, diag{1,1,1}.
So an individual term $\delta_{ij}A_j$ (for fixed i and j, no sum) is equal to $A_j$ when i=j and 0 otherwise. Summing over the repeated index j then gives
$$\delta_{ij}A_j = A_i.$$
(A common mistake is to say $\delta_{ij}A_j = A_j$, but this is wrong. Why?)
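A few immediate consequences worth keeping in mind (standard facts, added here for reference):
$$\delta_{ii} = \delta_{11} + \delta_{22} + \delta_{33} = 3, \qquad \delta_{ij}\delta_{jk} = \delta_{ik}, \qquad \delta_{ij}A_iB_j = A_iB_i = \mathbf{A}\cdot\mathbf{B}.$$
In particular $\delta_{ii}$ is 3, not 1, because the repeated index is summed over.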
Levi-Civita symbol: $\epsilon_{ijk}$, which is +1 if (i,j,k) is (1,2,3), (3,1,2) or (2,3,1), is -1 if (i,j,k) is (1,3,2), (2,1,3) or (3,2,1), and is 0 otherwise (i.e. whenever any two indices are equal).
The Levi-Civita symbol is antisymmetric:
$$\epsilon_{ijk} = -\epsilon_{jik} = -\epsilon_{ikj} = -\epsilon_{kji}.$$
The Levi-Civita symbol is related to the Kronecker delta:
$$\epsilon_{ijk}\epsilon_{lmn} = \delta_{il}(\delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}) - \delta_{im}(\delta_{jl}\delta_{kn} - \delta_{jn}\delta_{kl}) + \delta_{in}(\delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}),$$
although I have not found this expression to be too useful in practice. Setting i=l (and summing over it) gives a very useful expression:
$$\epsilon_{ijk}\epsilon_{imn} = \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}.$$
(Note the positive delta terms pair indices sitting in the same place in the two Levi-Civita symbols, and the negative terms pair indices in opposite places.)
From this you can derive expressions with more summed indices, such as
$$\epsilon_{ijk}\epsilon_{ijn} = 2\delta_{kn}$$
and
$$\epsilon_{ijk}\epsilon_{ijk} = 6.$$
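All of these contractions are easy to verify numerically. Here is a short sketch (my own illustration, assuming numpy) that builds $\epsilon_{ijk}$ as a 3x3x3 array and checks the identities above with einsum:

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array (indices 0,1,2 standing in for 1,2,3).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
    eps[i, k, j] = -1.0   # odd permutations
delta = np.eye(3)

# eps_ijk eps_imn = delta_jm delta_kn - delta_jn delta_km
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = np.einsum('jm,kn->jkmn', delta, delta) - np.einsum('jn,km->jkmn', delta, delta)
assert np.allclose(lhs, rhs)

# The further contractions quoted above
assert np.allclose(np.einsum('ijk,ijn->kn', eps, eps), 2 * delta)
assert np.isclose(np.einsum('ijk,ijk->', eps, eps), 6.0)
print("epsilon-delta identities check out")
```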
The Levi-Civita symbol is useful because of its relation to the cross product:
$$\mathbf{A}\times\mathbf{B} = \epsilon_{ijk}\,\mathbf{e}_i\,A_jB_k,$$
and more importantly:
$$(\mathbf{A}\times\mathbf{B})_i = \epsilon_{ijk}A_jB_k.$$
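For instance (an extra example of how compact this makes things), the scalar triple product comes out in one line:
$$\mathbf{A}\cdot(\mathbf{B}\times\mathbf{C}) = A_i(\mathbf{B}\times\mathbf{C})_i = \epsilon_{ijk}A_iB_jC_k,$$
and the fact that $\epsilon_{ijk}$ is unchanged under cyclic permutations of its indices immediately gives $\mathbf{A}\cdot(\mathbf{B}\times\mathbf{C}) = \mathbf{B}\cdot(\mathbf{C}\times\mathbf{A}) = \mathbf{C}\cdot(\mathbf{A}\times\mathbf{B})$.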
That pretty much covers everything we're going to need.
Evaluating Expressions
Let's start with a very easy one: $\mathbf{A}\times\mathbf{A} = \mathbf{0}$. This is well known, but provides an easy check:
$$(\mathbf{A}\times\mathbf{A})_i = \epsilon_{ijk}A_jA_k.$$
Now $\epsilon_{ijk}$ is antisymmetric under interchange of j and k, but since $A_jA_k = A_kA_j$ the product $A_jA_k$ is symmetric under interchange of j and k. So the whole expression is zero.
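The same argument disposes of similar identities in one line; for example (an extra illustration):
$$\mathbf{A}\cdot(\mathbf{A}\times\mathbf{B}) = \epsilon_{ijk}A_iA_jB_k = 0,$$
since $A_iA_j$ is symmetric under interchange of $i$ and $j$ while $\epsilon_{ijk}$ is antisymmetric.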
What about the BAC CAB rule?
$$(\mathbf{A}\times(\mathbf{B}\times\mathbf{C}))_i = \epsilon_{ijk}A_j(\mathbf{B}\times\mathbf{C})_k = \epsilon_{ijk}\epsilon_{klm}A_jB_lC_m = \epsilon_{kij}\epsilon_{klm}A_jB_lC_m = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl})A_jB_lC_m = B_i(A_jC_j) - C_i(A_jB_j).$$
That is, dropping indices:
$$\mathbf{A}\times(\mathbf{B}\times\mathbf{C}) = \mathbf{B}(\mathbf{A}\cdot\mathbf{C}) - \mathbf{C}(\mathbf{A}\cdot\mathbf{B}).$$
This may look a tad messy, but it is much quicker than the normal way of doing this - expanding it out component by component.
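As a quick numerical sanity check of this result (again my own illustration, assuming numpy), the index form $\epsilon_{ijk}\epsilon_{klm}A_jB_lC_m$ translates directly into a single einsum call:

```python
import numpy as np

# Levi-Civita symbol as before.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))  # three random 3-vectors

# (a x (b x c))_i = eps_ijk eps_klm a_j b_l c_m
lhs = np.einsum('ijk,klm,j,l,m->i', eps, eps, a, b, c)
rhs = b * np.dot(a, c) - c * np.dot(a, b)

assert np.allclose(lhs, rhs)
assert np.allclose(lhs, np.cross(a, np.cross(b, c)))
print("a x (b x c) = b (a.c) - c (a.b) confirmed")
```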
I will write $\partial_i$ for $\frac{\partial}{\partial x_i}$, so that $(\nabla V)_i = \partial_iV$ and $(\nabla\times\mathbf{A})_i = \epsilon_{ijk}\partial_jA_k$.
So let's try a slightly harder one, $\nabla\times(\nabla\times\mathbf{A})$:
$$(\nabla\times(\nabla\times\mathbf{A}))_i = \epsilon_{ijk}\partial_j(\nabla\times\mathbf{A})_k = \epsilon_{ijk}\epsilon_{klm}\partial_j\partial_lA_m = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl})\partial_j\partial_lA_m = \partial_i\partial_jA_j - \partial_j\partial_jA_i,$$
or:
$$\nabla\times(\nabla\times\mathbf{A}) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A}.$$
(I have suppressed most of the detail here - once you get the hang of it you should be able to see these steps straight off, but for now, work them through in detail.)
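In the same spirit, another standard identity follows in one line from the symmetric/antisymmetric argument (an extra illustration, using the fact that partial derivatives commute on nice functions):
$$\nabla\cdot(\nabla\times\mathbf{A}) = \partial_i(\epsilon_{ijk}\partial_jA_k) = \epsilon_{ijk}\partial_i\partial_jA_k = 0,$$
since $\partial_i\partial_jA_k$ is symmetric in $i$ and $j$ while $\epsilon_{ijk}$ is antisymmetric.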
I will do one more example, an identity I doubt you'd find in most books and would have to derive for yourself anyway: $\nabla\times(\nabla V\times\mathbf{A})$. In index notation,
$$(\nabla\times(\nabla V\times\mathbf{A}))_i = \epsilon_{ijk}\epsilon_{klm}\partial_j(\partial_lV\,A_m) = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl})\,\partial_j(\partial_lV\,A_m) = A_j\partial_j\partial_iV + (\partial_jA_j)\partial_iV - A_i\partial_j\partial_jV - (\partial_jV)\partial_jA_i,$$
where the last step follows from the product rule for derivatives. Note that $\partial_j\partial_jV = \nabla^2V = 0$ (assuming V is a sufficiently nice function - specifically, that it is harmonic. This assumption is ok most of the time.) Consequently the term $A_i\partial_j\partial_jV$ vanishes. (Why?)
So
$$\nabla\times(\nabla V\times\mathbf{A}) = (\mathbf{A}\cdot\nabla)\nabla V + (\nabla\cdot\mathbf{A})\nabla V - (\nabla V\cdot\nabla)\mathbf{A}.$$
(Again: work through it)
Finally, I would like to point out that this machinery is extremely powerful when applied to non-commuting linear operators (see quantum mechanics - it is particularly useful for deriving commutators) and is a prelude to the notation used in relativity.
Do a few examples; you'll find that once you get the hang of it you can derive identities very quickly.