Men on the Chessboard

What are the men on the chessboard telling us to do? Things are made a bit more difficult by the fact that they don’t speak.

All we can do is watch them.

Part of the trick might be noticing that they are getting the rules they use from the definitions of things.

Assume you are given a space and you are tasked with confirming that it is an inner product space. The other side of the coin is: if you are told something is an inner product space, then all of its rules become tools you can use. Below are three rules of an inner product space that we will use in the example that follows:

  • x,y,z are vectors
  • s is a scalar
    • scaled vectors are vectors, so sx,sy,sz are vectors
  • <x,y> = <y,x> [commutation]
  • <sx,y> = s<x,y> [homogeneity]
    • since <x,sy> = <sy,x> by commutation, homogeneity gives <sy,x> = s<y,x>, and commutation again gives s<y,x> = s<x,y>, therefore
    • <x,sy> = s<x,y>
  • <x+y,z> = <x,z> + <y,z> [additivity]
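The three rules can be checked numerically. As a minimal sketch, the code below uses the standard dot product on R^3 as the inner product (an assumption for illustration; the vectors and scalar are arbitrary choices, and any valid inner product obeys the same rules):

```python
def inner(x, y):
    """Standard dot product on R^3, used here as the inner product."""
    return sum(a * b for a, b in zip(x, y))

x, y, z = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]
s = 2.5

# commutation: <x,y> = <y,x>
assert inner(x, y) == inner(y, x)

# homogeneity: <sx,y> = s<x,y>
sx = [s * a for a in x]
assert inner(sx, y) == s * inner(x, y)

# additivity: <x+y,z> = <x,z> + <y,z>
x_plus_y = [a + b for a, b in zip(x, y)]
assert inner(x_plus_y, z) == inner(x, z) + inner(y, z)
```

A check like this does not prove the rules, of course, but it is a quick way to catch a misremembered rule before using it in a derivation.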

In one example we are given a vector \overrightarrow v = \alpha^1e_1 + \alpha^2e_2 + \alpha^3e_3, expressed in terms of basis vectors e_1, e_2, e_3.

<\overrightarrow v, e_1> = <\alpha^1e_1 + \alpha^2e_2 + \alpha^3e_3, e_1>

Part of the trick here is to remember that each product of a component and a basis vector is a scaled vector, and thus a vector. We can then see the above as fitting the template <x+y,z> = <x,z> + <y,z> (applied twice, since there are three terms).

The next step is NOT an inner product calculation. Rather, we are using additivity:

<\overrightarrow v, e_1> = <\alpha^1e_1, e_1> + <\alpha^2e_2, e_1> + <\alpha^3e_3, e_1>

Next we use homogeneity:

<\overrightarrow v, e_1> = \alpha^1<e_1, e_1> + \alpha^2<e_2, e_1> + \alpha^3<e_3, e_1>

We can now start doing inner product calculations.

If the three vectors e_1, e_2, e_3 are mutually orthogonal (each vector is orthogonal to the other two), then two of the above terms drop out, because <e_2, e_1> = 0 and <e_3, e_1> = 0.

<\overrightarrow v, e_1> = \alpha^1<e_1, e_1>

Recognizing that something is a number is helpful because we can move it around (“constants move with impunity”). Since e_1 is a nonzero basis vector, <e_1, e_1> is a positive number, so we can safely divide both sides by it:

\dfrac {<\overrightarrow v, e_1>} {<e_1, e_1>} = \alpha^1
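The formula above can be demonstrated numerically. As a sketch, the code below recovers \alpha^1 from inner products alone, using the dot product on R^3 and an orthogonal (but deliberately not normalized) basis; the basis and coefficients are made up for illustration:

```python
def inner(x, y):
    """Standard dot product on R^3, used here as the inner product."""
    return sum(a * b for a, b in zip(x, y))

# Mutually orthogonal but NOT orthonormal basis of R^3.
e1, e2, e3 = [2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 1.0]

# The "unknown" coefficients, used only to build v.
a1, a2, a3 = 1.5, -2.0, 4.0
v = [a1 * u + a2 * w + a3 * t for u, w, t in zip(e1, e2, e3)]

# Recover alpha^1 via <v, e1> / <e1, e1>.
alpha1 = inner(v, e1) / inner(e1, e1)
print(alpha1)  # 1.5
```

Note that the division by <e_1, e_1> is what makes this work even when the basis vectors are not unit length.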

One takeaway: we couldn’t have reached this result if we were working with basis vectors that were not mutually orthogonal. That might provide incentive to transform to an orthogonal basis if a problem starts us in coordinates that are not.
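To see why orthogonality matters, here is a small counterexample, again using the dot product on R^3 with made-up values: when one basis vector is not orthogonal to another, the cross term <e_2, e_1> no longer vanishes, and the quotient no longer recovers the coefficient.

```python
def inner(x, y):
    """Standard dot product on R^3, used here as the inner product."""
    return sum(a * b for a, b in zip(x, y))

# e2 is NOT orthogonal to e1, so <e2, e1> = 1, not 0.
e1, e2, e3 = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]

a1, a2, a3 = 1.5, -2.0, 4.0
v = [a1 * u + a2 * w + a3 * t for u, w, t in zip(e1, e2, e3)]

ratio = inner(v, e1) / inner(e1, e1)
print(ratio)  # -0.5, not the true coefficient 1.5
```

The surviving cross term a2*<e_2, e_1> contaminates the quotient, which is exactly what mutual orthogonality prevented in the derivation above.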

Rules so far:

  • Study the rules that make spaces (vector spaces, inner product spaces, tensor spaces, etc.)
  • If your basis vectors are not mutually orthogonal, see whether you can transform to a basis that is.