# Vector spaces

So far we have talked about *geometric* vectors, which are like arrows with magnitude and direction. This is an intuitive geometric interpretation, and it’s useful for some things, but there’s a lot more to vectors than just that. In the broader context, vectors live in *vector spaces*. There are many different vector spaces, but we will only worry about the Euclidean ones. A Euclidean vector is an ordered list of real numbers, one per dimension. In
$\displaystyle \mathbb{R}^{2}$ or two-space, there are two components; in
$\displaystyle \mathbb{R}^{3}$ or three-space, there are three.

The notation we use for a vector in two-space is $\displaystyle \vec{v} = \left [ \Delta{} x , \Delta{} y \right ]$, where the deltas go from the vector’s tail to its tip. Say we wanted to designate a vector from the origin to point P(3,−2). This is simply $\displaystyle \overrightharpoon{O P} = \left [ 3 , - 2 \right ]$. Since we often interpret our Euclidean vectors in the Cartesian coordinate system, we also call them Cartesian vectors.

What if you know points A and B, and want a vector that takes you from A to B? We can simplify this problem using the identity that we learned in the section on addition. Consider this:

$\displaystyle \overrightharpoon{A B} = \overrightharpoon{A O} + \overrightharpoon{O B} = - \overrightharpoon{O A} + \overrightharpoon{O B} = \overrightharpoon{O B} - \overrightharpoon{O A}$.

In words, you can get a vector from A to B by doing
$\displaystyle \overrightharpoon{P B} - \overrightharpoon{P A}$,
where P is some common reference point (usually the origin). You just need to remember that it’s *tip minus tail* and not the other way around.
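As a quick sketch of the tip-minus-tail rule in plain Python (the coordinates here are made up for illustration):

```python
# Tip minus tail: the vector from A to B is OB - OA,
# computed component by component.
A = (1.0, 4.0)   # tail point (hypothetical coordinates)
B = (3.0, -2.0)  # tip point

AB = tuple(b - a for a, b in zip(A, B))
print(AB)  # -> (2.0, -6.0)
```

Subtracting in the other order would give the vector from B to A, pointing the opposite way.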

How do we add or subtract Euclidean vectors? It’s actually very easy—just do the operation to each component separately:

$\displaystyle \left [ a , b \right ] + \left [ c , d \right ] = \left [ a + c , b + d \right ]$.

It works the same way for subtraction. Scalar multiplication distributes:

$\displaystyle k \left [ x , y \right ] = \left [ k x , k y \right ]$.
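The componentwise rules above translate directly into code; here is a minimal sketch with arbitrary example values:

```python
# Componentwise addition, subtraction, and scalar multiplication
u = [1.0, 2.0]
v = [3.0, -4.0]
k = 2.5

add = [a + b for a, b in zip(u, v)]   # [4.0, -2.0]
sub = [a - b for a, b in zip(u, v)]   # [-2.0, 6.0]
scl = [k * a for a in u]              # [2.5, 5.0]
```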

There are a few advantages to this over the geometric vectors we were using before. For one thing, it’s a lot less work. It’s also much easier to be precise: you could add a hundred vectors this way without breaking a sweat. Try that with geometric vectors—your answer will be buried so deep in trig functions and square roots that you will be forced to round off.

To get the magnitude of $\displaystyle \vec{v} = \left [ x , y \right ]$, you can use the Pythagorean theorem:

$\displaystyle \left \lvert \vec{v} \right \rvert = \sqrt{x^{2} + y^{2}}$.

You can find the direction of
$\displaystyle \vec{v}$
by drawing a triangle and using the inverse tangent function. Then, you can state the vector with the usual magnitude-direction representation—for example, [4 m, 3 m] becomes 5 m [E 37º N], since $\displaystyle \tan^{- 1} \left ( 3 / 4 \right ) \approx 36.9 ^ { \circ }$. Going the other way (from magnitude-direction to *x* and *y* components) is called *resolving* the vector, and you can do it by sketching a right triangle and using sine and cosine.

All of this is straightforward in $\displaystyle \mathbb{R}^{3}$ as well. You just use one more component. Addition, subtraction, and scalar multiplication work the same. To get the magnitude, just include $\displaystyle z^{2}$ in the sum.

Normalizing a geometric vector is easy: just change the magnitude to 1. It’s a little bit more work with these Euclidean vectors. Say we want the unit vector parallel to $\displaystyle \vec{v} = \left [ 3 , 5 , 6 \right ]$:

$\displaystyle \hat{v} = \frac{\vec{v}}{\left \lvert \vec{v} \right \rvert} = \frac{\left [ 3 , 5 , 6 \right ]}{\sqrt{3^{2} + 5^{2} + 6^{2}}} = \frac{1}{\sqrt{70}} \left [ 3 , 5 , 6 \right ]$.

That’s it. You can distribute the coefficient if you want, but there is really no need—why write $\displaystyle \sqrt{70}$ three times instead of once?
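The same normalization, sketched in Python for the vector [3, 5, 6], with a check that the result really has magnitude 1:

```python
import math

v = (3.0, 5.0, 6.0)

# Divide each component by the magnitude
mag = math.sqrt(sum(c * c for c in v))   # sqrt(70)
v_hat = tuple(c / mag for c in v)

# Sanity check: the unit vector's magnitude is 1
# (up to floating-point rounding)
check = math.sqrt(sum(c * c for c in v_hat))
```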