## Introduction

In machine learning, vectors are a natural way to represent numeric data. When working with vectors, we encounter the following basic operations:

- Add two vectors
- Subtract two vectors
- Multiply a vector with a scalar (i.e., a number)
- Norm (i.e., magnitude or length) of a vector
- Dot product of two vectors

In this article, I will introduce basic operations on vector objects through three approaches:

- The simplest approach is to represent vectors as lists of numbers in Python
- The second approach is to use the `NumPy` library in Python
- The last approach is to use the `TensorFlow` library

## Background

### Vectors as Lists of Numbers

In this approach, we will represent vectors as lists of numbers in Python. For example, we can create two vectors (`v` and `w`) as two lists of numbers:

```
v = [1, 2]
w = [2, 3]
```

Two vectors can be added together to form a new vector. The result of adding `v` and `w` is:

`v + w = [3, 5]`

In Python, we can use the built-in `zip` function to add two vectors:

```
def vector_add(v, w):
    """adds corresponding elements"""
    return [v_i + w_i
            for v_i, w_i in zip(v, w)]
```
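Note that Python's `+` operator on lists concatenates rather than adding elementwise, which is why `vector_add` is needed at all; also be aware that `zip` silently truncates to the shorter argument, so a length mismatch passes unnoticed. A quick check:

```python
def vector_add(v, w):
    """adds corresponding elements"""
    return [v_i + w_i for v_i, w_i in zip(v, w)]

v = [1, 2]
w = [2, 3]

print(v + w)                     # list concatenation: [1, 2, 2, 3]
print(vector_add(v, w))          # elementwise sum: [3, 5]
print(vector_add([1, 2, 3], w))  # zip truncates silently: [3, 5]
```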

Similarly, one vector can be subtracted from another to form a new vector:

`v - w = [-1, -1]`

In Python:

```
def vector_subtract(v, w):
    """subtracts corresponding elements"""
    return [v_i - w_i
            for v_i, w_i in zip(v, w)]
```

A vector can also be multiplied by a scalar to form a new vector, for example:

`3*v = [3, 6]`

In Python:

```
def scalar_multiply(c, v):
    """c is a number, v is a vector"""
    return [c * v_i for v_i in v]
```
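Combining `vector_add` and `scalar_multiply` already lets us build useful helpers. As a sketch, here is a hypothetical `vector_mean` function (my addition, not part of the article) that averages a list of same-length vectors, a computation that appears when calculating centroids:

```python
from functools import reduce

def vector_add(v, w):
    """adds corresponding elements"""
    return [v_i + w_i for v_i, w_i in zip(v, w)]

def scalar_multiply(c, v):
    """c is a number, v is a vector"""
    return [c * v_i for v_i in v]

def vector_mean(vectors):
    """componentwise mean of a list of same-length vectors"""
    n = len(vectors)
    return scalar_multiply(1 / n, reduce(vector_add, vectors))

print(vector_mean([[1, 2], [2, 3], [3, 4]]))  # [2.0, 3.0]
```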

The dot product of two vectors is the sum of their componentwise products. The dot product of `v` and `w` is:

`v.w = 1*2 + 2*3 = 8`

In Python:

```
def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
    return sum(v_i * w_i
               for v_i, w_i in zip(v, w))
```
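Beyond summing componentwise products, the dot product also encodes the angle between two vectors via `cos(theta) = dot(v, w) / (|v| * |w|)`. A minimal sketch (the `angle_between` helper is my addition; the magnitudes are computed inline with `math.sqrt`):

```python
import math

def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

def angle_between(v, w):
    """angle in radians between v and w"""
    cos_theta = dot(v, w) / (math.sqrt(dot(v, v)) * math.sqrt(dot(w, w)))
    return math.acos(cos_theta)

print(angle_between([1, 0], [0, 1]))  # perpendicular vectors: pi/2
print(angle_between([1, 2], [2, 4]))  # parallel vectors: ~0.0
```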

With the `dot` function above, we can compute the magnitude (or length) of a vector. The magnitude is also called the norm of a vector. The following function (named `norm`) computes the norm of a vector:

```
import math

def norm(v):
    return math.sqrt(dot(v, v))
```
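With `vector_subtract` and `norm` in hand, the Euclidean distance between two vectors is one line away. A small sketch (the `distance` helper is my addition, not from the article):

```python
import math

def vector_subtract(v, w):
    """subtracts corresponding elements"""
    return [v_i - w_i for v_i, w_i in zip(v, w)]

def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

def distance(v, w):
    """Euclidean distance between v and w"""
    return norm(vector_subtract(v, w))

print(distance([1, 2], [2, 3]))  # sqrt(2), about 1.4142
```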

### Using the NumPy Library

The second approach is to use the `NumPy` library, which includes many functions that support basic operations on vectors. If we create the vectors as lists of numbers, we must convert them to arrays because the functions below operate on `NumPy` array objects.

Before using the `NumPy` library, we must import it:

`import numpy as np`

The functions `vector_add`, `vector_subtract`, `scalar_multiply`, `dot`, and `norm` can be rewritten as follows:

```
def vector_add(v, w):
    """adds corresponding elements"""
    return np.array(v) + np.array(w)

def vector_subtract(v, w):
    """subtracts corresponding elements"""
    return np.array(v) - np.array(w)

def scalar_multiply(c, v):
    """c is a number, v is a vector"""
    return c * np.array(v)

def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
    return np.dot(np.array(v), np.array(w))

def norm(v):
    return np.linalg.norm(np.array(v))
```
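In practice, the wrapper functions above are optional: NumPy's arithmetic operators already work elementwise on arrays, and `np.dot` and `np.linalg.norm` accept plain Python lists as well. A quick sketch of the same operations written directly:

```python
import numpy as np

v = np.array([1, 2])
w = np.array([2, 3])

print((v + w).tolist())          # [3, 5]
print((v - w).tolist())          # [-1, -1]
print((3 * v).tolist())          # [3, 6]
print(float(np.dot(v, w)))       # 8.0
print(float(np.linalg.norm(v)))  # sqrt(5), about 2.2360679
```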

### Using the TensorFlow Library

`TensorFlow` is an open-source library developed by the Google Brain team and first released in November 2015. Before working with `TensorFlow`, we need to understand the following basic concepts:

- `Graph`: the layout of the learning process; it does not include data
- `Data`: the examples used for training; there are two kinds, inputs and targets
- `Session`: where we feed the graph with data, or `Session = Graph + Data`. We do this by using `placeholders` - gates through which we introduce examples

The operations on vectors can be implemented by using functions from the `TensorFlow` library:

```
import tensorflow as tf
# creating the Graph
vec_1 = tf.placeholder(tf.float32)
vec_2 = tf.placeholder(tf.float32)
scalar = tf.placeholder(tf.float32)
vector_add = tf.add(vec_1, vec_2)
vector_subtract = tf.subtract(vec_1, vec_2)
scalar_multiply = tf.multiply(scalar, vec_1)
norm = tf.norm(vec_1)
dot = tf.tensordot(vec_1, vec_2, 1)
```

We can feed the graph with data through a session:

```
#############DATA
v = [1, 2]
w = [2, 3]
c = 3
##########SESSION
with tf.Session() as sess:
    result_add = sess.run(vector_add, feed_dict={vec_1: v, vec_2: w})
    result_sub = sess.run(vector_subtract, feed_dict={vec_1: v, vec_2: w})
    result_mul = sess.run(scalar_multiply, feed_dict={scalar: c, vec_1: v})
    result_norm = sess.run(norm, feed_dict={vec_1: v})
    result_dot = sess.run(dot, feed_dict={vec_1: v, vec_2: w})
```

## Using the Code

To test the first approach, we can create a file named *vectors_lists.py*:

```
import math
#########VECTORS AS LISTS##########
def vector_add(v, w):
    """adds corresponding elements"""
    return [v_i + w_i
            for v_i, w_i in zip(v, w)]

def vector_subtract(v, w):
    """subtracts corresponding elements"""
    return [v_i - w_i
            for v_i, w_i in zip(v, w)]

def scalar_multiply(c, v):
    """c is a number, v is a vector"""
    return [c * v_i for v_i in v]

def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
    return sum(v_i * w_i
               for v_i, w_i in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))
##########DATA#############
v = [1, 2]
w = [2, 3]
scalar = 3
#########OUTPUT##########
print(vector_add(v, w))
print(vector_subtract(v, w))
print(scalar_multiply(scalar, v))
print(norm(v))
print(dot(v, w))
```

If we run this file, the output looks like this:

```
[3, 5]
[-1, -1]
[3, 6]
2.23606797749979
8
```

To test the second approach, we can create a file named *vectors_numpy.py*:

```
import numpy as np
#########VECTORS AND NUMPY##########
def vector_add(v, w):
    """adds corresponding elements"""
    return np.array(v) + np.array(w)

def vector_subtract(v, w):
    """subtracts corresponding elements"""
    return np.array(v) - np.array(w)

def scalar_multiply(c, v):
    """c is a number, v is a vector"""
    return c * np.array(v)

def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
    return np.dot(np.array(v), np.array(w))

def norm(v):
    return np.linalg.norm(np.array(v))
##########DATA#############
v = [1, 2]
w = [2, 3]
scalar = 3
#########DISPLAY VECTORS##########
print(vector_add(v, w).tolist())
print(vector_subtract(v, w).tolist())
print(scalar_multiply(scalar, v).tolist())
print(norm(v))
print(dot(v, w))
```

The result looks like this:

```
[3, 5]
[-1, -1]
[3, 6]
2.23606797749979
8
```

And to test the last approach, we can create a file named *vectors_tensorflow.py*:

```
import tensorflow as tf
############GRAPH
vec_1 = tf.placeholder(tf.float32)
vec_2 = tf.placeholder(tf.float32)
scalar = tf.placeholder(tf.float32)
vector_add = tf.add(vec_1, vec_2)
vector_subtract = tf.subtract(vec_1, vec_2)
scalar_multiply = tf.multiply(scalar, vec_1)
norm = tf.norm(vec_1)
dot = tf.tensordot(vec_1, vec_2, 1)
#############DATA
v = [1, 2]
w = [2, 3]
c = 3
##########SESSION
with tf.Session() as sess:
    result_add = sess.run(vector_add, feed_dict={vec_1: v, vec_2: w})
    result_sub = sess.run(vector_subtract, feed_dict={vec_1: v, vec_2: w})
    result_mul = sess.run(scalar_multiply, feed_dict={scalar: c, vec_1: v})
    result_norm = sess.run(norm, feed_dict={vec_1: v})
    result_dot = sess.run(dot, feed_dict={vec_1: v, vec_2: w})
###########OUTPUT
print(result_add.tolist())
print(result_sub.tolist())
print(result_mul.tolist())
print(result_norm)
print(result_dot)
```

The result:

```
[3.0, 5.0]
[-1.0, -1.0]
[3.0, 6.0]
2.236068
8.0
```
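Note that the listing above targets TensorFlow 1.x; in TensorFlow 2.x, `tf.placeholder` and `tf.Session` were removed in favor of eager execution, so the same ops run immediately on `tf.constant` values. A rough TF2 equivalent, offered as a sketch rather than a tested port:

```python
import tensorflow as tf

# eager execution: no Graph/Session split, ops evaluate immediately
v = tf.constant([1.0, 2.0])
w = tf.constant([2.0, 3.0])
c = tf.constant(3.0)

print(tf.add(v, w).numpy().tolist())       # [3.0, 5.0]
print(tf.subtract(v, w).numpy().tolist())  # [-1.0, -1.0]
print(tf.multiply(c, v).numpy().tolist())  # [3.0, 6.0]
print(float(tf.norm(v)))                   # about 2.236068
print(float(tf.tensordot(v, w, 1)))        # 8.0
```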

## Points of Interest

Starting from the simplest things is one of the best ways to learn `TensorFlow`. By applying different approaches to operations on vector objects, I hope that we `TensorFlow` beginners will understand how to use it before moving on to more complex tasks in the future.

## History

- 19th January, 2019: Initial version