If you were to execute the above in a Python script or shell, it would run, but it wouldn’t actually do anything.

Remember: this is just the definition part of the process.

To get a brief taste of what running a graph looks like, we could add the following two lines at the end to get our graph to output the final node:

```python
sess = tf.Session()
sess.run(e)
```

If you ran this in an interactive environment, such as the Python shell or the Jupyter/IPython Notebook, you would see the correct output:

```python
>>> sess = tf.Session()
>>> sess.run(e)
```

Data Types

Tensors have a data type.

The basic units of data that pass through a graph are numerical, Boolean, or string elements.

When we print out the Tensor object c from our last code example, we see that its data type is a floating-point number.

Since we didn’t specify the type of data, TensorFlow inferred it automatically.

For example, 9 is regarded as an integer, while anything with a decimal point, like 9.1, is regarded as a floating-point number.
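As a quick sanity check outside TensorFlow, NumPy applies the same inference rule to literals, so we can see the idea directly (this is an analogy, not TensorFlow code):

```python
import numpy as np

# An integer literal is inferred as an integer dtype
print(np.array(9).dtype.kind)    # 'i' for integer

# A literal with a decimal point is inferred as a floating-point dtype
print(np.array(9.1).dtype.kind)  # 'f' for floating point
```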

We can explicitly choose what data type we want to work with by specifying it when we create the Tensor object.

We can see what type of data was set for a given Tensor object by using the attribute dtype:

```python
# data_types.py
c = tf.constant(9.0, dtype=tf.float64)
print(c)
print(c.dtype)
```

Out:

```
Tensor("Const_10:0", shape=(), dtype=float64)
<dtype: 'float64'>
```

Constants

In TensorFlow, constants are created using the function constant, which has the signature constant(value, dtype=None, shape=None, name='Const', verify_shape=False): value is the actual constant value to be used in further computation, dtype is the data type parameter (e.g., float32/64, int8/16, etc.), shape gives optional dimensions, name is an optional name for the tensor, and the last parameter is a Boolean indicating whether the shape of the values should be verified.

If you need constants with specific values inside your training model, then the constant object can be used as in the following example:

```python
z = tf.constant(5.2, name="x", dtype=tf.float32)
```

Tensor shape

The shape of a tensor is the number of elements in each dimension.

TensorFlow automatically infers shapes during graph construction.

The shape of a tensor describes both the number of dimensions in a tensor as well as the length of each dimension.

Tensor shapes can either be Python lists or tuples containing an ordered set of integers: there are as many numbers in the list as there are dimensions, and each number describes the length of its corresponding dimension.

For example, the list [3, 4] describes the shape of a 2-D tensor of length 3 in its first dimension and length 4 in its second dimension.

Note that either tuples (()) or lists ([]) can be used to define shapes.
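The same convention holds in NumPy, which makes it easy to check interactively (again an analogy, not TensorFlow code):

```python
import numpy as np

# A 3-by-2 matrix: two dimensions, of lengths 3 and 2
m = np.zeros((3, 2))
print(m.shape)  # (3, 2)
print(m.ndim)   # 2

# Shapes can be given as lists or tuples interchangeably
v = np.zeros([3])
print(v.shape)  # (3,)
```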

Let’s take a look at more examples to illustrate this further:

```python
# tensor_shapes.py

# Shapes that specify a 0-D Tensor (scalar)
# e.g. any single number: 7, 1, 3, 4, etc.
s_0_list = []
s_0_tuple = ()

# Shape that describes a vector of length 3
# e.g. [1, 2, 3]
s_1 = [3]

# Shape that describes a 3-by-2 matrix
# e.g. [[1, 2],
#       [3, 4],
#       [5, 6]]
s_2 = (3, 2)
```

We can assign a flexible length by passing in None as a dimension’s value.

Passing None as a shape will tell TensorFlow to allow a tensor of any shape. That is, a tensor with any number of dimensions and any length for each dimension:

```python
# Shape for a vector of any length:
s_1_flex = [None]

# Shape for a matrix that is any number of rows tall, and 3 columns wide:
s_2_flex = (None, 3)

# Shape of a 3-D Tensor with length 2 in its first dimension, and variable
# length in its second and third dimensions:
s_3_flex = [2, None, None]

# Shape that could be any Tensor
s_any = None
```

The tf.shape Op can be used to find the shape of a tensor if you need it in your graph. It simply takes in the Tensor object you’d like to find the shape for, and returns it as an int32 vector:

```python
import tensorflow as tf

# ...create some sort of mystery tensor

# Find the shape of the mystery tensor
shape = tf.shape(mystery_tensor, name="mystery_shape")
```

Tensors are just a superset of matrices! tf.shape, like any other Operation, doesn’t run until it is executed inside of a Session.
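For contrast, NumPy returns the shape eagerly, with no session involved; the eager version is a handy mental model for what the tf.shape node will eventually compute (analogy only, not TensorFlow code):

```python
import numpy as np

# In NumPy the shape is available immediately, no session needed
mystery_array = np.ones((2, 3, 4))
print(np.shape(mystery_array))  # (2, 3, 4)
```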

Names

Tensor objects can be identified by a name. This name is an intrinsic string name. As with dtype, we can use the .name attribute to see the name of the object:

```python
# names.py
with tf.Graph().as_default():
    c1 = tf.constant(4, dtype=tf.float64, name='c')
    c2 = tf.constant(4, dtype=tf.int32, name='c')
    print(c1.name)
    print(c2.name)
```

Out:

```
c:0
c_1:0
```

The name of a Tensor object is simply the name of its corresponding operation ("c", concatenated with a colon), followed by the index of that tensor in the outputs of the operation that produced it (it is possible to have more than one).
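As a rough sketch of the uniquifying rule above (a hypothetical helper, not TensorFlow's actual implementation), repeated requests for the same name get an underscore-numbered suffix:

```python
def make_unique(name, seen):
    """Return name on first request, then name_1, name_2, ... on repeats."""
    if name not in seen:
        seen[name] = 0
        return name
    seen[name] += 1
    return "{}_{}".format(name, seen[name])

seen = {}
print(make_unique("c", seen))  # c
print(make_unique("c", seen))  # c_1
print(make_unique("c", seen))  # c_2
```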

Name scopes

In TensorFlow, the nodes of a large, complex graph can be grouped together to make the graph easier to manage. Nodes can be grouped by name. This is done by using the tf.name_scope("prefix") Op together with the useful with clause.

```python
# name_scopes.py
with tf.Graph().as_default():
    c1 = tf.constant(4, dtype=tf.float64, name='c')

    with tf.name_scope("prefix_name"):
        c2 = tf.constant(4, dtype=tf.int32, name='c')
        c3 = tf.constant(4, dtype=tf.float64, name='c')

    print(c1.name)
    print(c2.name)
    print(c3.name)
```

Out:

```
c:0
prefix_name/c:0
prefix_name/c_1:0
```

In this example we’ve grouped objects contained in variables c2 and c3 under the scope prefix_name, which shows up as a prefix in their names.

Prefixes are especially useful when we would like to divide a graph into subgraphs with some semantic meaning.
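The prefixing itself is simple string composition; as a toy sketch (hypothetical helper, not TensorFlow's implementation), nested scopes join with slashes in front of the node name:

```python
def scoped_name(scopes, name):
    """Join a stack of enclosing scope names with the node name."""
    return "/".join(list(scopes) + [name])

print(scoped_name([], "c"))                  # c
print(scoped_name(["prefix_name"], "c"))     # prefix_name/c
print(scoped_name(["outer", "inner"], "c"))  # outer/inner/c
```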

Feed dictionary

Feed is used to temporarily replace the output of an operation with a tensor value.

The parameter feed_dict is used to override Tensor values in the graph, and it expects a Python dictionary object as input.

The keys in the dictionary are handles to Tensor objects that should be overridden, while the values can be numbers, strings, lists, or NumPy arrays (as described previously).

feed_dict is also useful for specifying input values.

Note: The values must be of the same type (or able to be converted to the same type) as the Tensor key.
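The override idea can be sketched with a toy evaluator (hypothetical code, not how TensorFlow is implemented): computing a node normally recurses into its inputs, but a feed dictionary short-circuits that lookup for any node it covers:

```python
def eval_node(node, graph, feed=None):
    """Evaluate `node` in `graph`, a dict mapping name -> ('const', value)
    or (function, input_names). `feed` overrides any node's result."""
    feed = feed or {}
    if node in feed:
        return feed[node]
    op, args = graph[node]
    if op == 'const':
        return args
    return op(*(eval_node(d, graph, feed) for d in args))

graph = {
    'two':   ('const', 2),
    'five':  ('const', 5),
    'three': ('const', 3),
    'a': (lambda x, y: x + y, ('two', 'five')),   # a = 2 + 5
    'b': (lambda x, y: x * y, ('a', 'three')),    # b = a * 3
}

print(eval_node('b', graph))             # 21: a is computed as 7
print(eval_node('b', graph, {'a': 15}))  # 45: a is overridden with 15
```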

Let’s show how we can use feed_dict to overwrite the value of a in the previous graph:

```python
# feed_dict.py
import tensorflow as tf

# Create Operations, Tensors, etc. (using the default graph)
a = tf.add(2, 5)
b = tf.multiply(a, 3)

# Start up a `Session` using the default graph
sess = tf.Session()

# Define a dictionary that says to replace the value of `a` with 15
replace_dict = {a: 15}

# Run the session, passing in `replace_dict` as the value to `feed_dict`
sess.run(b, feed_dict=replace_dict)  # returns 45

# Run the graph, write summary statistics, etc.
# ...

# Close the session, release its resources
sess.close()
```

Variables

TensorFlow uses special objects called Variables.

Unlike other Tensor objects, which are “refilled” with data each time we run a session, Variables can maintain a fixed state in the graph. Variables, like other Tensors, can be used as input for other operations in the graph.

Using Variables is done in two stages. First, the tf.Variable() function is called in order to create a Variable and define what value it will be initialized with. Then, an initialization operation is performed by running the session with the tf.global_variables_initializer() method, which allocates the memory for the Variable and sets its initial values.

Like other Tensor objects, Variables are computed only when the model runs, as we can see in the following example:

```python
# variable.py
init_val = tf.random_normal((1, 5), 0, 1)
var = tf.Variable(init_val, name='var')
print("pre run:\n{}".format(var))

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    post_var = sess.run(var)

print("\npost run:\n{}".format(post_var))
```

Out:

```
pre run:
Tensor("var/read:0", shape=(1, 5), dtype=float32)

post run:
[[ 0.85962135  0.64885855  0.25370994 -0.37380791  0.63552463]]
```

Note that if we run the code again, we see that a new variable is created each time, as indicated by the automatic concatenation of _1 to its name:

```
pre run:
Tensor("var_1/read:0", shape=(1, 5), dtype=float32)
```

Note: To reuse the same variable, we can use the tf.get_variable() function instead of tf.Variable().

Placeholders

Placeholders are structures designated by TensorFlow for feeding input values. They can also be thought of as empty Variables that will be filled with data later on. They are used by first constructing our graph, and feeding them with the input data only when it is executed. Placeholders have an optional shape argument. If a shape is not fed or is passed as None, then the placeholder can be fed with data of any size:

```python
ph = tf.placeholder(tf.float32, shape=(None, 10))
```

Whenever a placeholder is defined, it must be fed with some input values or else an exception will be thrown.
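The compatibility rule for a shape spec like (None, 10) can be sketched as follows (a hypothetical helper for illustration, not a TensorFlow API):

```python
def shape_compatible(spec, data_shape):
    """Check data_shape against a placeholder-style shape spec.

    A spec of None matches any shape; None in any position
    matches any length in that dimension.
    """
    if spec is None:
        return True
    if len(spec) != len(data_shape):
        return False
    return all(s is None or s == d for s, d in zip(spec, data_shape))

print(shape_compatible((None, 10), (4, 10)))  # True: any number of rows
print(shape_compatible((None, 10), (4, 9)))   # False: second dim must be 10
print(shape_compatible(None, (7, 3, 2)))      # True: any shape at all
```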

```python
# placeholders.py
import tensorflow as tf

x = tf.placeholder("float", None)
y = x * 2

with tf.Session() as session:
    result = session.run(y, feed_dict={x: [1, 2, 3]})
    print(result)
```

First, we import tensorflow as normal. Then we create a placeholder called x, i.e. a place in memory where we will store a value later on. Then, we create a Tensor called y, which is the operation of multiplying x by 2.

Note that we haven’t defined any initial values for x yet.

We now have an operation (y) defined, and can run it in a session.

We create a session object, and then run just the y variable.

Note that this means that if we defined a much larger graph of operations, we can run just a small segment of the graph.

This subgraph evaluation is actually a big selling point of TensorFlow, and one that isn’t present in many other libraries that do similar things.

Running y requires knowledge about the values of x.

We define these inside the feed_dict argument to run.

We state here that the values of x are [1, 2, 3].

We run y, giving us the result of [2, 4, 6].

Conclusion

TensorFlow is a powerful framework that makes working with mathematical expressions and multi-dimensional arrays a breeze, something fundamentally necessary in machine learning. We have covered the basics of TensorFlow, and this will get us started on our journey into TensorFlow land. In subsequent tutorials, we will see how to leverage the TensorFlow library to solve optimization problems and build a predictive model. We will also train a model to solve the XOR problem using Linear Regression and Logistic Regression.

Thanks for reading, and please feel free to comment. Cheers!