TensorFlow

After training, we can optimize a frozen graph (or even a dynamic graph) for inference by removing training-specific and debug-specific nodes, fusing common operations, and pruning code that is never used or reached.

Code example (TF1-style API; the output node name below is a placeholder for your model's actual output tensor):

```python
import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

input_graph = tf.GraphDef()

# Read in a frozen model
with tf.gfile.Open('frozentensorflowModel.pb', 'rb') as f:
    input_graph.ParseFromString(f.read())

output_graph = optimize_for_inference_lib.optimize_for_inference(
    input_graph,
    ['inputTensor'],              # input node names
    ['outputTensor'],             # output node names (placeholder: use your model's)
    tf.float32.as_datatype_enum)  # dtype of the input placeholder
```

Hidden states are the unknowns we try to detect or predict. The hidden states have a relationship among themselves described by the transition probabilities. Observations are the evidence variables that we have a priori. Observations and states have a relationship between them described by the emission probabilities.
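The pieces above can be sketched with a tiny hidden Markov model. This is a minimal NumPy illustration, not from the text: the weather states, observation labels, and all probabilities are made-up example values. It runs the forward algorithm, which sums over all hidden-state paths to get the likelihood of an observation sequence.

```python
import numpy as np

states = ['Rainy', 'Sunny']            # hidden states (hypothetical example)
observations = [0, 2, 1]               # observed evidence: walk=0, shop=1, clean=2
start_p = np.array([0.6, 0.4])         # initial distribution over hidden states
trans_p = np.array([[0.7, 0.3],        # transition probabilities P(state_t | state_{t-1})
                    [0.4, 0.6]])
emit_p = np.array([[0.1, 0.4, 0.5],    # emission probabilities P(observation | state)
                   [0.6, 0.3, 0.1]])

# Forward algorithm: marginalize over the hidden states step by step
alpha = start_p * emit_p[:, observations[0]]
for obs in observations[1:]:
    alpha = (alpha @ trans_p) * emit_p[:, obs]

likelihood = alpha.sum()               # P(observation sequence under the model)
print(likelihood)
```

Swapping the final sum for an argmax over paths gives the Viterbi algorithm, which recovers the most likely sequence of hidden states instead of the sequence likelihood.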

A gradient is a vector made up of the partial derivatives of a multivariate function. Gradient of an image: ∇f = [∂f/∂x, ∂f/∂y]. The gradient of a function points in the direction of the most rapid increase in intensity (getting brighter), and the magnitude of that vector is the rate of that increase.
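A quick NumPy sketch of this idea, using a synthetic ramp image (illustrative only, not from the text): brightness increases left to right, so the gradient should point along +x with magnitude equal to the per-pixel intensity step.

```python
import numpy as np

# Synthetic 5x5 grayscale image whose intensity rises by 1 per pixel in x
img = np.tile(np.arange(5, dtype=float), (5, 1))

gy, gx = np.gradient(img)       # ∂f/∂y (rows), ∂f/∂x (columns)
magnitude = np.hypot(gx, gy)    # how fast the intensity changes
direction = np.arctan2(gy, gx)  # angle of the steepest increase

print(magnitude[2, 2])  # 1.0 : intensity rises by 1 per pixel
print(direction[2, 2])  # 0.0 : steepest increase points along +x
```

In practice, image libraries estimate these derivatives with filters such as the Sobel operator, which adds smoothing to suppress noise before differencing.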