After training, we can optimize a frozen graph (or even a dynamic graph) by removing training-specific and debug-specific nodes, fusing common operations, and stripping out code that is never used or reached.
Code Example
import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

inputGraph = tf.GraphDef()
# Read in a frozen model
with tf.gfile.Open('frozentensorflowModel.pb', "rb") as f:
    data2read = f.read()
    inputGraph.ParseFromString(data2read)

# Strip training-only nodes and fuse operations for inference
outputGraph = optimize_for_inference_lib.optimize_for_inference(
    inputGraph,
    ["inputTensor"],       # input node names
    ["output/softmax"],    # output node names
    tf.int32.as_datatype_enum)

# Serialize the optimized graph; note the binary mode "wb"
f = tf.gfile.FastGFile('OptimizedGraph.pb', "wb")
f.write(outputGraph.SerializeToString())
f.close()