Using TensorRT in TensorFlow
============================

This module provides the necessary bindings and introduces the TRT_engine_op
operator, which wraps a subgraph in a TensorRT engine.

Compilation
-----------

To compile the module, you need a local TensorRT installation (libnvinfer.so
and the corresponding include files). During the configuration step, TensorRT
support should be enabled and the installation path set. If TensorRT was
installed through a package manager (deb, rpm), the configure script should
find the necessary components on the system automatically. If it was installed
from a tar package, you have to supply the path to the library location during
configuration, as in the sketch below.

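The exact prompts vary between TensorFlow versions; the following is an
illustrative transcript, with /opt/tensorrt standing in for wherever your
TensorRT tar package was unpacked:

```
./configure
...
Do you wish to build TensorFlow with TensorRT support? [y/N]: y
Please specify the location where TensorRT is installed: /opt/tensorrt
```

After configuration, build and package TensorFlow as usual:
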
```
bazel build --config=cuda --config=opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/
```

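The second command writes a tensorflow wheel into /tmp/; install it to make
the module importable (the exact filename depends on the version and platform
you built for):

```
# Illustrative wheel name; match it to the file actually produced in /tmp/.
pip install /tmp/tensorflow-*.whl
```
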
Once the package is installed, the TensorRT transformation is available. An
example use is shown below.

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt
# ... create and train or load model
gdef = sess.graph.as_graph_def()
trt_gdef = trt.create_inference_graph(
    gdef,                      # original graph_def
    ["output"],                # name of output node(s)
    max_batch_size,            # maximum batch size to run inference with
    max_workspace_size_bytes)  # max memory for TensorRT to use
tf.reset_default_graph()
tf.import_graph_def(graph_def=trt_gdef)
# ... run inference
```

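To make the snippet above concrete, here is a minimal self-contained sketch.
The toy model, the node names (`input`, `output`), and the size limits are
assumptions chosen for illustration, not part of the module's API:

```python
import numpy as np
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Build a toy model; "input" and "output" are hypothetical node names.
with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, [None, 1024], name="input")
    w = tf.Variable(tf.ones([1024, 10]))
    tf.identity(tf.matmul(x, w), name="output")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Freeze variables into constants so the graph_def is self-contained.
        gdef = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(), ["output"])

trt_gdef = trt.create_inference_graph(
    gdef, ["output"],
    max_batch_size=8,                  # assumed batch limit
    max_workspace_size_bytes=1 << 25)  # assumed 32 MiB workspace

tf.reset_default_graph()
tf.import_graph_def(graph_def=trt_gdef)  # nodes get the default "import/" prefix
with tf.Session() as sess:
    out = sess.run("import/output:0",
                   feed_dict={"import/input:0": np.ones([8, 1024], np.float32)})
```

Note that tf.import_graph_def prefixes imported node names with "import/" by
default, which is why the feed and fetch names above carry that prefix.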