I cannot get inference working with TensorRT's context.execute_async_v3(...). There are many examples that use context.execute_async_v2(...), but v2 is now deprecated.

The TensorRT developer guide says to specify buffers for inputs and outputs with context.set_tensor_address(name, ptr).
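For what it's worth, here is the buffer-binding pattern I have pieced together from the docs so far. This is an untested sketch: the engine filename is a placeholder, and using pycuda for the device allocations is my own assumption.

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)  # keep the runtime alive alongside the engine
with open("model.engine", "rb") as f:  # placeholder path to a serialized engine
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one device buffer per I/O tensor and register its address,
# as the developer guide describes. Static shapes are assumed here;
# a dynamic dimension would make trt.volume() negative at this point.
buffers = {}
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    dtype = np.dtype(trt.nptype(engine.get_tensor_dtype(name)))
    shape = engine.get_tensor_shape(name)
    buffers[name] = cuda.mem_alloc(trt.volume(shape) * dtype.itemsize)
    context.set_tensor_address(name, int(buffers[name]))
```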

The API also has context.set_input_shape(name, tuple(input_batch.shape)) and set_output_allocator(), but after days of experimenting I have gotten nowhere.
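Continuing the sketch above, this is the per-inference part as I understand it (the tensor names "input" and "output" are placeholders for whatever engine.get_tensor_name() actually returns, and the input shape and float32 output dtype are assumptions):

```python
stream = cuda.Stream()
input_batch = np.random.randn(1, 3, 224, 224).astype(np.float32)  # dummy data

# With dynamic dimensions the input shape must be fixed before enqueueing;
# for a static-shape engine this call is redundant but harmless.
context.set_input_shape("input", tuple(input_batch.shape))

# Copy in, enqueue, copy out -- all on the same stream, then synchronize.
cuda.memcpy_htod_async(buffers["input"], input_batch, stream)
context.execute_async_v3(stream.handle)
output = np.empty(tuple(context.get_tensor_shape("output")), dtype=np.float32)
cuda.memcpy_dtoh_async(output, buffers["output"], stream)
stream.synchronize()
```

If the engine does have dynamic dimensions, I gather the allocation loop would have to run after set_input_shape() instead (using context.get_tensor_shape() for the then-concrete shapes), with set_output_allocator() as the alternative for outputs whose size is only known after execution.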

Can someone please provide an example or a suggestion?
