Any difference in inference speed between using TorchScript models and ONNX models?

Hi all,

I was wondering if anyone has compared inference speed between TorchScript and ONNX models in the OpenSearch ml-commons APIs and the neural-search ingest pipeline?

I saw this article: Unleashing the Power of ONNX for Speedier SBERT Inference | by Swaraj Patil | Medium, and I'm hoping its results also apply to the OpenSearch implementation.
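
For context, here is a rough micro-benchmark sketch one could use to compare the two formats locally, outside of OpenSearch, before trusting the article's numbers. It assumes `transformers`, `torch`, and `onnxruntime` are installed; the model ID (`sentence-transformers/all-MiniLM-L6-v2`), batch size, and iteration counts are just placeholders, not anything OpenSearch-specific:

```python
import time
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "sentence-transformers/all-MiniLM-L6-v2"  # placeholder SBERT-style model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# torchscript=True makes forward() return tuples, which keeps tracing happy
model = AutoModel.from_pretrained(MODEL_ID, torchscript=True)
model.eval()

sentences = ["OpenSearch neural search benchmark sentence."] * 8  # arbitrary batch
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

def bench(fn, n=50, warmup=5):
    """Average latency in ms per call, after a short warmup."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1000

# --- TorchScript: trace the model with example inputs ---
traced = torch.jit.trace(model, (enc["input_ids"], enc["attention_mask"]))
with torch.no_grad():
    ts_ms = bench(lambda: traced(enc["input_ids"], enc["attention_mask"]))

# --- ONNX: export, then run with onnxruntime on CPU ---
torch.onnx.export(
    model,
    (enc["input_ids"], enc["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"}},
    opset_version=14,
)

import onnxruntime as ort
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
ort_inputs = {
    "input_ids": enc["input_ids"].numpy(),
    "attention_mask": enc["attention_mask"].numpy(),
}
onnx_ms = bench(lambda: sess.run(None, ort_inputs))

print(f"TorchScript: {ts_ms:.2f} ms/batch")
print(f"ONNX Runtime: {onnx_ms:.2f} ms/batch")
```

Of course, numbers from a standalone script like this won't account for OpenSearch's own model-serving overhead in ml-commons, so I'd still be interested in measurements taken through the actual ingest pipeline.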

Regards