How fast? #11
How quickly does inference run on CPU and GPU?
Answered by Nicholasli1995, Oct 22, 2021
CPU is not supported or tested. With the current parameters, a forward pass takes 60-90 ms on a 12 GB Titan Xp, and up to 23 instances can be processed in parallel. On average, the KITTI val set contains 4.2 cars/image (fewer than 23), so a whole image also takes 60-90 ms. To further improve efficiency, consider a smaller network, model compression, or quantization.
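To check these numbers on your own hardware, you can time repeated forward passes. Below is a minimal, framework-agnostic timing sketch; the callable you pass in (e.g. a wrapped model forward) is hypothetical, and on a CUDA device you should synchronize around the timed region (e.g. `torch.cuda.synchronize()`) since GPU kernel launches are asynchronous:

```python
import time

def time_forward(forward, warmup=10, iters=50):
    """Return the mean latency of `forward()` in milliseconds.

    Warmup iterations are run first so one-time costs (allocation,
    kernel compilation, caching) do not inflate the measurement.
    For GPU models, synchronize the device inside `forward` or
    around the timed loop so timings reflect completed work.
    """
    for _ in range(warmup):
        forward()
    start = time.perf_counter()
    for _ in range(iters):
        forward()
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1000.0

if __name__ == "__main__":
    # Dummy stand-in for a model forward pass.
    ms = time_forward(lambda: sum(i * i for i in range(10000)))
    print(f"{ms:.2f} ms/iter")
```

If the per-image latency you measure is much higher than 60-90 ms, check that the model and inputs are actually on the GPU and that instances from the same image are batched together rather than processed one by one.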