baseline
javascript backend: 0.012 seconds
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7yQJMBJ27aaTDOe1utAm7xXaz8LV5408LB_tBlXTZDrWW-d6jvWuDvEeTD5o4fv7dZN4hZNIutFf8ZbFaWKIafDl7zW-e5qLCtf3SYnSs21BRYarrVULxK0pSnngJyVXomgfhJufJVGRO/s400/richards-without-v8-natives.py.png)
with manual pre-jitting
javascript backend + JIT Natives: 0.0106 seconds
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzP3ZpIeweLHJbPmBcKqUbtAol-bLGUGbWPJTvsc5uZUpGqlr3FXQVtxBn2yZVwjTHFurnEyAYKeR0HOnq9DujzZDnqd5PiItJyq2qI1yFffAz8okXgieaVu2eRveKJvFVqkwZxwNuhG5-/s400/richards.py.png)
The Chrome V8 JIT is already well tuned for the Richards benchmark, so it's hard to make it run any faster. By using a V8 native call to force a method to be optimized before the benchmark timer starts, it is possible to squeeze out a small speedup, from 0.012 seconds down to 0.0106 seconds. It would be interesting if V8 exposed other native calls that could improve performance. This is enabled with the command line option `--v8-natives` and the syntax `v8->(F(args))`, where `F(args)` is a function or method called with dummy arguments; the call warms up the function so V8 can infer its argument types.
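The same warm-up idea can be sketched in plain JavaScript, without the translator's `v8->(...)` syntax. The function name below is illustrative, and the comments about what the compiler emits are an assumption: presumably `v8->(F(args))` produces a dummy call plus a V8 native such as `%OptimizeFunctionOnNextCall`, which only works when node or d8 is launched with `--allow-natives-syntax`.

```javascript
// Hypothetical hot function standing in for a Richards benchmark method.
function schedule(taskCount) {
  let total = 0;
  for (let i = 0; i < taskCount; i++) total += i;
  return total;
}

// Warm-up: call once with dummy arguments before the timer starts, so V8
// observes the argument types and can optimize ahead of the timed loop.
// With --allow-natives-syntax the compiler could additionally emit:
//   %OptimizeFunctionOnNextCall(schedule);
schedule(10);

// Timed run: by now the function is (ideally) already JIT-optimized.
const t0 = Date.now();
const result = schedule(1000000);
const elapsed = Date.now() - t0;
console.log(result, elapsed + 'ms');
```

This is only the pattern, not the generated code; the measurable effect on a benchmark as small as Richards is a fraction of a millisecond per run.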