[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The cost of JavaScript in 2019 · V8)
[#]: via: (https://v8.dev/blog/cost-of-javascript-2019)
[#]: author: (Addy Osmani https://twitter.com/addyosmani)

The cost of JavaScript in 2019 · V8
======

**Note:** If you prefer watching a presentation over reading articles, then enjoy the video below! If not, skip the video and read on.

[“The cost of JavaScript”][1] as presented by Addy Osmani at #PerfMatters Conference 2019.

One large change to [the cost of JavaScript][2] over the last few years has been an improvement in how fast browsers can parse and compile script. **In 2019, the dominant costs of processing scripts are now download and CPU execution time.** User interaction can be delayed if the browser’s main thread is busy executing JavaScript, so optimizing bottlenecks in script execution time and network delivery can be impactful.

### Actionable high-level guidance

What does this mean for web developers? Parse & compile is **no longer as slow** as we once thought. The three things to focus on for JavaScript bundles are:

* **Improve download time**
  * Keep your JavaScript bundles small, especially for mobile devices. Small bundles improve download speeds, lower memory usage, and reduce CPU costs.
  * Avoid having just a single large bundle; if a bundle exceeds ~50–100 kB, split it up into separate smaller bundles. (With HTTP/2 multiplexing, multiple request and response messages can be in flight at the same time, reducing the overhead of additional requests.) One way to do this with dynamic `import()` is sketched after this list.
  * On mobile you’ll want to ship much less, especially because of network speeds, but also to keep memory usage low.
* **Improve execution time**
  * Avoid [Long Tasks][3] that keep the main thread busy and push out how soon pages are interactive. Post-download, script execution time is now a dominant cost.
* **Avoid large inline scripts** (as they’re still parsed and compiled on the main thread). A good rule of thumb is: if the script is over 1 kB, avoid inlining it (also because 1 kB is when [code caching][4] kicks in for external scripts).
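As a rough illustration of the bundle-splitting advice above, the sketch below uses a dynamic `import()` so that rarely used code lives in its own chunk and is only downloaded, parsed, and executed on demand. The module path, element IDs, and `renderChart` function are hypothetical, and the exact chunking behavior depends on your bundler (webpack, Rollup, and similar tools split on `import()` boundaries).

```js
// Hypothetical example: './chart.js', '#show-chart', '#chart', and renderChart
// are placeholders, not names from the article.
const button = document.querySelector('#show-chart');

button.addEventListener('click', async () => {
  // The chart code is downloaded, parsed, and executed only when the user
  // actually asks for it, so it never weighs down the initial bundle.
  const { renderChart } = await import('./chart.js');
  renderChart(document.querySelector('#chart'));
});
```

The same pattern applies to routes, below-the-fold widgets, and anything else that is not needed for the first interaction.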
### Why does download and execution time matter?

Why is it important to optimize download and execution times? Download times are critical for low-end networks. Despite the growth in 4G (and even 5G) across the world, our [effective connection types][5] remain inconsistent, with many of us running into speeds that feel like 3G (or worse) when we’re on the go.

JavaScript execution time is important for phones with slow CPUs. Due to differences in CPU, GPU, and thermal throttling, there are huge disparities between the performance of high-end and low-end phones. This matters for the performance of JavaScript, as execution is CPU-bound.

In fact, of the total time a page spends loading in a browser like Chrome, anywhere up to 30% of that time can be spent in JavaScript execution. Below is a page load from a site with a pretty typical workload (Reddit.com) on a high-end desktop machine:

![][6]

JavaScript processing represents 10–30% of time spent in V8 during page load.

On mobile, it takes 3–4× longer for a median phone (Moto G4) to execute Reddit’s JavaScript compared to a high-end device (Pixel 3), and over 6× as long on a low-end device (the <$100 Alcatel 1X):

![][7]

The cost of Reddit’s JavaScript across a few different device classes (low-end, average, and high-end).

**Note:** Reddit has different experiences for desktop and mobile web, and so the MacBook Pro results cannot be compared to the other results.

When you’re trying to optimize JavaScript execution time, keep an eye out for [Long Tasks][8] that might be monopolizing the UI thread for long periods of time. These can block critical tasks from executing even if the page looks visually ready. Break these up into smaller tasks. By splitting up your code and prioritizing the order in which it is loaded, you can get pages interactive faster and hopefully have lower input latency.

![][9]

Long tasks monopolize the main thread. You should break them up.

### What has V8 done to improve parse/compile?

Raw JavaScript parsing speed in V8 has increased 2× since Chrome 60. At the same time, raw parse (and compile) cost has become less visible/important due to other optimization work in Chrome that parallelizes it.

V8 has reduced the amount of parsing and compilation work on the main thread by an average of 40% (e.g. 46% on Facebook, 62% on Pinterest), with the highest improvement being 81% (YouTube), by parsing and compiling on a worker thread. This is in addition to the existing off-main-thread streaming parse/compile.

![][10]

V8 parse times across different versions.

We can also visualize the CPU time impact of these changes across different versions of V8 across Chrome releases. In the same amount of time it took Chrome 61 to parse Facebook’s JS, Chrome 75 can now parse both Facebook’s JS AND 6 times Twitter’s JS.

![][11]

In the time it took Chrome 61 to parse Facebook’s JS, Chrome 75 can now parse both Facebook’s JS and 6 times Twitter’s JS.

Let’s dive into how these changes were unlocked. In short, script resources can be streaming-parsed and compiled on a worker thread, meaning:

* V8 can parse+compile JavaScript without blocking the main thread.
* Streaming starts once the HTML parser encounters a `<script>` tag.
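Tying this back to the earlier guidance on inline scripts: this streaming, off-main-thread pipeline applies to scripts the browser fetches as external resources, while a large inline `<script>` block is still parsed and compiled on the main thread. Below is a minimal sketch of keeping such code in an external file; the `loadScript` helper and the `/heavy-widget.js` bundle name are hypothetical.

```js
// A sketch, not code from the article: keep large code in an external file
// (instead of a big inline <script> block) so the browser can fetch it in
// parallel and V8 can parse and compile it off the main thread as it streams in.
function loadScript(src) {
  return new Promise((resolve, reject) => {
    const el = document.createElement('script');
    el.src = src; // external resource, so it can be stream-compiled
    el.onload = () => resolve(el);
    el.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.head.appendChild(el);
  });
}

// Usage: load the (hypothetical) widget code without blocking initial rendering.
loadScript('/heavy-widget.js').catch(console.error);
```

Whether a particular script is streamed also depends on the browser version and how it is loaded, but the rule of thumb from the guidance above still holds: keep anything over about 1 kB out of inline scripts.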