Friday, February 17, 2017

High-performance ES2015 and beyond

Over the last couple of months, the V8 team has focused on bringing the performance of newly added ES2015 and other even more recent JavaScript features up to par with that of their transpiled ES5 counterparts.

Motivation

Before we go into the details of the various improvements, we should first consider why the performance of ES2015+ features matters despite the widespread usage of Babel in modern web development:
  1. First of all, there are new ES2015 features that are only polyfilled on demand, for example the Object.assign builtin. When Babel transpiles object spread properties (which are heavily used by many React and Redux applications), it relies on Object.assign instead of an ES5 equivalent if the VM supports it.
  2. Polyfilling ES2015 features typically increases code size, which contributes significantly to the current web performance crisis, especially on mobile devices common in emerging markets. So the cost of just delivering, parsing and compiling the code can be fairly high, even before you get to the actual execution cost.
  3. And last but not least, client-side JavaScript is only one of the environments that rely on the V8 engine. There’s also Node.js for server-side applications and tools, where developers don’t need to transpile to ES5 code, but can directly use the features supported by the relevant V8 version in the target Node.js release.
Let’s consider the following code snippet from the Redux documentation:


function todoApp(state = initialState, action) {
  switch (action.type) {
    case SET_VISIBILITY_FILTER:
      return { ...state, visibilityFilter: action.filter }
    default:
      return state
  }
}

There are two things in that code that demand transpilation: the default parameter for state and the spreading of state into the object literal. Babel generates the following ES5 code:


"use strict";

var _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; };

function todoApp() {
  var state = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : initialState;
  var action = arguments[1];

  switch (action.type) {
    case SET_VISIBILITY_FILTER:
      return _extends({}, state, { visibilityFilter: action.filter });
    default:
      return state;
  }
}

Now imagine that Object.assign is orders of magnitude slower than the polyfilled _extends generated by Babel. In that case upgrading from a browser that doesn’t support Object.assign to an ES2015 capable version of the browser would be a serious performance regression and probably hinder adoption of ES2015 in the wild.
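
If you want to gauge the difference on a given VM, a quick microbenchmark along the following lines can compare the native builtin against the pure polyfill branch of Babel’s helper. This is a rough sketch of our own (a proper comparison would use a real benchmarking harness and account for warm-up):

// The pure polyfill branch of Babel's _extends helper:
function extendsPolyfill(target) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    for (var key in source) {
      if (Object.prototype.hasOwnProperty.call(source, key)) {
        target[key] = source[key];
      }
    }
  }
  return target;
}

function benchmark(name, fn) {
  var state = { visibilityFilter: 'SHOW_ALL', todos: [] };
  var start = Date.now();
  for (var i = 0; i < 1e6; i++) {
    fn({}, state, { visibilityFilter: 'SHOW_COMPLETED' });
  }
  console.log(name + ': ' + (Date.now() - start) + ' ms');
}

benchmark('Object.assign', Object.assign);
benchmark('polyfill', extendsPolyfill);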

This example also highlights another important drawback of transpilation: The generated code that is shipped to the user is usually considerably bigger than the ES2015+ code that the developer initially wrote. In the example above, the original code is 203 characters (176 bytes gzipped) whereas the generated code is 588 characters (367 bytes gzipped). That’s already a factor of two increase in size. Let’s look at another example from the Async Iterators for JavaScript proposal:


async function* readLines(path) {
  let file = await fileOpen(path);

  try {
    while (!file.EOF) {
      yield await file.readLine();
    }
  } finally {
    await file.close();
  }
}

Babel translates these 187 characters (150 bytes gzipped) into a whopping 2987 characters (971 bytes gzipped) of ES5 code, not even counting the regenerator runtime that is required as an additional dependency:


"use strict";

var _asyncGenerator = function () { function AwaitValue(value) { this.value = value; } function AsyncGenerator(gen) { var front, back; function send(key, arg) { return new Promise(function (resolve, reject) { var request = { key: key, arg: arg, resolve: resolve, reject: reject, next: null }; if (back) { back = back.next = request; } else { front = back = request; resume(key, arg); } }); } function resume(key, arg) { try { var result = gen[key](arg); var value = result.value; if (value instanceof AwaitValue) { Promise.resolve(value.value).then(function (arg) { resume("next", arg); }, function (arg) { resume("throw", arg); }); } else { settle(result.done ? "return" : "normal", result.value); } } catch (err) { settle("throw", err); } } function settle(type, value) { switch (type) { case "return": front.resolve({ value: value, done: true }); break; case "throw": front.reject(value); break; default: front.resolve({ value: value, done: false }); break; } front = front.next; if (front) { resume(front.key, front.arg); } else { back = null; } } this._invoke = send; if (typeof gen.return !== "function") { this.return = undefined; } } if (typeof Symbol === "function" && Symbol.asyncIterator) { AsyncGenerator.prototype[Symbol.asyncIterator] = function () { return this; }; } AsyncGenerator.prototype.next = function (arg) { return this._invoke("next", arg); }; AsyncGenerator.prototype.throw = function (arg) { return this._invoke("throw", arg); }; AsyncGenerator.prototype.return = function (arg) { return this._invoke("return", arg); }; return { wrap: function wrap(fn) { return function () { return new AsyncGenerator(fn.apply(this, arguments)); }; }, await: function await(value) { return new AwaitValue(value); } }; }();

var readLines = function () {
  var _ref = _asyncGenerator.wrap(regeneratorRuntime.mark(function _callee(path) {
    var file;
    return regeneratorRuntime.wrap(function _callee$(_context) {
      while (1) {
        switch (_context.prev = _context.next) {
          case 0:
            _context.next = 2;
            return _asyncGenerator.await(fileOpen(path));

          case 2:
            file = _context.sent;
            _context.prev = 3;

          case 4:
            if (file.EOF) {
              _context.next = 11;
              break;
            }

            _context.next = 7;
            return _asyncGenerator.await(file.readLine());

          case 7:
            _context.next = 9;
            return _context.sent;

          case 9:
            _context.next = 4;
            break;

          case 11:
            _context.prev = 11;
            _context.next = 14;
            return _asyncGenerator.await(file.close());

          case 14:
            return _context.finish(11);

          case 15:
          case "end":
            return _context.stop();
        }
      }
    }, _callee, this, [[3,, 11, 15]]);
  }));

  return function readLines(_x) {
    return _ref.apply(this, arguments);
  };
}();

This is a 650% increase in size (the generic _asyncGenerator function might be shareable depending on how you bundle your code, so you can amortize some of that cost across multiple uses of async iterators). We don’t think it’s viable to ship only code transpiled to ES5 long-term, as the increase in size will not only affect download time/cost, but will also add additional overhead to parsing and compilation. If we really want to drastically improve page load and snappiness of modern web applications, especially on mobile devices, we have to encourage developers to not only use ES2015+ when writing code, but also to ship that instead of transpiling to ES5. Only deliver fully transpiled bundles to legacy browsers that don’t support ES2015. For VM implementors, this vision means we need to support ES2015+ features natively and provide reasonable performance.
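
In case you want to reproduce size numbers like the ones quoted above, a few lines of Node.js suffice. The snippet below is a sketch with hypothetical file names; exact byte counts depend on the gzip level and on whitespace in the sources:

const fs = require('fs');
const zlib = require('zlib');

function measure(label, path) {
  const source = fs.readFileSync(path);
  const gzipped = zlib.gzipSync(source).length;
  console.log(label + ': ' + source.length + ' bytes raw, ' + gzipped + ' bytes gzipped');
}

// Put the two versions of the snippet into these (hypothetical) files:
measure('ES2015+ source', 'readlines.es2015.js');
measure('Babel output', 'readlines.es5.js');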

Measurement methodology

As described above, absolute performance of ES2015+ features is not really an issue at this point. Instead, the highest priority currently is to ensure that the performance of ES2015+ features is on par with their naive ES5 counterparts and, even more importantly, with the version generated by Babel. Conveniently, there was already a project called six-speed by Kevin Decker that accomplishes more or less exactly what we needed: a performance comparison of ES2015 features vs. naive ES5 vs. code generated by transpilers.

Six-Speed benchmark


So we decided to take that as the basis for our initial ES2015+ performance work. We forked it and added a couple of benchmarks. We focused on the most serious regressions first, i.e. line items where the slowdown from the naive ES5 version to the recommended ES2015+ version was above 2x, because our fundamental assumption is that the naive ES5 version will be at least as fast as the somewhat spec-compliant version that Babel generates.
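
To make the three-way comparison concrete, here is roughly what one such line item looks like for default parameters. This is our own sketch of the idea, not the literal six-speed benchmark source:

// ES2015 version under test:
function addES2015(a, b = 1) {
  return a + b;
}

// "Naive" hand-written ES5 equivalent:
function addNaive(a, b) {
  if (b === undefined) b = 1;
  return a + b;
}

// What Babel generates (spec-compliant check via arguments.length):
function addBabel(a) {
  var b = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : 1;
  return a + b;
}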

A modern architecture for a modern language

In the past, V8 had difficulties optimizing the kinds of language features that are found in ES2015+. For example, it never became feasible to add exception handling (i.e. try/catch/finally) support to Crankshaft, V8’s classic optimizing compiler. This meant V8’s ability to optimize an ES6 feature like for...of, which essentially has an implicit finally clause, was limited. Crankshaft’s limitations and the overall complexity of adding new language features to full-codegen, V8’s baseline compiler, made it inherently difficult to ensure new ES features were added and optimized in V8 as quickly as they were standardized.
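
To see where that implicit finally clause comes from, consider what happens when a for...of loop exits early: the spec requires the iterator to be closed, which amounts to a try/finally wrapped around the loop. The following is a simplified desugaring of our own, not V8’s actual lowering:

// for...of with an early exit:
function firstTwo(iterable) {
  const result = [];
  for (const value of iterable) {
    result.push(value);
    if (result.length === 2) break; // early exit must close the iterator
  }
  return result;
}

// Simplified desugaring exposing the implicit try/finally:
function firstTwoDesugared(iterable) {
  const result = [];
  const iterator = iterable[Symbol.iterator]();
  let exhausted = false;
  try {
    let step;
    while (!(step = iterator.next()).done) {
      result.push(step.value);
      if (result.length === 2) return result; // the finally block still runs
    }
    exhausted = true;
    return result;
  } finally {
    // IteratorClose: on break, throw, or early return, call iterator.return()
    if (!exhausted && typeof iterator.return === 'function') {
      iterator.return();
    }
  }
}

console.log(firstTwo([1, 2, 3]));          // [1, 2]
console.log(firstTwoDesugared([1, 2, 3])); // [1, 2]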

Fortunately, Ignition and TurboFan (V8’s new interpreter and compiler pipeline) were designed to support the entire JavaScript language from the beginning, including advanced control flow, exception handling, and most recently for...of and destructuring from ES2015. The tight integration between Ignition and TurboFan makes it possible to add new features quickly and to optimize them fast and incrementally.

Many of the improvements we achieved for modern language features were only feasible with the new Ignition/TurboFan pipeline. Ignition and TurboFan proved especially critical to optimizing generators and async functions. Generators had long been supported by V8, but were not optimizable due to control-flow limitations in Crankshaft. Async functions are essentially sugar on top of generators, so they fall into the same category. The new compiler pipeline leverages Ignition to make sense of the AST and to generate bytecodes that de-sugar complex generator control flow into simpler local control-flow bytecodes. TurboFan can more easily optimize the resulting bytecodes since it doesn’t need to know anything specific about generator control flow, just how to save and restore a function’s state at yield points.

How JavaScript generators are represented in Ignition and TurboFan
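
To get an intuition for that de-sugaring, here is a loose, hand-written illustration of what a generator conceptually becomes once its control flow is flattened into a resumable state machine (V8’s actual bytecode-level representation is different):

function* counter() {
  yield 1;
  yield 2;
}

// Conceptually flattened into a state machine that saves and
// restores its position across next() calls:
function counterDesugared() {
  let state = 0;
  return {
    [Symbol.iterator]() { return this; },
    next() {
      switch (state) {
        case 0: state = 1; return { value: 1, done: false };
        case 1: state = 2; return { value: 2, done: false };
        default: return { value: undefined, done: true };
      }
    }
  };
}

console.log([...counter()]);          // [1, 2]
console.log([...counterDesugared()]); // [1, 2]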

State of the union

Our short-term goal was to reach less than a 2x slowdown on average as soon as possible. We started with the worst tests, and from Chrome M54 to Chrome M58 (Canary) we managed to reduce the number of tests with a slowdown above 2x from 16 to 8, while at the same time reducing the worst slowdown from 19x in M54 to just 6x in M58 (Canary). We also significantly reduced the average and median slowdown during that period:


You can see a clear trend towards parity of ES2015+ and ES5. On average we improved performance relative to ES5 by over 47%. Here are some highlights that we addressed since M54.


Most notably we improved performance of new language constructs that are based on iteration, like the spread operator, destructuring and for...of loops. For example, using array destructuring


function fn() {
  var [c] = data;
  return c;
}

is now as fast as the naive ES5 version


function fn() {
  var c = data[0];
  return c;
}

and a lot faster (and shorter) than the Babel generated code:


"use strict";

var _slicedToArray = function () { function sliceIterator(arr, i) { var _arr = []; var _n = true; var _d = false; var _e = undefined; try { for (var _i = arr[Symbol.iterator](), _s; !(_n = (_s = _i.next()).done); _n = true) { _arr.push(_s.value); if (i && _arr.length === i) break; } } catch (err) { _d = true; _e = err; } finally { try { if (!_n && _i["return"]) _i["return"](); } finally { if (_d) throw _e; } } return _arr; } return function (arr, i) { if (Array.isArray(arr)) { return arr; } else if (Symbol.iterator in Object(arr)) { return sliceIterator(arr, i); } else { throw new TypeError("Invalid attempt to destructure non-iterable instance"); } }; }();

function fn() {
  var _data = data,
      _data2 = _slicedToArray(_data, 1),
      c = _data2[0];

  return c;
}

You can check out the High-Speed ES2015 talk we gave at the last Munich NodeJS User Group meetup for additional details:

We are committed to continuing to improve the performance of ES2015+ features. In case you are interested in the nitty-gritty details, please have a look at V8’s ES2015 and beyond performance plan.

Posted by Benedikt Meurer @bmeurer, EcmaScript Performance Engineer

Tuesday, February 14, 2017

Help us test the future of V8!

The V8 team is currently working on a new default compiler pipeline that will help us bring future speedups to real-world JavaScript. You can preview the new pipeline in Chrome Canary today to help us verify that there are no surprises when we roll out the new configuration for all Chrome channels.

The new compiler pipeline uses the Ignition interpreter and the TurboFan compiler to execute all JavaScript (in place of the classic pipeline, which consisted of the full-codegen and Crankshaft compilers). A random subset of Chrome Canary and Chrome Developer channel users are already testing the new configuration. However, anyone can opt in to the new pipeline (or revert to the old one) by flipping a flag in about:flags.

You can help test the new pipeline by opting in and using it with Chrome on your favorite web sites. If you are a web developer, please test your web applications with the new compiler pipeline. If you notice a regression in stability, correctness, or performance, please report the issue to the V8 bug tracker.

How to enable the new pipeline

  1. Install the latest Canary
  2. Open URL "about:flags" in Chrome
  3. Search for "Experimental JavaScript Compilation Pipeline" and set it to "Enabled"

How to report problems

Please let us know if your browsing experience changes significantly when using the new pipeline over the default pipeline. If you are a web developer, please test the performance of the new pipeline on your (mobile) web application to see how it is affected. If you discover that your web application is behaving strangely (or tests are failing), please let us know:
  1. Ensure that you have correctly enabled the new pipeline as outlined in the previous section.
  2. Create a bug on V8's bug tracker.
  3. Attach sample code which we can use to reproduce the problem.
Posted by Daniel Clifford @expatdanno, Original Munich V8 Brewer

Thursday, February 9, 2017

One small step for Chrome, one giant heap for V8

V8 has a hard limit on its heap size. This serves as a safeguard against applications with memory leaks. When an application reaches this hard limit, V8 performs a series of last-resort garbage collections. If the garbage collections do not help to free memory, V8 stops execution and reports an out-of-memory failure. Without the hard limit, a memory-leaking application could use up all system memory, hurting the performance of other applications.

Ironically, this safeguard mechanism makes investigation of memory leaks harder for JavaScript developers. The application can run out of memory before the developer manages to inspect the heap in DevTools. Moreover, the DevTools process itself can run out of memory because it uses an ordinary V8 instance. For example, taking a heap snapshot of this demo aborts execution due to out-of-memory on the current stable Chrome.

Historically, the V8 heap limit was conveniently set to fit the signed 32-bit integer range with some margin. Over time this convenience led to sloppy code in V8 that mixed types of different bit widths, effectively breaking the ability to increase the limit. Recently we cleaned up the garbage collector code, enabling the use of larger heap sizes. DevTools already makes use of this feature, and taking a heap snapshot in the previously mentioned demo works as expected in the latest Chrome Canary.

We also added a feature in DevTools to pause the application when it is close to running out of memory. This feature is useful to investigate bugs that cause the application to allocate a lot of memory in a short period of time. When running this demo with the latest Chrome Canary, DevTools pauses the application before the out-of-memory failure and increases the heap limit, giving the user a chance to inspect the heap, evaluate expressions on the console to free memory and then resume execution for further debugging.


V8 embedders can increase the heap limit using the set_max_old_space_size function of the ResourceConstraints API. But watch out: some phases in the garbage collector have a linear dependency on the heap size, so garbage collection pauses may increase with larger heaps.
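
For Node.js users, the same limit is exposed on the command line via the --max-old-space-size flag. A quick way to inspect the current limit from JavaScript is sketched below; the heap_size_limit field comes from the built-in v8 module, and the default value varies by platform and version:

// Run with e.g.: node --max-old-space-size=4096 heap-limit.js
const v8 = require('v8');

const limitMB = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log('Current V8 heap limit: ' + limitMB.toFixed(0) + ' MB');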

Posted by guardians of heap Ulan Degenbaev, Hannes Payer, Michael Lippautz and DevTools master Alexey Kozyatinskiy.

Monday, February 6, 2017

V8 Release 5.7


Every six weeks, we create a new branch of V8 as part of our release process. Each version is branched from V8’s git master immediately before a Chrome Beta milestone. Today we’re pleased to announce our newest branch, V8 version 5.7, which will be in beta until it is released in coordination with Chrome 57 Stable in several weeks. V8 5.7 is filled with all sorts of developer-facing goodies. We’d like to give you a preview of some of the highlights in anticipation of the release.

Performance improvements

Native async functions as fast as promises

Async functions are now approximately as fast as the same code written with promises. The execution performance of async functions quadrupled according to our microbenchmarks. During the same period, overall promise performance also doubled.

Async performance improvements in V8 on Linux x64
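
In practice this means that hand-rewriting an async function as an explicit promise chain should no longer buy you a speedup. For instance, the two functions below are now roughly equivalent in cost (queryDatabase is a hypothetical stand-in of our own for real asynchronous work):

// Hypothetical async data source (stand-in for real I/O):
function queryDatabase(id) {
  return Promise.resolve({ user: { id: id, name: 'Ada' } });
}

// Async function version:
async function getUser(id) {
  const response = await queryDatabase(id);
  return response.user;
}

// Equivalent hand-written promise version:
function getUserWithPromises(id) {
  return queryDatabase(id).then(response => response.user);
}

getUser(1).then(console.log);             // { id: 1, name: 'Ada' }
getUserWithPromises(1).then(console.log); // { id: 1, name: 'Ada' }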

Continued ES2015 improvements

V8 continues to make ES2015 language features faster so that developers can use new features without incurring performance costs. The spread operator, destructuring and generators are now approximately as fast as their naive ES5 equivalents.

RegExp 15% faster

Migrating RegExp functions from a self-hosted JavaScript implementation to one that hooks into TurboFan’s code generation architecture has yielded ~15% faster overall RegExp performance. More details can be found in the dedicated blog post.

New library features

Several recent additions to the ECMAScript standard library are included in this release. Two String methods, padStart and padEnd, provide helpful string formatting features, while Intl.DateTimeFormat.prototype.formatToParts gives authors the ability to customize their date/time formatting in a locale-aware manner.
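
For example (return values shown in comments; the exact parts array depends on the requested locale and options):

// String padding:
console.log('5'.padStart(3, '0')); // '005'
console.log('v8'.padEnd(4, '.'));  // 'v8..'

// Locale-aware date formatting, broken into customizable parts:
const formatter = new Intl.DateTimeFormat('en-US', {
  year: 'numeric', month: 'long', day: 'numeric'
});
console.log(formatter.formatToParts(new Date(2017, 1, 6)));
// [ { type: 'month', value: 'February' }, { type: 'literal', value: ' ' },
//   { type: 'day', value: '6' }, { type: 'literal', value: ', ' },
//   { type: 'year', value: '2017' } ]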

WebAssembly enabled

Chrome 57 (which includes V8 5.7) will be the first release to enable WebAssembly by default. For more details, see the getting started documents on webassembly.org and the API documentation on MDN.

V8 API additions

Please check out our summary of API changes. This document is regularly updated a few weeks after each major release.

Developers with an active V8 checkout can use 'git checkout -b 5.7 -t branch-heads/5.7' to experiment with the new features in V8 5.7. Alternatively you can subscribe to Chrome's Beta channel and try the new features out yourself soon.

PromiseHook

This C++ API allows users to implement profiling code that traces through the lifecycle of promises. This enables Node’s upcoming AsyncHook API, which lets you build async context propagation.

The PromiseHook API provides four lifecycle hooks: init, resolve, before, and after. The init hook runs when a new promise is created, the resolve hook runs when a promise is resolved, and the before and after hooks run right before and after a PromiseReactionJob, respectively. For more information please check out the tracking issue and design document.
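
As a loose illustration, here is when each hook would fire for a simple promise chain. The hooks themselves are registered from C++, so the comments below are just annotations of our own:

const p = Promise.resolve(1); // init fires for p; resolve fires once p settles
const q = p.then(v => v + 1); // init fires for q
// When the reaction job for q runs on the microtask queue:
//   before fires, the callback v => v + 1 executes, then after fires,
//   and resolve fires for q once it settles with the value 2.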

Posted by the V8 team

Tuesday, January 10, 2017

Speeding up V8 Regular Expressions

This blog post covers V8's recent migration of RegExp's built-in functions from a self-hosted JavaScript implementation to one that hooks straight into our new code generation architecture based on TurboFan.

V8’s RegExp implementation is built on top of Irregexp, which is widely considered to be one of the fastest RegExp engines. While the engine itself encapsulates the low-level logic to perform pattern matching against strings, functions on the RegExp prototype such as RegExp.prototype.exec do the additional work required to expose its functionality to the user.

Historically, various components of V8 have been implemented in JavaScript. Until recently, regexp.js has been one of them, hosting the implementation of the RegExp constructor, all of its properties as well as its prototype’s properties.

Unfortunately this approach has disadvantages, including unpredictable performance and expensive transitions to the C++ runtime for low-level functionality. The recent addition of built-in subclassing in ES6 (allowing JavaScript developers to provide their own customized RegExp implementation) has resulted in a further RegExp performance penalty, even if the RegExp built-in is not subclassed. These regressions could not be fully addressed in the self-hosted JavaScript implementation.

We therefore decided to migrate the RegExp implementation away from JavaScript. However, preserving performance turned out to be more difficult than expected. An initial migration to a full C++ implementation was significantly slower, reaching only around 70% of the original implementation’s performance. After some investigation, we found several causes:

  • RegExp.prototype.exec contains a couple of extremely performance-sensitive areas, most notably including the transition to the underlying RegExp engine, and construction of the RegExp result with its associated substring calls. For these, the JavaScript implementation relied on highly optimized pieces of code called 'stubs', written either in native assembly language or by hooking directly into the optimizing compiler pipeline. It is not possible to access these stubs from C++, and their runtime equivalents are significantly slower.
  • Accesses to properties such as RegExp's lastIndex can be expensive, possibly requiring lookups by name and traversal of the prototype chain. V8's optimizing compiler can often automatically replace such accesses with more efficient operations, while these cases would need to be handled explicitly in C++.
  • In C++, references to JavaScript objects must be wrapped in so-called Handles in order to cooperate with garbage collection. Handle management produces further overhead in comparison to the plain JavaScript implementation.

Our new design for the RegExp migration is based on the CodeStubAssembler, a mechanism that allows V8 developers to write platform-independent code which will later be translated into fast, platform-specific code by the same backend that is also used for the new optimizing compiler TurboFan. Using the CodeStubAssembler allows us to address all shortcomings of the initial C++ implementation. Stubs (such as the entry-point into the RegExp engine) can easily be called from the CodeStubAssembler. While fast property accesses still need to be explicitly implemented on so-called fast paths, such accesses are extremely efficient in the CodeStubAssembler. Handles simply do not exist outside of C++. And since the implementation now operates at a very low level, we can take further shortcuts such as skipping expensive result construction when it is not needed.

Results have been very positive. Our score on a substantial RegExp workload has improved by 15%, more than regaining our recent subclassing-related performance losses. Microbenchmarks (Figure 1) show improvements across the board, from 7% for RegExp.prototype.exec, up to 102% for RegExp.prototype[@@split].

Figure 1: RegExp speedup broken down by function.

So how can you, as a JavaScript developer, ensure that your RegExps are fast? If you are not interested in hooking into RegExp internals, make sure that neither the RegExp instance nor its prototype is modified in order to get the best performance:

var re = /./g;
re.exec('');  // Fast path.
re.new_property = 'slow';
RegExp.prototype.new_property = 'also slow';
re.exec('');  // Slow path.

And while RegExp subclassing may be quite useful at times, be aware that subclassed RegExp instances require more generic handling and thus take the slow path:

class SlowRegExp extends RegExp {}
new SlowRegExp(".", "g").exec('');  // Slow path.

The full RegExp migration will be available in V8 5.7.

Posted by Jakob Gruber, Regular Software Engineer

Wednesday, December 21, 2016

How V8 measures real-world performance

Over the last year the V8 team has developed a new methodology to measure and understand real-world JavaScript performance. We’ve used the insights that we gleaned from it to change how the V8 team makes JavaScript faster. Our new real-world focus represents a significant shift from our traditional performance focus. We’re confident that as we continue to apply this methodology in 2017, it will significantly improve users’ and developers’ ability to rely on predictable performance from V8 for real-world JavaScript in both Chrome and Node.js.

The old adage “what gets measured gets improved” is particularly true in the world of JavaScript virtual machine (VM) development. Choosing the right metrics to guide performance optimization is one of the most important things a VM team can do over time. The following timeline roughly illustrates how JavaScript benchmarking has evolved since the initial release of V8:

Evolution of JavaScript benchmarks.

Historically, V8 and other JavaScript engines have measured performance using synthetic benchmarks. Initially, VM developers used microbenchmarks like SunSpider and Kraken. As the browser market matured a second benchmarking era began, during which they used larger but nevertheless synthetic test suites such as Octane and JetStream.

Microbenchmarks and static test suites have a few benefits: they’re easy to bootstrap, simple to understand, and able to run in any browser, making comparative analysis easy. But this convenience comes with a number of downsides. Because they include a limited number of test cases, it is difficult to design benchmarks which accurately reflect the characteristics of the web at large. Moreover, benchmarks are usually updated infrequently; thus, they tend to have a hard time keeping up with new trends and patterns of JavaScript development in the wild. Finally, over the years VM authors explored every nook and cranny of the traditional benchmarks, and in the process they discovered and took advantage of opportunities to improve benchmark scores by shuffling around or even skipping externally unobservable work during benchmark execution. This kind of benchmark-score-driven improvement and over-optimizing for benchmarks doesn’t always provide much user- or developer-facing benefit, and history has shown that over the long-term it’s very difficult to make an “ungameable” synthetic benchmark.

Measuring real websites: WebPageReplay & Runtime Call Stats

Given an intuition that we were only seeing one part of the performance story with traditional static benchmarks, the V8 team set out to measure real-world performance by benchmarking the loading of actual websites. We wanted to measure use cases that reflected how end users actually browsed the web, so we decided to derive performance metrics from websites like Twitter, Facebook, and Google Maps. Using a piece of Chrome infrastructure called WebPageReplay we were able to record and replay page loads deterministically.

In tandem, we developed a tool called Runtime Call Stats which allowed us to profile how different JavaScript code stressed different V8 components. For the first time, we had the ability not only to test V8 changes easily against real websites, but to fully understand how and why V8 performed differently under different workloads.

We now monitor changes against a test suite of approximately 25 websites in order to guide V8 optimization. In addition to the aforementioned websites and others from the Alexa Top 100, we selected sites which were implemented using common frameworks (React, Polymer, Angular, Ember, and more), sites from a variety of different geographic locales, and sites or libraries whose development teams have collaborated with us, such as Wikipedia, Reddit, Twitter, and Webpack. We believe these 25 sites are representative of the web at large and that performance improvements to these sites will be directly reflected in similar speedups for sites being written today by JavaScript developers.

For an in-depth presentation about the development of our test suite of websites and Runtime Call Stats, see the BlinkOn 6 presentation on real-world performance. You can even run the Runtime Call Stats tool yourself.


Making a real difference

Analyzing these new, real-world performance metrics and comparing them to traditional benchmarks with Runtime Call Stats has also given us more insight into how various workloads stress V8 in different ways.

From these measurements, we discovered that Octane performance was actually a poor proxy for performance on the majority of our 25 tested websites. You can see this in the chart below: Octane’s color bar distribution is very different from that of any other workload, especially those for the real-world websites. When running Octane, V8’s bottleneck is often the execution of JavaScript code. However, most real-world websites instead stress V8’s parser and compiler. We realized that optimizations made for Octane often lacked impact on real-world web pages, and in some cases these optimizations made real-world websites slower.

Distribution of time running all of Octane, running the line-items of Speedometer and loading websites from our test suite on Chrome M57.

We also discovered that another benchmark was actually a better proxy for real websites. Speedometer, a WebKit benchmark that includes applications written in React, Angular, Ember, and other frameworks, demonstrated a very similar runtime profile to the 25 sites. Although no benchmark matches the fidelity of real web pages, we believe Speedometer does a better job of approximating the real-world workloads of modern JavaScript on the web than Octane.

Bottom line: A faster V8 for all

Over the course of the past year, the real-world website test suite and our Runtime Call Stats tool have allowed us to deliver V8 performance optimizations that speed up page loads across the board by an average of 10-20%. Given the historical focus on optimizing page load across Chrome, a double-digit improvement to the metric in 2016 is a significant achievement. The same optimizations also improved our score on Speedometer by 20-30%.

These performance improvements should be reflected in other sites written by web developers using modern frameworks and similar patterns of JavaScript. Our improvements to builtins such as Object.create and Function.prototype.bind, optimizations around the object factory pattern, work on V8’s inline caches, and ongoing parser improvements are intended to be generally applicable improvements to overlooked areas of JavaScript used by all developers, not just the representative sites we track.
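
One concrete instance of such a pattern is the object factory built on Object.create, which many libraries and applications use in place of classes. The following is a sketch of the pattern itself, not code from any particular site:

// A shared prototype holding the methods:
const greeterProto = {
  greet() {
    return 'Hello, ' + this.name;
  }
};

// The factory hits Object.create on every call:
function makeGreeter(name) {
  const greeter = Object.create(greeterProto);
  greeter.name = name;
  return greeter;
}

console.log(makeGreeter('V8').greet()); // 'Hello, V8'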

We plan to expand our usage of real websites to guide V8 performance work. Stay tuned for more insights about benchmarks and script performance.

Posted by the V8 team

Thursday, December 15, 2016

V8 ❤️ Node.js

Node's popularity has been growing steadily over the last few years, and we have been working to make Node better. This blog post highlights some of the recent efforts in V8 and DevTools.


Debug Node.js in DevTools

You can now debug Node applications using the Chrome developer tools. The Chrome DevTools Team moved the source code that implements the debugging protocol from Chromium to V8, thereby making it easier for Node Core to stay up to date with the debugger sources and dependencies. Other browser vendors and IDEs use the Chrome debugging protocol as well, collectively improving the developer experience when working with Node.

ES6 Speed-ups

We are working hard on making V8 faster than ever. A lot of our recent performance work centers around ES6 features, including promises, generators, destructuring, and rest/spread operators. Because the versions of V8 in Node 6.2 and onwards fully support ES6, Node developers can use new language features "natively", without polyfills. This means that Node developers are often the first to benefit from ES6 performance improvements. Similarly, they are often the first to recognize performance regressions. Thanks to an attentive Node community, we discovered and fixed a number of regressions, including performance issues with instanceof, buffer.length, long argument lists, and let/const.

Fixes for Node.js vm module and REPL coming

The vm module has had some long-standing limitations. In order to address these issues properly, we have extended the V8 API to implement more intuitive behavior. We are excited to announce that the vm module improvements are one of the projects we’re supporting as mentors in Outreachy for the Node Foundation. We hope to see additional progress on this project and others in the near future.

Async/await

With async functions, you can drastically simplify asynchronous code by rewriting program flow to await promises sequentially. Async/await will land in Node with the next V8 update. Our recent work on improving the performance of promises and generators has helped make async functions fast. On a related note, we are also working on providing promise hooks, a set of introspection APIs needed for the Node AsyncHook API.

Want to try Bleeding Edge Node.js?

If you’re excited to test the newest V8 features in Node and don’t mind using bleeding-edge, unstable software, you can try out our integration branch here. V8 is continuously integrated into that branch before it hits Node master, so we can catch issues early. Be warned though: this is more experimental than Node master.

Posted by Franziska Hinkelmann, Node Monkey Patcher