The next episode

Tue, 19 Apr 2016 21:38:06 GMT

Just over seven years after joining Mozilla Messaging, I start a new job later this month. I am thankful for Alex's introduction to David that led to the Mozilla job.

I moved up to Vancouver to experiment with web-based messaging. That led to a few experiments at Mozilla, most recently the Firefox OS email app front end. However, all the experiments I worked on at Mozilla were eventually shut down. It is time for me to try working on a different kind of project.

I will be cheering on the efforts at Mozilla to improve the browser and integrate web content better with native platforms via features like service workers and web manifests. I would like to see first class placement of web experiences on native platforms that work well with background updates and offline use. A Mozilla-infused Android distribution with web content front and center, some secure messaging, and ties to local communities would be neat to see.

The web rendering developments in Servo are exciting too. Do you want to learn a neat language and help them out? Check out the first bugs list. The Rust and Servo communities are really great.

While I appreciate the value of platform work, it is more work for me than the flow I feel when building web apps. So I am off to work on a web app that helps educators organize their classes. I still plan to do some light open source work on my own time, and make sure I keep up maintenance releases for RequireJS-related tools. But I will be busy with an exciting new job, and I will not have a lot of leftover energy for a while. I will be staying in Vancouver.

Thank you Mozilla, and the people I worked with, for seven years. It changed my life. I recommend working at Mozilla particularly if you are a platform or systems developer. Or, just contributing in some way to support the great work of the Mozilla Foundation.

RequireJS 2.2, alameda 1.0 released

Thu, 17 Mar 2016 19:22:22 GMT

RequireJS has a new minor version rev, prompted by a couple things:

1) The RequireJS project has transitioned over to the jQuery Foundation. New legal stuff for the copyright, and now just an MIT license instead of the MIT and BSD 2 dual license. According to the legal advice, MIT on its own is more permissive than BSD 2, so it should be a superset of all the possible use cases where BSD 2 might be used in the consuming project. Same permissible, free use as before, just simpler on the license front.

If you want to contribute to the codebase, you can sign the jQuery Foundation CLA, which is generally useful. It clears the path for contributing to all of the other great jQuery Foundation projects.

2) Some larger changes for some new features. It should all be backwards compatible, but with the addition of the new features, it felt appropriate to indicate this was more than a patch version release.

This is not the same 2.2 release that I had in the planning phases for a couple of years. It is just a recalibration based on existing needs, and a desire to better follow semver.

Notable RequireJS loader changes

urlArgs can be a function

This allows finer tuning of querystring arguments, for things like different version strings for different files.
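
A minimal sketch of the function form, assuming it receives the module ID and the URL; the version values here are made up for illustration:

requirejs.config({
  urlArgs: function(id, url) {
    // Hypothetical: give vendor code a different cache-busting
    // version than app code.
    var args = id.indexOf('vendor/') === 0 ? 'v=1.0.0' : 'v=2.2.0';

    // Use '?' or '&' depending on whether the URL already
    // has a querystring.
    return (url.indexOf('?') === -1 ? '?' : '&') + args;
  }
});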

plugin IDs for data-main

This sort of data-main value works now:

<script src="require.js" data-main="lib/bootstrap!app/main"></script>

This assumes the baseUrl is the directory that holds the HTML document, in order to find the loader plugin.

Notable r.js optimizer changes

Faster!

Thanks to the great work by @petersondrew, build times are noticeably faster. Your mileage will vary depending on the type of project, but the r.js test suite runs almost twice as fast under Node.

Uglify2 is now the default minifier

Before 2.2, optimize: 'uglify' used UglifyJS 1.3.x, while 'uglify2' used UglifyJS 2. This was a result of the optimizer being around a long time, spanning UglifyJS's transition to its version 2 codebase. Now that version 2 is the standard UglifyJS codebase, the use of version 1 as a default was confusing.

So version 1 was removed from the optimizer, and now just version 2 is used. No need to update your build config: if you use uglify2 as the optimize value, it will still work, and the uglify value now uses version 2 too.

Generate bundles config

If you want to use bundles config, the optimizer can now generate the bundle config and insert it into a requirejs.config() call via the bundlesConfigOutFile config option.
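
A minimal sketch of what that could look like in a build profile; the module names and paths here are made up, and only bundlesConfigOutFile is the new piece:

({
  baseUrl: 'lib',
  dir: 'build',
  modules: [
    { name: 'app/main' },
    { name: 'app/secondary', exclude: ['app/main'] }
  ],
  // Insert the generated bundles config into the
  // requirejs.config() call found in this file.
  bundlesConfigOutFile: 'app/main.js'
})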

Alternatives

I am mostly trying to keep the RequireJS codebase fairly static, no big changes.

If you do want to use a loader that has a more modern, smaller codebase, try alameda. That is the codebase I would like to maintain for an AMD loader going forward, and where I would consider larger structural changes.

If you want to do fancier custom build processes, look at amodro-trace. It provides a programmatic API for use in Node programs for just tracing and normalizing AMD modules.

Past and future

The primordial parts of the RequireJS loader started to take form in September 2009, with many usable releases through October 2011, when 1.0 was tagged. It is now 2016.

So, 6 & 1/2 years. I was hoping the need for these userland loaders would only last a few years, with a native loader and module system then making them obsolete. Funny how our desires for the future often outpace reality.

I am still hopeful that will happen at some point, so I am not investing a lot of energy in reimagining how an AMD loader might work. I would like to see a native module system actually ship before seeing if it makes sense to make new tools around it.

That said, I like the smaller, modern core of alameda. I look forward to using it if a project calls for an AMD loader, although I am happy to work on projects that would not require it.

I want to keep the RequireJS codebase fairly stable. I still expect to do occasional patch version releases for RequireJS, maybe 2-3 a year, particularly since alameda has a higher barrier to use: promise support in the browser and IE10+ for script loading. As time moves forward though, those constraints should be less of a concern.

I will not post here about every point release for the loaders. These places track the releases:

Thanks for your time and use of the libraries. The goal for them was to be useful and to help inform the future. They have lived longer than I would have wagered at the beginning. Here's to hoping they are not around for another 6 & 1/2 years! Please hasten their retirement by finishing a native JS module system!

Template strings, components, models and events

Wed, 20 Jan 2016 06:26:35 GMT

This is a shorter, rougher post to outline a recent experiment around UI construction. It is a bit high level, and sets some context around the htemplate module.

Some nice things about the React-related world that I wanted to try for a work project:

  1. A component system.
  2. Grouping the DOM building with the JS code that knows about the state (In React, this is the render method, using React.createElement or JSX).
  3. From Flux, enforcing visual updates to be triggered from model changes, not from other visual components.

However, because of cultural constraints and an interest in keeping the up front cost small and the stack small and shallow, I wanted to avoid a virtual DOM implementation and a transpiler.

What I ended up using:

  1. Custom elements. This makes sense for the work project since we want to test out the custom element API and help find bugs. Custom elements are enabled by default for the project, so no special shims or adapters are necessary.

  2. htemplate: uses tagged template strings to allow grouping the state logic in JS with building up the DOM. I get to use ES2015 without needing any build transforms to translate it to ES5, and I wanted to avoid the JSX transforms.

  3. Adopt a cultural practice of calling the model API, then waiting for the model object to update, and binding the re-rendering of the custom element to events emitted from the model. The custom elements directly subscribe to the model to get changes instead of passing state down through components via a top level component.

In order to not take the cost of re-rendering stuff that has not changed, the UI would be broken down into smaller custom elements that listened for fine-grained model updates.

This is possible because we have a front end object, called model, which sits in front of the core backend model API. If we notice that we need to group or constrain model update events to help scope visual updates, we have a place to do that.

A similar idea is behind GraphQL and Falcor, but this approach was done without a formal query language: construct a way to only see part of the whole model, and scope data update events to subsets. Model properties/event names were used as the scoping mechanism.

Custom element construction

This can be done in any number of ways, but I was already using element, so I just continued with it. However, I did not need to use the template module; instead, htemplate was used to construct the DOM within the components.

element supports building up the custom element prototype via mixins instead of inheritance, and if multiple mixins define methods for the custom element lifecycle, element will chain them together.

Model construction

model is an object that mixes in an event emitter. There are lots of choices for event emitters. I used evt because it supports a .latest() concept, where it will call the listener if there is a current value for the event property, and for any future updates.
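
A minimal sketch of the .latest() idea, as my own illustration rather than evt's actual implementation:

// A tiny emitter with a latest() concept: call the listener
// right away if the event property already has a value, then
// again on every future emit for that property.
var model = {
  _listeners: {},
  _values: {},

  on: function(name, fn) {
    (this._listeners[name] = this._listeners[name] || []).push(fn);
  },

  latest: function(name, fn) {
    if (this._values.hasOwnProperty(name)) {
      fn(this._values[name]);
    }
    this.on(name, fn);
  },

  emit: function(name, value) {
    this._values[name] = value;
    (this._listeners[name] || []).forEach(function(fn) {
      fn(value);
    });
  }
};

// Called immediately if 'account' already has a value, and
// again for any future 'account' updates.
model.latest('account', function(account) {
  console.log('account is now:', account);
});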

Binding the model to the view

element mixins are used to bind the model updates to a render() method on the component. A sample of the end result:

{
  render: require('../base_render')(['accounts', 'folders'], function(html) {
    var currentAccount = this.model.account;
    if (!currentAccount) {
      return;
    }

    html`
    <a data-prop="accountHeader"
       data-dclick="toggleAccounts"
       class="fld-acct-header closed" role="region">
      <span class="fld-acct-header-account-label">${currentAccount.name}</span>
      <span class="fld-acct-header-account-header"
            data-l10n-id="drawer-accounts-header"></span>
      <span class="fld-account-switch-arrow"></span>
    </a>
    <div data-prop="fldAcctScrollInner" class="fld-acct-scrollinner">
      <div data-prop="fldAcctContainer" class="fld-acct-container">
        <!-- The list of accounts -->
        <div data-prop="accountContainer"
             data-dclick="onClickAccount"
             class="fld-accountlist-container collapsed">
        `;

        // Add DOM for each account.
        if (this.state.accounts) {
          this.state.accounts.items.forEach((account) => {
            // Highlight the account currently in use
            var selectedClass = this.model.account &&
                                this.model.account.id === account.id ?
                                'fld-account-selected' : '';

            html`
            <a class="fld-account-item ${selectedClass}"
               data-account-id="${account.id}">
              <span class="selected-indicator"></span>
              <span class="fld-account-name">${account.name}</span>
            </a>
            `;
          });
        }

        html`
        </div>
        <!-- The list of folders for the current account. -->
        <div data-prop="foldersContainer"
             data-dclick="onClickFolder"
             class="fld-folders-container">
        `;

        if (this.state.folders) {
          this.state.folders.items.forEach((folder) => {
            var extraClasses = [];

            if (!folder.selectable) {
              extraClasses.push('fld-folder-unselectable');
            }

            var depthIdx = Math.min(FOLDER_DEPTH_CLASSES.length - 1,
                                    folder.depth);
            extraClasses.push(FOLDER_DEPTH_CLASSES[depthIdx]);
            if (depthIdx > 0) {
              extraClasses.push('fld-folder-depthnonzero');
            }

            if (folder === this.model.folder) {
              extraClasses.push('fld-folder-selected');
            }

            html`
            <a class="fld-folder-item ${extraClasses.join(' ')}"
               data-type="${folder.type}"
               data-folder-id="${folder.id}">
              <span class="selected-indicator"></span>
              <span dir="auto"
                    class="fld-folder-name">${folder.name}</span>
              <span class="fld-folder-unread"></span>
            </a>
            `;
          });
        }

        html`
        </div>
      </div>
    </div>
    `;
  })
}

If the accounts or folders update frequently, then I could create smaller custom elements that focus on displaying those pieces based on the individual model property changes. However, for the time that this UI is shown, those model properties will rarely change, so inlining the work to show both lists in one component is fine.

htemplate notes

htemplate supports passing non-string values to sub-elements by setting a property on the sub-element instead of setting a string attribute. This is useful to pass down objects, like the model object, to sub-elements. More details in the Property setting section of the htemplate readme.

Editor support for syntax highlighting HTML inside tagged template strings helps with the string legibility. It would be great to get more editors supporting it, since I expect the pattern to become more common as tagged template strings get more visibility. Here are a couple of issues tracking that for different editors:

If you can, help your editor improve the display of HTML in template strings.

Summary

I like the feel of it so far. As with most technology choices, it is about the tradeoffs you can accept.

I am fortunate to be able to use ES2015 and custom elements natively, which is not an option for many projects. It is fun to play in the future natively, and I am excited to see those pieces become widely available across browsers.

Not using a virtual DOM implementation requires more thought on the rate of updates for a component. Instead of just letting a React render pass sort out the details, the rate of model update events should be considered more, possibly by creating smaller components that care about finer grained model updates.

A virtual DOM can allow the developer to be more carefree about this, at the possible cost of React creating a larger set of internal objects to do a diff when the model changes only affect a small portion of the UI.

There are some cases where I do not want to just blast the innerHTML of the custom element on every model update. For instance, a CSS animated spinner that is activated by a class change to an element. In that case, I do not want to reset the innerHTML, as the animating spinner would appear to jump around and reset. In those cases the custom element may decide to check if the existing DOM has the class set correctly instead of always resetting the innerHTML: a more manual diff model.
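
A minimal sketch of that kind of manual check; the element and class names here are made up for illustration:

// Only touch the class, and leave the rest of the DOM alone
// so the CSS animation does not restart.
var spinner = this.querySelector('.sync-spinner');
var shouldSpin = !!this.model.syncing;

if (spinner.classList.contains('spinning') !== shouldSpin) {
  spinner.classList.toggle('spinning', shouldSpin);
}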

On the flip side, those cases are small and scoped, and the overall bootstrap code size of the project is smaller, with less build machinery in place.

So, tradeoffs. To be clear, the React ecosystem has a lot to offer, but it has been fun exploring an alternate approach inspired by some aspects of it but with different tradeoffs.

amodro-trace and AMD loaders

Thu, 09 Apr 2015 20:26:35 GMT

A new tool, and some AMD loader rambling:

I have started a new project around AMD modules, amodro-trace. It is a tool that understands AMD modules and is meant to be used in other node-based build systems. The README has more background, but the general use cases that drove it:

Think of amodro-trace as a lower level imperative tool that something like the requirejs optimizer could use to implement its declarative API.

amodro-trace comes from some code in the requirejs optimizer, and has some smaller unit tests. I ran it over a larger project, but I still expect to fine tune some things around the API and operation, so feel free to give feedback in the issues list if the use cases fit your needs but you have trouble using it.

I would also like to construct a new AMD loader, something that assumes more modern browsers and can improve on some things learned from requirejs.

I do not expect requirejs to go away any time soon, and it will still be my recommendation for general AMD loading across a wide set of browsers. There will still be maintenance releases, but I expect any new work that non-trivially modifies behavior to be done under a new name. This helps set stable expectations, particularly for tools that have been built on top of requirejs.

I still want to explore some things with AMD loaders though, particularly since an operational ES module system is still far off, and transpilers that guess at ES module syntax still benefit from good AMD loader options to back them.

AMD loader options

First, a bit about some AMD loader options that I have worked on. The nice thing about AMD modules is that there are more options besides this set, and other tooling around them. This is just about where and how I have spent my time in this space.

amodro loader

For a new AMD loader, I am thinking of putting it under the amodro (pronounced a-mo-dro) name. amodro-trace is the start of what I would see as its equivalent of the requirejs optimizer piece. amodro-trace currently uses requirejs under the hood for module tracing, but ideally that would migrate over time to a new loader.

I would not want to modify any of the AMD APIs for declaring a module or for the dynamic require calls. So no changes in module syntax to allow the most reuse of existing AMD modules.

However, I want to rethink some of the loader APIs and loader plugin APIs to do something like what an older draft of the ES-related loader had for a module lifecycle: normalize, locate, fetch, translate, instantiate. The loader plugin API as supported in requirejs-like loaders is not as granular, and supporting a more granular API would help with some issues that have come up with loader plugins to date: it can be hard to break cycles for some loader plugins, and they can make building more complicated.
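
As a rough sketch of that granular shape, using the hook names from that older loader draft with made-up function bodies:

// Hypothetical loader extension built on granular lifecycle
// hooks, instead of AMD's coarser normalize/load pair.
var textExtension = {
  normalize: function(id, refererId) {
    // Resolve relative IDs against the referring module here.
    return id;
  },
  locate: function(id) {
    // Map the module ID to a URL.
    return id + '.txt';
  },
  fetch: function(url) {
    // Async fetch of the source text.
    return window.fetch(url).then(function(response) {
      return response.text();
    });
  },
  translate: function(source) {
    // A transpiler would rewrite the source here.
    return source;
  },
  instantiate: function(source) {
    // Produce the module's export value.
    return { exportValue: source };
  }
};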

The module loader mentioned above makes an attempt at that sort of solution for loader plugins, and it works out well. There is a good chance existing loader plugins would still work too since their APIs can be seen as a coarser API that could be supported by the more granular API. Still a bit of work to be done there, but it seems promising.

So I expect amodro would be like the module loader, but designed to work with the AMD APIs instead of the module API in that loader, and probably using some of the alameda ideas too.

I may not get to it though. Just sharing my thoughts around loader work. I have a day job that I really like, and we are doing some interesting work. There are some (non-loader) ideas I want to implement there, and I am excited to try out service workers in that context.

The Dojo folks are also thinking about this space, as are John Hann and Tim Branyen, so other options may come out of their efforts too. It is good to have options.

End result: there is more in this space worth pursuing.

More convention over configuration

For AMD projects in general, and something that does not depend on any new loader work:

We can help improve the perception of difficulties with configuration by starting to advocate more for standard project layouts that avoid big configuration blocks for the loader. Effort in this space would likely benefit an ES module solution too, as it will need to operate in the same async network space that AMD modules operate in.

To me, that means using a starting project layout that looks like this sample project. The lib directory could be a node_modules or bower_components directory.

adapt-pkg-main can be used after an npm/bower install to fix up the installed dependency to work with the file layout convention that works best for general web module loading, without running into CORS or 404 issues.

Then hopefully the package managers get better about these file layouts over time (maybe absorb what adapt-pkg-main does), and in the case of npm, remove some sharp edges for front end development.

Summary

You might try amodro-trace if its use cases fit your needs. While it comes from code that has had a good amount of testing, it is still a new approach and may have some bugs, so I am keeping the version low for now. However, it is the kind of AMD build tool I would like to support longer term: provide a primitive focused solely on AMD tracing and normalization so that others can build on top of it.

The requirejs optimizer was built at a time when node was not a thing yet, and more batteries needed to be included for command line JavaScript tooling. It has been a good approach for the requirejs optimizer: it runs in node, Nashorn, Rhino, xpcshell and even in the browser. It gives a bunch of communities a chance at some good AMD-based optimization options.

However, I do not expect to keep pace with all the possible variations in build tool styles with the requirejs optimizer's more declarative options-based approach. amodro-trace should be helpful for those cases.

Here's to more AMD loaders and tools for the future!

How to know when ES modules are done

Fri, 13 Feb 2015 21:49:08 GMT

There are a few pieces of a module system that need to be available for it to be fully functional. I will describe them here and talk a bit about where ECMAScript (ES) modules seem to be at the moment, from an outside public perspective.

I am not on TC-39, the committee that works on the ES language specification (the language otherwise known as JavaScript, or JS). I am just someone who has worked on a few JS module systems.

This is a long piece. A table of contents for the top level sections:

  Module system pieces
  Interlocking pieces
  Where are we now?
  Hazards on the way to done
  Summary

Module system pieces

There are three main pieces of a module system:

  1. Static module definition.
  2. Execution-time module capabilities.
  3. Module loader.

Some might argue that these pieces are separable and could be specified by different standards groups. So an "ES module system" may not be the right term, as ES may only specify one or two pieces.

For me, they are all part of a coherent module system, so I will be referring to the future direction for them as the "ES module system", even if the URLs for each specification end up on different domains.

Static module definition

This is how you statically declare a piece of code as a module with dependencies. In this context, static means the dependencies do not change depending on the execution environment. Static dependencies can be parsed out of a module without actually running the module in a JS environment; the loader just needs to parse the text of the module to find them.

In AMD modules, it looks like this:

define(function(require, exports, module) {
  // Statically parsable dependencies.
  var glow = require('glow'),
      add = require('math').add;
});

In CommonJS and Node (for shorthand's sake referred to as "CJS" for the rest of this post), there is a similar idea, just without the define() wrapper.

It is a bit more nuanced in CJS systems: the require(StringLiteral) calls are not parsed prior to execution, which is one of the major reasons that format is not fully suitable for a full module system on the front end, where async networking is involved. You can get some front end functionality by using something like browserify or webpack to do the static search for dependencies, but just for bundling. That is fine enough for libraries, but it starts to break down at the app level, where you want to incrementally load functionality as the user goes to use it, such as with a dynamic router.

In ES, it looks like this currently:

// Statically parsable dependencies.
import glow from 'glow';
import { add } from 'math';

ES also statically indicates the named export keys:

// Statically parsable dependencies.
import glow from 'glow';
import { add } from 'math';

// Statically indicate this module will have 'default'
// and 'other' export keys.
export default function() {};
export function other() {};

While this helps statically match up the keys given to the exported values with the ones used in import statements, the export value itself is not statically exported; only its name is statically indicated.

For AMD/CJS systems, there really is just one exported value per module, but it could be an object with multiple properties. There is no static analysis of the export value in those systems.

This part of the ES module system is the piece that is the most specified at the moment.

Inline modules

However, the ES system does not allow for what will be called "inline modules" for the purposes of this post. Inline modules are just the ability to statically declare more than one module in a file. This is commonly used for bundling modules together, but has other purposes.

In AMD, those are just named define()s:

define('glow', function(require, exports, module) {
  return function glow() {
  };
});

define('app', function(require, exports, module) {
  // Statically declare dependencies.
  var glow = require('glow'),
      add = require('math').add;
});

For CJS, there are conventions for doing this via tools like browserify and webpack, but they are much less declarative. The module IDs are converted to array indices/numbers, which makes dynamic module loading harder.

For ES there is nothing for this. The last I heard, the hope was for capabilities like HTTP2 and zip bundles so that no new language syntax is needed; however, I believe that is not sufficient.

In the AMD/CJS world, it has become more common to deal with nested groups of modules bundled together. An example would be some browserified base libraries that are then combined with some AMD modules in an app. The browserified ones have a conceptual inner module structure that should not be visible outside the module.

AMD and CJS do not do well with this right now. I have considered supporting something like this in my AMD loaders to allow for it:

define(function(require, exports, module, define) {
  // The define passed in here is a local
  // define for modules only visible to
  // this module.
});

There are some interesting characteristics around how to define the module this way when it can have async-resolved dependencies. That has been more fully explored in this module experiment, so I believe it can work.

The end result: I see modules now as units of code that can be nested, similar to how functions work, but with module ID strings instead of identifiers for names, and with exports that may be resolved asynchronously, so a bit of syntax is needed for that.

The other option I have heard for ES would be to compile down the module into ES5 code, and use the ES module loader lifecycle hooks to get that into an ES6 module loader.

That option looks like a leaky abstraction. In addition, the way ES6 imports are mutable slots, and the syntax around getting to the execution-time module capabilities, require some extra thought.

Execution-time module capabilities

There are some properties and capabilities that need to be exposed during the execution of a module. These cannot be statically determined; they are only known once the module is executing in a JS engine.

In the AMD world, the execution-time capabilities come in these forms:

  1. module.id and module.uri, via the local module object.
  2. The local require([String], function() {}) for dynamically loading code, and require.toUrl(String) for resolving paths.

In Node:

  1. module.id, along with __filename and __dirname.
  2. The dynamic require(String) for loading code, and require.resolve(String) for resolving paths.

(Synchronous return from that dynamic require is one of the reasons the CJS system is not the right fit for a general purpose front end module system in the browser.)

In ES, this piece is not formally specified yet. In the ES world, I believe this is referred to as the "module meta", if you come across that phrase. The most recent hint of how it might be done in ES looked something like:

import local from this;
console.log('Normalized module ID is: ' + local.id);
console.log('Normalized module URL is: ' + local.url);
local.import(aJsStringValue).then(function(someModule) {});

I am making up the names of the properties for id, url, and import. I am not sure what their real names will be, just that from this, or some from-based form, was being considered as the way to acquire this functionality.

Module loader

This is an API that runs at execution time. It kicks off module loading, allows ways to resolve module IDs to paths, handles the loading and proper execution order of the modules, and caches the module values.

In AMD, the main module loader API is require([String], function(e) {}). There is usually something like require for top-level, default loader loading, and each module can get its own local require. Some AMD loaders can create multiple module loader instances.

It is common for AMD loaders to support the idea of a loader plugin: a module that provides normalize and load methods that are plugged into the AMD loader's normalize and load lifecycle steps.

This allows extending the base loader to handle transpiled code without requiring plugins to be loaded up front, before main module loading starts.
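
For example, a minimal sketch of a loader plugin that loads text resources; the names here are illustrative, but the normalize/load shape is the standard AMD plugin API:

// A loader plugin is just a module. Used via IDs like
// 'text!some/thing.html'.
define(function() {
  return {
    normalize: function(name, normalize) {
      // Resolve any relative resource ID against the
      // referring module.
      return normalize(name);
    },

    load: function(name, req, onload, config) {
      var xhr = new XMLHttpRequest();
      // req.toUrl converts the resource ID to a URL.
      xhr.open('GET', req.toUrl(name), true);
      xhr.onload = function() {
        // Hand the resource value to the loader.
        onload(xhr.responseText);
      };
      xhr.onerror = function(err) {
        onload.error(err);
      };
      xhr.send();
    }
  };
});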

In CJS, require(String) is the main API to the module loader. There is a way to extend the loader capabilities via require('module')._extensions['.fileExtension'] = function() {}. This requires the extension to be installed before modules that depend on it are loaded. This works fine in Node's synchronous module execution environment, but does not translate to async loading in the browser.
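
A minimal sketch of that Node mechanism; the .text extension here is made up:

// Teach Node's require how to load .text files. This must run
// before any module that requires a .text file.
var fs = require('fs');

require('module')._extensions['.text'] = function(module, filename) {
  // A synchronous read is fine here because Node's module
  // execution is synchronous.
  module.exports = fs.readFileSync(filename, 'utf8');
};

var greeting = require('./greeting.text');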

For ES, this part is still being defined. There was a previous sketch for it, but it seems like that is being redone now. I do not feel it is useful to link to the current attempt at the sketch because it is incomplete, and they likely want to work on it themselves to get it in a more usable state before getting a lot of feedback about it.

The previous sketch did have the concept of a module loading lifecycle, and a way for userland code to plug in to that lifecycle, and I can see this concept carrying forward in some fashion:

  1. normalize
  2. locate
  3. fetch
  4. translate
  5. instantiate

The granularity of these steps is better than the ones in AMD loader plugins, which just have a concept of normalize and load. load is really locate, fetch, translate, and instantiate in one method. It would be good to have more granular steps.

However, there was no built in way in the ES loader to know how to load the hooks as part of normal module loading.

For AMD systems, module IDs of the form pluginId!resourceId mean the loader will load the module for pluginId, wire it into the loader lifecycle, then delegate to that plugin's lifecycle methods for IDs that begin with pluginId!.
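
For example, using the common text plugin; the template path is made up:

// Loading the 'text' plugin and wiring it into the lifecycle
// happens automatically as part of resolving this dependency.
define(['text!templates/header.html'], function(headerHtml) {
  document.querySelector('#header').innerHTML = headerHtml;
});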

That approach avoids a two-tiered loading system in a web page, where all the loader plugins are loaded first before continuing with the rest of module loading. The two-tiered approach is slower and breaks encapsulation: any package that used a loader plugin would need to somehow get the plugin registered in the correct loader instance up front. It also gets tricky if those loader plugins have regular JS module dependencies.

Interlocking pieces

While the three pieces of a module system could in some way be considered separate, they all have interlocking pieces, and those pieces need to fit well together.

Module IDs

The rules around module IDs need to be understood for the pieces to work well together. If someone is just working with the static module definition part and uses a plain path for the ID, that will likely conflict with the module loader part, since IDs should be separate string concepts from paths, to support conceptual string namespaces for things like loader plugins and packages that do not have direct path equivalents.

Loader extensions

This is tied a bit into the module ID coordination, but also involves module loader load order and how much a given module needs to know about how loader extensions (like transpilers) get wired into the system.

One option is to say that this is configured and wired up separately from the modules themselves, out of band, like via package config and some coordinated way to get the extensions registered with a loader up front. This breaks encapsulation though, and makes it hard for the plugins to use modules for their own dependencies. The loader plugin approach in AMD is a much saner way to go about it.

Execution-time module capabilities vs static module definition

In the ES sketch above, from this for the execution-time module capabilities is a specific language construct that needs to be built into the static module definition.

Loaders and execution-time module capabilities

The execution-time module capabilities also relate to methods on the module loader, like the capability to dynamically load code.

Where are we now?

I believe the plan for the ES6 spec is to just contain the static module definition piece, with the other bits specified in separate specifications coming later.

The trouble is people are starting to use the static module definition piece via transpilers, but without having the other interlocking pieces sorted out.

The transpilers often just compile down to AMD or CJS modules for actual use, and these have some differences from the likely final ES plan. The main issues are:

Module IDs are not sorted out

AMD has a stricter separation between module ID and path, whereas CJS as practiced in Node is more file path based. IDs really need to be different things from paths. For regular JS modules, there can be an easy, simple transform from ID to path, but the two need to be conceptually different.

Export models are different

The ES export model is different from AMD/CJS. In ES, all exports are named. The name default just gets some extra syntax sugar for import, but no sugar when the module is referenced via the execution-time module capabilities. Expect to be typing .default for that.
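
A small sketch of the difference, reusing the hypothetical local.import API from the module meta sketch above:

// Static import: sugar picks out the 'default' export.
import foo from 'foo';

// Execution-time loading: no sugar, you get the module
// object and its named keys, including 'default'.
local.import('foo').then(function(fooModule) {
  var foo = fooModule.default;
});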

AMD/CJS exports are really just a single export per module, but those systems are nice enough to create an export object if you want to use the exports.foo = '' form of adding properties to the export object.

No execution-time module capabilities

There is no ES specification for the execution-time module capabilities, so there is no way with the ES syntax and APIs to build a dynamic router. You will need to know the AMD/CJS system you are using underneath to do that part.

What is meant by "dynamic router"? A module that looks at a piece of runtime path information (typically a URL segment), translates it to a module ID for a view, and dynamically loads that view via module APIs (either require([varName]) in AMD or require(varName) in CJS).

Dynamic routers are really handy for avoiding loading all possible routes and views up front, which helps with performance.
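
A minimal AMD sketch of the idea; the route names and view modules are made up:

// Map URL hash segments to view module IDs.
var routes = {
  inbox: 'views/inbox',
  settings: 'views/settings'
};

function onHashChange() {
  var viewId = routes[location.hash.slice(1)] || 'views/home';

  // Dynamic, async require: only the view for the current
  // route gets loaded.
  require([viewId], function(View) {
    new View().render(document.querySelector('#main'));
  });
}

window.addEventListener('hashchange', onHashChange);
onHashChange();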

Using the module ID via module.id is useful in cases where there are global spaces, like the DOM, and the module wants to construct class names or DOM data that will be in that global space. Basing those values on the module ID helps scope selectors and data access for that module.
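
For example, a sketch of scoping a DOM class name by module ID; the ID-to-class transform is made up:

define(function(require, exports, module) {
  // A module.id of 'widgets/menu' becomes the class
  // 'widgets-menu', scoping selectors to this module.
  var scopeClass = module.id.replace(/\//g, '-');

  exports.attach = function(node) {
    node.classList.add(scopeClass);
  };
});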

No static definition to allow inline modules

This is a big missing piece in ES. Right now, expect to use AMD/CJS approaches here.

Hazards on the way to done

So, do not consider the ES module system done with the publication of the ES6 spec. It just has one part of the system, and in many ways the most straightforward piece. It is somewhat complicated by all the forms for export and import, but that was a design choice given TC-39's goals.

The real action comes with the module loader parts: if that is worked out, you might be able to skip the ES6 static definition parts.

So hopefully the other parts of the module system will come along. Some hazards to avoid on the way:

Summary

Making a module system for ES is hard, and it is not done yet. I wish the process to date had involved more dialog outside of TC-39. However, it seems like the people working on it are just not done with all the pieces, and I can appreciate that it is hard to talk about it until the fuller picture is worked out.

The unfortunate part for me is seeing people starting to use the ES6 static module definition and transpiling to ES5 module systems to ship code. I think it is just too early to do that.

In the grand tradition of languages that can transpile to JS, you can get something to work and ship code to users. You can use CoffeeScript too. So if you are having fun with the transpiling route, that is great. Just know the sharp edges.

You are adding another layer of abstraction on top, and in the case of modules, you will likely need to directly use or know the properties of the ES5 module system you are using underneath to get the full breadth of module system functionality.

For me, fewer layers of abstraction are better. I will be waiting until more of the ES pieces are defined and shown to work well together before considering them done and using it to ship code.