Browser and Node integration concepts


I’ve been able to get a sample application working with atom-shell.

What I would like to do is create a very robust uploader (e.g. it will continue even after a system reboot) that synchronizes a user-selected folder of files with a backend service.

Ideally, I would like to separate the browser side, where the source directory and destination location are selected, from the upload continuation logic (e.g. lazily reading the file data for files that have not yet been uploaded once uploading starts).

1) Problem 1 - interaction paradigm with the browser/node

I found this article which implies that IPC should be used to communicate between the node entry application and the browser window: Inconsistency between node and atom-shell behavior

I am trying to keep the browser code separate from the node filesystem integration. What is the best way to decouple these aspects in atom-shell? Is RESTful HTTP using express possible/recommended, or should IPC or something else be used? Or are most atom-shell applications built as a hybrid browser/node approach without this separation, rather than MVC?

2) Problem 2 - constraints and integration paradigm with the file system

I’ve been Googling to try to understand the paradigm for how to interact between the browser and file system code.

I found this article which implies there is some level of sandboxing and maybe that files are read-only:

I found this article that says there is a way to know exactly where the file is on the file system using the path variable (docs/api/

Is there some sort of sandboxing of applications with the atom-shell where there would be a discontinuity between what files can be selected in the browser process and the ones that can be accessed by the node process via the path variable? If so, I was thinking about copying the file data into a cache inside the sandbox from the browser using base64 or some equivalent format, but is the file system access using Node’s fs module read-only?
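For reference, the caching fallback I have in mind would be something like this (the helper names are hypothetical, and this is only worth doing if direct path access turns out to be sandboxed): the browser side hands the node side the file contents as base64 strings instead of a path.

```javascript
// Hypothetical in-memory cache: the browser side stores file bytes as base64,
// the node side decodes them back to a Buffer for upload.
var cache = {};

function storeBase64(name, buffer) {
  cache[name] = buffer.toString('base64');
}

function loadBase64(name) {
  return Buffer.from(cache[name], 'base64');
}

storeBase64('photo.jpg', Buffer.from('raw image bytes'));
console.log(loadBase64('photo.jpg').toString()); // 'raw image bytes'
```

Base64 inflates the payload by about a third, so this would only be a last resort if the `path` property doesn't give real filesystem access.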

3) Problem 3 - using pre-compiled JavaScript in the browser

We use webpack to compile css and html into JavaScript in our traditional web application and bower to distribute our client modules. Removing this workflow is possible, but ideally, we would like our atom-shell and browser applications to use a single workflow.

When I include the pre-compiled JavaScript files using a standard `<script>` tag with a `src` attribute, I lose access to the node context: in an inline script tag with no `src` attribute, `process.env` has the Node environment, but when the same code is loaded via a `src` attribute, `process.env` is an empty object.

Does anyone know the reason why the process.env is different depending on the way the JavaScript is loaded into the page? Does anyone have workflow recommendations for using a compilation process (CS -> JS, Jade -> HTML, Stylus -> CSS) with atom-shell?

Thank you everyone for your help. I’d be happy to write this up for atom-shell for their docs.


I’ve just been doing some experimentation with atom-shell myself the past few days …

From what I understand (though I admit I may be wrong; the code I've been working with suggests otherwise), there is the “browser” process and the “client” process. The browser process is the part that is started when you launch an atom-shell application from the command line. It is the part that interacts with the OS UI APIs to create the native window (see the atom-shell BrowserWindow class). When the native window is created, the web page is loaded into it (see BrowserWindow::loadUrl) … which starts the client process. To my understanding, these are separate OS-level processes, so you must use IPC to communicate between them. (See the ipc and remote libraries.)
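To make that boundary concrete, here's a hedged sketch of one way to route messages between the two processes: keep the actual logic in plain, transport-agnostic handlers, and wire them to IPC (or express) in one line. The atom-shell-specific wiring is shown in comments since it only runs inside atom-shell, and all channel and handler names are hypothetical:

```javascript
// Transport-agnostic command dispatcher: the web page sends {cmd, args}
// messages and doesn't care whether they travel over atom-shell's ipc
// module or over HTTP.
var handlers = {
  startUpload: function (args) { return 'uploading ' + args.folder; },
  queryStatus: function ()     { return 'idle'; }
};

function dispatch(message) {
  var handler = handlers[message.cmd];
  if (!handler) throw new Error('unknown command: ' + message.cmd);
  return handler(message.args);
}

// Wire-up is then one line per transport, e.g. with ipc in the browser process:
//   require('ipc').on('command', function (event, msg) { event.returnValue = dispatch(msg); });
// or with express:
//   app.post('/command', function (req, res) { res.json(dispatch(req.body)); });

console.log(dispatch({ cmd: 'startUpload', args: { folder: '/photos' } }));
```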

The flip side of all of this is that Node is available to both processes. See this quote from the atom-shell api Synopsis document:

The web page is no different than a normal web page, except for the extra ability to use node modules:

This includes the fs module and unfettered access to the file system. From what I’ve been able to see, all of Atom, beyond some things that need access to OS-specific UI APIs, runs inside the client process … the web page:

The AtomWindow class uses BrowserWindow to create the native window and load the web page:

As you can see here, the web page is static/index.html:

static/index.html leads to static/index.js:

Which in turn loads the “bootstrap script”:

Which is what creates the atom global that is available to all Atom API and package code:

Unless the browser side has its own copy of the atom global (which I haven’t been able to find), this means that:

  1. All Atom package code runs inside the web page
  2. Atom packages write to the file system without restriction
  3. There is no sandboxing of the web page in Atom applications (with regards to the file system)

In short: there is no need to use the browser process to access the file system, and nothing forces you to use IPC as a mediator in that case.


@leedohm thank you for looking into this!

For 2), I’ve been able to successfully confirm I can load files from anywhere on a Mac meaning no sandboxing (at least with a non-AppStore distributed application). Thanks!

For 3), I found that webpack supports atom-shell using the configuration option `target: 'atom'`. If anyone is interested:

For 1), I’m wondering what the best paradigm is for atom-shell apps in order to share code since we are already using bower, webpack, and RESTful HTTP JSON to connect to our Node services. If people who have played with various approaches have some battle-tested, best practices for structuring an atom-shell app, I’d love to hear what they have done to keep things consistent.

Any recommendations on 1) would be much appreciated!