Composing SpacePen views with React


I have stumbled upon a tricky use case with React/SpacePen interop. Specifically, I am trying to make a React-based autocomplete component. I have made a lot of progress so far, but now I am having trouble wiring up the keybindings.

The official autocomplete package from Atom uses the built-in SelectListView, which does not use React. It embeds an EditorView as a subview (this happens in its @content method). It appears to work with either the SpacePen-based editor view or the React-based one as that EditorView: both are subclasses of SpacePen's View, which is why either can be plugged in as a subview of SelectListView.

In SelectListView::initialize(), it adds listeners for higher-level events, such as core:move-up. This is attractive because it avoids requiring the package author to create custom keybindings for the same concept. Note that the author of autocomplete-plus does exactly that in its keybindings/autocomplete-plus.cson file:

".autocomplete-plus input.hidden-input":
  "tab": "autocomplete-plus:confirm"
  "down": "autocomplete-plus:select-next"
  "ctrl-n": "autocomplete-plus:select-next"
  "up": "autocomplete-plus:select-previous"
  "ctrl-p": "autocomplete-plus:select-previous"
  "escape": "autocomplete-plus:cancel"
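
For reference, the SelectListView side of this looks roughly like the following. This is a sketch from memory, not the actual Atom source; the handler names (selectPreviousItemView and friends) are assumptions:

```coffeescript
# Sketch: a SpacePen view listening for abstract core:* commands instead of
# raw keystrokes, so user-remapped keybindings keep working automatically.
{View} = require 'space-pen'

class MyListView extends View
  initialize: ->
    @on 'core:move-up', => @selectPreviousItemView()    # assumed handler name
    @on 'core:move-down', => @selectNextItemView()      # assumed handler name
    @on 'core:confirm', => @confirmSelection()          # assumed handler name
    @on 'core:cancel', => @cancel()
```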

This is unfortunate because if a user adds her own keybinding for core:move-up, then it will not work when she tries to move upward in the list for an autocomplete-plus widget, even though it will work everywhere else in Atom.

I was wondering why someone would set things up this way, so I looked at autocomplete-plus’s equivalent to SelectListView. Instead of using an EditorView to catch keyboard events, it uses an ordinary hidden <input> element. Presumably it cannot register a listener for events like core:move-up there.

In summary, I would like to be able to listen for keyboard events that are mapped to core:move-up without using SpacePen. My current thought is to do something like:

@content: ->
  @div class: 'dummy-parent', =>
    @subview 'filterEditorView', new EditorView(mini: true)
    @div class: 'parent-for-react-component'

such that my View can do @on 'core:move-up', => @myHandler(), while I also render into .parent-for-react-component. This seems like a bit of a hack, though.
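
Concretely, the hack I have in mind would look something like this (a sketch only: AutocompleteList is a hypothetical React component, and the exact mount call depends on which React build is in play):

```coffeescript
# Sketch: let the SpacePen View own the command handling, and mount a
# React component into the placeholder div once the view is attached.
initialize: ->
  @on 'core:move-up', => @selectPrevious()    # assumed handler name

attached: ->
  # AutocompleteList is hypothetical; renderComponent was the mounting
  # call in React builds of this era.
  React.renderComponent(
    AutocompleteList(items: @items),
    @find('.parent-for-react-component')[0]
  )
```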

I know that in Moving to React, there is a stated goal of “exploring some ideas for making Atom more view-framework-agnostic for packages,” but I’m not sure if it’s practical to expect multiple view systems to coexist. There is already some friction between React and SpacePen when it comes to how events are handled. I’m curious what the current thinking is, and whether there’s an existing solution for a SpacePen -> React -> SpacePen component hierarchy.


Also, it’s a little weird that the autocomplete widget displays a text input at the top. Is this something that was necessary to trap keyboard events, or was this a conscious UI decision? Note that autocomplete-plus (and most autocomplete widgets in modern IDEs) do not have such an input: the text the user types appears in-place in the buffer.


I might get flamed for this, but here are a few thoughts.

  1. I haven’t had much success with autocomplete in Atom while using vim-mode. This might just be me; I might have something set up wrong.

  2. I’ve seen one or two packages try to use hidden inputs as a backdoor for input. An alternative is to register key commands for every keyboard character a user might enter in a given state. This might not be workable for autocomplete (I haven’t thought it through fully), but it worked well for Jumpy. One thing I do wrong and need to improve: if you hit cmd-shift-P, the command palette shows all of those commands. But you can create them dynamically, which is something one could do for autocomplete, for example.
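
To make that concrete, the dynamic registration could be sketched like this (the keymap API has changed across Atom versions, so atom.keymaps.add and the selector here are assumptions; my-package is a placeholder name):

```coffeescript
# Sketch: bind every lowercase letter to a package command while the
# widget is active, keyed off an "active" class added to the workspace.
bindings = {}
for char in 'abcdefghijklmnopqrstuvwxyz'
  bindings[char] = "my-package:type-#{char}"

# Assumed API: add a named, removable set of keybindings at runtime.
atom.keymaps.add 'my-package-dynamic',
  '.workspace.my-package-active': bindings
```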

Swapping keymap objects was rather fast, performance-wise, from what I saw. You can store cloned versions and restore previous sets. I know it’s a bit of a dark art and not ideal; at the time I found I had to do it to win over events like ‘g g’ in vim-mode (chorded keymaps, etc.).

Anyway, these are just some thoughts to take with a grain of salt. Of course, the normal way of introducing a ‘mode’ is to tack a class onto the editor and use that class in the keymap selector’s specificity chain.
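
For example, in the same CSON style as the snippet above (my-package is a placeholder name):

```cson
# Keybindings that only apply while the editor carries the "mode" class.
".editor.my-package-active":
  "escape": "my-package:cancel"
  "enter": "my-package:confirm"
```

Toggling the mode is then just adding and removing my-package-active on the editor element.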