Writing specs for keybinding events


My keybinding structure is pretty simple.

In the keymap file there are two entries:

  'ctrl-up': 'my-package:do-something'
  'ctrl-down': 'my-package:do-something'

In the package activation:

activate: (state) ->
  atom.commands.add 'atom-text-editor', 'my-package:do-something': (event) => @doIt event

doIt: (event) ->
  directionUp = event.originalEvent.keyIdentifier == 'Up'
  if directionUp
    # handle the up-arrow behaviour here
  else
    # handle the down-arrow behaviour here

In my method I can then recognise which key triggered it and differentiate its behaviour accordingly.

Is there a proper way to test this? I was not able to trigger a keyboard event from the specs, so I had to do something like this:

it 'should test that', ->
  pkg = atom.packages.getActivePackage 'my-package'
  pkg.mainModule.doIt({originalEvent: {keyIdentifier: "Up"}})
  # test my stuff here
  pkg.mainModule.doIt({originalEvent: {keyIdentifier: "Down"}})
  # test other stuff here

Of course, this is not a proper test, since I’m not really checking that the method is called after a hotkey; I’m just calling it explicitly with an extra-lame mocked event.

Any suggestions?


To be clearer, the package is this, and the full spec (still not submitted) is this:

it 'should fetch the previous and next commands in the history', ->
  testEditor.insertText '1 + 2'
  triggerEvaluation ->
    pkg = atom.packages.getActivePackage 'atom-math'
    pkg.mainModule.navigateHistory({originalEvent: {keyIdentifier: "Up"}})
    expect(testEditor.lineTextForBufferRow(2)).toBe '1 + 2'
    pkg.mainModule.navigateHistory({originalEvent: {keyIdentifier: "Down"}})
    expect(testEditor.lineTextForBufferRow(2)).toBe ''
    testEditor.insertText '3 + 4'
    triggerEvaluation ->
      pkg.mainModule.navigateHistory({originalEvent: {keyIdentifier: "Up"}})
      expect(testEditor.lineTextForBufferRow(4)).toBe '3 + 4'
      pkg.mainModule.navigateHistory({originalEvent: {keyIdentifier: "Up"}})
      expect(testEditor.lineTextForBufferRow(4)).toBe '1 + 2'


My suggestion is to only test your code … conversely, don’t test code that isn’t yours. What this means is that if you depend on a piece of infrastructure (whether that is keybindings in Atom, memory allocation in the OS, or storing and retrieving data from a database), you don’t need to write tests that verify it works in a known way. Your tests should just assume that it does.

What this suggestion means for your code and your tests is that you should probably just have two separate commands. Then you can test that the commands do what you want them to and assume that if you map keys to those commands the right thing will happen.
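A minimal sketch of what that split could look like (the command and method names below are invented for illustration, not taken from your package):

activate: (state) ->
  atom.commands.add 'atom-text-editor',
    'my-package:history-previous': => @historyPrevious()
    'my-package:history-next': => @historyNext()

historyPrevious: ->
  # move back through the history, no key event inspection needed

historyNext: ->
  # move forward through the history

with the keymap file mapping each key to its own command:

  'ctrl-up': 'my-package:history-previous'
  'ctrl-down': 'my-package:history-next'

Each method then does exactly one thing and can be tested with a plain direct call, no mocked event required.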

The other benefit to this is that if the key event scheme changes in some future version of Atom, your package has a greater chance of continuing to work because your package isn’t tightly coupled to the specific structure of the key event.


That’s a valid point. I don’t really want to test what Atom ships with, but it would probably also be good to cover the keybinding declarations in the tests. So, in an end-to-end fashion, I wanted to simulate some keystrokes and check the outcome on the buffer.

You’re right about declaring two separate commands, and I will probably do that. Still, it would be nice to have a neat way to trigger keyboard actions from the specs.
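For what it’s worth, I can get partway there at the command level with `atom.commands.dispatch`, which fires a command on a DOM element through the same path a keybinding would (sketched here with invented command names, assuming the two-command split):

it 'navigates history when the command fires', ->
  editorElement = atom.views.getView(testEditor)
  atom.commands.dispatch editorElement, 'my-package:history-previous'
  expect(testEditor.lineTextForBufferRow(2)).toBe '1 + 2'

This still doesn’t press real keys, so the keystroke-to-command mapping stays untested, but everything from command dispatch onward is exercised.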


It could, but it also means that the tests are more brittle. Some examples:

  1. If you change the key bindings, you have to update the tests
  2. If you have different key bindings for different platforms, you now have to detect what platform the tests are run on and validate the proper keys for that platform
    1. If any of the keys change for any of the platforms, you have to update the tests
    2. You could have a bug in your platform detection code that causes your tests to have a false negative
    3. You have to figure out how to run the tests on all platforms every time you change the code
  3. If you come up with some scheme that parses the key binding definitions so the tests don’t have to be updated when key bindings change, you could have a bug in your parsing code that causes your tests to have a false negative

Determining where to draw the line in what to test is often an important cost/benefit balancing act in testing.