Mouse commands


#1

It would be nice to have documented ways of integrating the mouse with Atom.

My first suggestion to achieve this is to document the texteditor.view.component method

screenPositionForMouseEvent

and the texteditor.view method

screenPositionForPixelPosition

Specifically, I'm interested in the inputs of these two methods.
Also, it would be really nice to have something that takes global screen coordinates and returns a screen position. As far as I can tell, screenPositionForMouseEvent uses some Atom-specific coordinates that do not match global screen coordinates.
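
To illustrate, this is roughly what I would like to be able to write. It is only a sketch of the missing piece, not working code: convertGlobalToClient is a placeholder, and the component is reached through the undocumented view.component path mentioned above.

  // Hypothetical sketch of the helper I'm after (names are placeholders).
  function screenPositionFromGlobalCoordinates (editor, globalX, globalY) {
    // The undocumented view.component path described above.
    const component = atom.views.getView(editor).component
    // The missing piece: turning global screen coordinates into whatever
    // coordinates screenPositionForMouseEvent actually expects.
    const {clientX, clientY} = convertGlobalToClient(globalX, globalY) // placeholder
    return component.screenPositionForMouseEvent({clientX, clientY})
  }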


#2

That function returns a Point, which is Atom’s coordinate system for file contents. In particular, it returns the screen point (tied to the display layer), which can then be converted to a buffer point (tied to the actual text, including hidden parts).
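
For example, a minimal sketch of that conversion, assuming event is a DOM MouseEvent and the component is reached through the editor’s view as in your post:

  const editor = atom.workspace.getActiveTextEditor()
  const component = atom.views.getView(editor).component
  const screenPosition = component.screenPositionForMouseEvent(event)
  // bufferPositionForScreenPosition is part of the documented TextEditor API.
  const bufferPosition = editor.bufferPositionForScreenPosition(screenPosition)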

Also, you’re likely to get a better result on the Slack channel or by raising an issue on the Atom repo; Atom team members are more active there. (Normally I wouldn’t recommend raising an issue on the repo, but I’ve seen API-related issues get a good reception there before.)


#3

I will edit to clarify. I'm not talking about the output; the buffer/screen position system, including points and ranges, is very well documented. I'm talking about the inputs: how does the screenPositionForMouseEvent method use a MouseEvent, i.e. which of its coordinates does it use, and how are they related to global screen coordinates? Also, how does the pixel coordinate system work?


#4

I get that this stuff isn’t explicitly documented (and I would also appreciate it if it were), but I feel the people working on this are just prioritising other things. The general consensus is that the code is self-documenting, at least enough for someone with some familiarity with Atom to understand it.

For example, the input to screenPositionForMouseEvent is the mouse event itself, which is first passed to pixelPositionForMouseEvent; the resulting pixel position is then passed to screenPositionForPixelPosition. The first function does the following:

pixelPositionForMouseEvent ({clientX, clientY}) {
  const scrollContainerRect = this.refs.scrollContainer.getBoundingClientRect()
  clientX = Math.min(scrollContainerRect.right, Math.max(scrollContainerRect.left, clientX))
  clientY = Math.min(scrollContainerRect.bottom, Math.max(scrollContainerRect.top, clientY))
  const linesRect = this.refs.lineTiles.getBoundingClientRect()
  return {
    top: clientY - linesRect.top,
    left: clientX - linesRect.left
  }
}
  • It takes the clientX and clientY properties of the event

  • It then clamps those coordinates to the scroll container’s bounds and converts them to a pixel position measured from the top left of the editor’s line tiles, i.e. with the content of the TextEditor view as the origin.
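
Putting the two steps together, the chain from mouse event to screen Point looks roughly like this (a sketch, assuming both methods are reachable on the same component object; adjust if screenPositionForPixelPosition only exists on the view, as mentioned earlier in the thread):

  // Sketch of the pipeline described above, not a verbatim quote of Atom's source.
  const pixelPosition = component.pixelPositionForMouseEvent(event)              // {top, left}
  const screenPosition = component.screenPositionForPixelPosition(pixelPosition) // a Point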

And this makes sense; calculating the screen position (the Point) would need to be done relative to the pixel position in the TextEditor view. Global coordinates would be useless (as you have found already) because the TextEditor view could be anywhere within the window.
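
If you do need to start from global screen coordinates, one rough workaround (untested, and it ignores display scaling and multi-monitor quirks) is to measure the offset between screen and client coordinates from a real mouse event and reuse it. This is only a sketch, not anything Atom provides:

  // Rough sketch: derive the offset between global screen coordinates and
  // DOM client coordinates from a real MouseEvent, then build the
  // {clientX, clientY} pair that screenPositionForMouseEvent expects.
  let screenToClientOffset = {x: 0, y: 0}

  window.addEventListener('mousemove', (event) => {
    // event.screenX/screenY are global; event.clientX/clientY are window-relative.
    screenToClientOffset = {
      x: event.clientX - event.screenX,
      y: event.clientY - event.screenY
    }
  }, {capture: true, passive: true})

  function screenPositionForGlobalCoordinates (component, globalX, globalY) {
    return component.screenPositionForMouseEvent({
      clientX: globalX + screenToClientOffset.x,
      clientY: globalY + screenToClientOffset.y
    })
  }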


#5

My understanding from previous threads is that undocumented functions aren’t guaranteed to be supported long-term. They’re works in progress or experiments that the dev team hasn’t committed to.