I get that this stuff isn’t explicitly documented (and I too would appreciate it if it were), but I think the people working on this are simply prioritising other things. The general consensus is that the code is self-documenting, at least enough for someone with some familiarity with Atom to understand it.
For example, the input to `screenPositionForMouseEvent` is the mouse event itself, which is passed first to `pixelPositionForMouseEvent` and then to `screenPositionForPixelPosition`. The first function does the following:
```javascript
pixelPositionForMouseEvent ({clientX, clientY}) {
  // Clamp the mouse coordinates to the bounds of the scroll container
  const scrollContainerRect = this.refs.scrollContainer.getBoundingClientRect()
  clientX = Math.min(scrollContainerRect.right, Math.max(scrollContainerRect.left, clientX))
  clientY = Math.min(scrollContainerRect.bottom, Math.max(scrollContainerRect.top, clientY))
  // Re-express them relative to the top left of the line tiles
  const linesRect = this.refs.lineTiles.getBoundingClientRect()
  return {
    top: clientY - linesRect.top,
    left: clientX - linesRect.left
  }
}
```
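To see the clamping-and-offset logic in isolation, here is a self-contained sketch of the same arithmetic using plain rectangle objects in place of live DOM nodes. The function name and all of the rect values here are made up for illustration; only the math mirrors the Atom code above:

```javascript
// Simplified stand-in for pixelPositionForMouseEvent: clamp the mouse
// coordinates to the scroll container, then express them relative to
// the top-left corner of the line tiles.
function pixelPositionFor({clientX, clientY}, scrollContainerRect, linesRect) {
  clientX = Math.min(scrollContainerRect.right, Math.max(scrollContainerRect.left, clientX))
  clientY = Math.min(scrollContainerRect.bottom, Math.max(scrollContainerRect.top, clientY))
  return {
    top: clientY - linesRect.top,
    left: clientX - linesRect.left
  }
}

// Hypothetical geometry: the editor occupies (100, 50)–(600, 450) in
// window coordinates, and the line tiles share the same origin.
const scrollRect = {left: 100, top: 50, right: 600, bottom: 450}
const linesRect = {left: 100, top: 50}

// A click inside the editor is simply offset…
console.log(pixelPositionFor({clientX: 250, clientY: 130}, scrollRect, linesRect))
// → { top: 80, left: 150 }

// …while a click outside it is clamped to the nearest edge first.
console.log(pixelPositionFor({clientX: 9999, clientY: 10}, scrollRect, linesRect))
// → { top: 0, left: 500 }
```

The clamping means a drag that leaves the editor still produces a sensible position on its boundary rather than a negative or out-of-range one.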

It takes the `clientX` and `clientY` properties of the event, clamps them so they lie within the scroll container, and then uses the bounding rectangle of the line tiles to convert them into a pixel position whose origin is the top left of the TextEditor view.
And this makes sense: calculating the screen position (the `Point`) has to be done relative to the TextEditor view itself. Global window coordinates would be useless (as you have found already) because the TextEditor view could be anywhere within the window.
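For the second step, the editor-relative pixel position can then be turned into a row/column `Point`. As a very rough sketch only: with a fixed line height and character width you could just divide, although the real `screenPositionForPixelPosition` is more involved because it has to handle variable-width characters, soft wraps, and folds. Everything in this snippet (the standalone function, the metric values) is hypothetical:

```javascript
// Hypothetical simplified conversion: editor-relative pixels → screen Point.
// Assumes every line is lineHeight pixels tall and every character is
// charWidth pixels wide — an assumption Atom's real code does not make.
function toScreenPosition({top, left}, lineHeight, charWidth) {
  return {
    row: Math.floor(top / lineHeight),          // which line the pixel falls on
    column: Math.round(left / charWidth)        // nearest character boundary
  }
}

// With 20px lines and 10px characters, the pixel position from the
// earlier example lands on row 4, column 15.
console.log(toScreenPosition({top: 80, left: 150}, 20, 10))
// → { row: 4, column: 15 }
```

This also shows concretely why the origin must be the editor's top left: dividing a window-global coordinate by the line height would give you a row offset from the top of the window, not from the first line of the buffer.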