I’ve got a library that outputs video given an HWND - I understand that HWND is exposable in Electron, but what I’d like to do is contain the video to a region in a window. Is there a way to do this?
Did you have any luck with this? I’m looking to do something similar on Linux with GStreamer given an XID. Directly drawing on the window seems to be ineffective because it is hardware-accelerated (I think).
No luck so far. I have had to resort to wrapping the C++ in a Node.js addon, taking discrete screen captures, and then pasting the images into the window as video. It works pretty well, actually, but it’s not ideal.
Looking here, you can probably do something like this:
Render the data in the buffers as they come in. The library I have has nothing like an appsink, though; it only exposes the HWND or a capture-image call, so I’m out of luck unless I do some hacks to fake a window, hook the video, and then wrap that in my own kind of buffered callback.
Yeah, I have/had the same concerns. One video is fine, but I’ll be doing 2 to 4. I’m working on expanding the 1 video to multiple right now, and I’ll have to benchmark. The images come in via events right now.
Let me know how it goes, though! Very interested!
I think I’m first going to try playing the streams via the HTML5 video element. I don’t have a great idea how much overhead TCP will add to all of those streams, though.
In my application, there’s a shmsink (shared memory) available for each of the videos, so similar to your situation, the most efficient may be to figure out how to parse that and render the videos. Does your system draw the images to an HTML5 canvas or are you just updating an image?
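For the canvas route, the renderer side can be quite small: each incoming base64 JPEG frame becomes a data URL, which an Image decodes and a 2D context paints. A minimal sketch, assuming the frames arrive as base64 strings (the event wiring and payload shape are assumptions, not from either of our setups):

```javascript
// Build a data URL from a base64-encoded JPEG frame.
function frameToDataURL(base64Jpeg) {
  return 'data:image/jpeg;base64,' + base64Jpeg;
}

// Paint one frame onto a canvas, scaling to the canvas size.
// Runs in the renderer process, where Image and canvas exist.
function drawFrame(canvas, base64Jpeg) {
  const img = new Image();
  img.onload = () => {
    canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
  };
  img.src = frameToDataURL(base64Jpeg);
}
```

Drawing to a canvas avoids the layout churn you can get from repeatedly swapping an `<img>` element’s `src`, and it lets you overlay decorations in the same element.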
Draw Offscreen Rendering Buffer to Another Window / Canvas
I managed to spin off a couple of threads in the C++ world that pull images at the desired frame rate, convert them to JPG, base64-encode them, and then send them up to Electron via eventing. Two videos seem to work just fine so far. It’s a little heavy on the CPU load, but it’s not a great deal more than just running the OCR engine on two feeds on its own; it’s mostly that I have a low-power Ultrabook CPU in this laptop. Still, I wish there were another, easier solution.
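The frame-rate limiting those threads do can be sketched in JS terms as a small closure that decides whether enough time has elapsed to emit the next frame (the function name and fps value here are illustrative, not from my addon):

```javascript
// Returns a predicate: given the current time in ms, should we emit
// this frame, or drop it to stay at (or under) the target fps?
function makeRateLimiter(fps) {
  const intervalMs = 1000 / fps;
  let lastEmit = -Infinity;
  return function shouldEmit(nowMs) {
    if (nowMs - lastEmit >= intervalMs) {
      lastEmit = nowMs;
      return true;
    }
    return false; // drop this frame
  };
}
```

Dropping frames at the source like this keeps the eventing channel and the renderer from backing up when the CPU falls behind.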
I tried RTMP streaming, but there was too much latency, up to a few seconds. There may be some streaming configuration options I was missing, but I couldn’t get anything responsive enough.
I wasn’t able to figure out how to pull from a shared memory space, since I didn’t have a good way for Electron to access it. Most of what I’ve found comes back to this concern in the WebChimera project: because Chromium’s GPU process is separate from everything else, almost all implementations pay the penalty of an extra memcpy to get the frame data into Chromium’s render process. Since I’m using GStreamer, I thought the GStreamer WebChimera variant might be a good place to start, but I had a hard time trying to get the example to run, and the repository does not seem active enough to count on for future development.
I’m actually being forced towards QML (if I could get PyQt to run QML without segfaulting), which is a shame.
Did you try using win.getNativeWindowHandle()? I realize you wanted to contain it to a region but I’m curious to know if you tried passing the HWND returned and if there were any complications (other than not being able to contain it to a region).
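For reference, `win.getNativeWindowHandle()` returns a Node `Buffer` wrapping the raw platform handle (an HWND on Windows), so you have to decode it before handing it to native code. A sketch of that decoding, assuming a little-endian host (true for x86/x64); `myAddon.attachVideo` is a hypothetical name for whatever your native addon exposes:

```javascript
// Decode the Buffer from win.getNativeWindowHandle() into a BigInt.
// 8 bytes on 64-bit builds, 4 bytes on 32-bit; little-endian assumed.
function handleFromBuffer(buf) {
  return buf.length === 8
    ? buf.readBigUInt64LE(0)
    : BigInt(buf.readUInt32LE(0));
}

// In Electron's main process (not run here):
// const hwnd = handleFromBuffer(win.getNativeWindowHandle());
// myAddon.attachVideo(hwnd); // hypothetical native addon call
```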
I’m working on a project where I need to do something similar. Our current thinking is to create a window to draw the video in and then overlay decorations in another transparent child window.
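The overlay idea above can be expressed with standard `BrowserWindow` options: a frameless, transparent child window parented to the video window. A minimal sketch (the function and variable names are illustrative):

```javascript
// Options for a transparent decoration window that sits over the
// window the native library draws video into.
function overlayOptions(parentWin) {
  return {
    parent: parentWin,  // child follows the parent's position/z-order
    transparent: true,  // see-through background
    frame: false,       // no OS window chrome
    hasShadow: false,
    resizable: false,
  };
}

// In Electron's main process (not run here):
// const { BrowserWindow } = require('electron');
// const overlay = new BrowserWindow(overlayOptions(videoWin));
// overlay.setIgnoreMouseEvents(true); // let clicks fall through to the video
```

One caveat: transparent windows have platform-specific quirks in Electron (e.g. they generally need `frame: false`), so it’s worth testing on each target OS early.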
Yes, I did. I was able to retrieve the window handle, but I couldn’t get anything to draw to the window. If you get it to work on your end, please let me know how you did it! My brute-force method was a real pain - I had to develop a custom Node.js addon that interfaced with the C++ and basically just did frame grabs, translated those frames into displayable images, and pasted the images in.