
GL acceleration doesn't work in multiprocess #24211

Open
asajeffrey opened this issue Sep 13, 2019 · 9 comments

Comments

@asajeffrey
Member

Viewing video with --pref media.glvideo.enabled works, yay! With --multiprocess --pref media.glvideo.enabled it produces a white screen.

Probably what's going on here is that the script thread is creating the GstGLContext for the player, which is then used in the GStreamer render thread. That's fine when they're in the same process, but not in multiprocess mode.

@asajeffrey
Member Author

cc @ceyusa

@asajeffrey
Copy link
Member Author

So script gets its media player

```rust
let player = ServoMedia::get().unwrap().create_player(
    &client_context_id,
    stream_type,
    action_sender,
    renderer,
    Box::new(window.get_player_context()),
);
```

by calling ServoMedia::get()
https://github.com/servo/media/blob/a70f02482d29472c5566e16ffa934fda909443bb/servo-media/lib.rs#L83-L89

which returns a per-process media back end. The script thread's media back end is not the same as the compositor's back end, so unsurprisingly content in one doesn't show up in the other.
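To illustrate why a per-process back end behaves this way, here is a minimal sketch (names hypothetical, not the actual servo-media code) of a lazily-initialized static: each process gets its own copy of the static in its own address space, so two processes calling the same `get()` see two independent instances.

```rust
// Hypothetical sketch of a per-process singleton, in the style of
// ServoMedia::get(). The static lives in each process's address
// space, so a script process and the compositor process would each
// initialize their own, unrelated instance.
use std::sync::{Mutex, OnceLock};

struct ServoMediaSketch {
    // In the real back end, process-local state such as the wrapped
    // GstGLContext would live here; a raw pointer like that is only
    // meaningful inside the process that created it.
    instance_id: u32,
}

static BACKEND: OnceLock<Mutex<ServoMediaSketch>> = OnceLock::new();

fn get_backend() -> &'static Mutex<ServoMediaSketch> {
    BACKEND.get_or_init(|| {
        // Tag the instance with the current process id to make the
        // per-process nature visible.
        Mutex::new(ServoMediaSketch { instance_id: std::process::id() })
    })
}
```

Within one process, every call returns the same instance; across processes, the tag (and everything else) differs.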

@asajeffrey
Member Author

This is pretty serious, as we can't ship a browser that's hardened against Spectre without multiprocess. cc @avadacatavra

@gterzian
Member

gterzian commented Sep 14, 2019

A fix for this is proposed as part of #23807 (comment)

The architectural sketch is that while the "audio rendering thread" should run inside a script process, the actual media backend should run in its own process, or in the "main process" alongside the constellation, the embedder, and the compositor.

In such a setup, "starting a rendering thread" in script will be a different operation from "starting a media backend". A media backend should probably be started only once and kept as a reference by the constellation; then, each time a script creates an audio rendering thread, it should be hooked up with the backend via an initial workflow going through the constellation, resulting in a direct IPC link between the rendering thread and the backend.
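A single-process sketch of that hookup workflow (all names hypothetical; real Servo code would use the ipc-channel crate across process boundaries, not `std::sync::mpsc`): the constellation holds the one sender to the media backend and hands out clones, so each rendering thread ends up with a direct channel to the backend.

```rust
// Sketch: constellation-mediated hookup between a script-side
// rendering thread and a single media backend.
use std::sync::mpsc::{channel, Sender};
use std::thread;

enum BackendMsg {
    // Stand-in for real backend work, e.g. a rendered audio block.
    RenderAudioBlock(Vec<f32>),
    Shutdown,
}

struct Constellation {
    // The one reference to the media backend.
    backend: Sender<BackendMsg>,
}

impl Constellation {
    // Initial workflow: script asks the constellation for a direct
    // link to the backend; after this, messages bypass the
    // constellation entirely.
    fn connect_rendering_thread(&self) -> Sender<BackendMsg> {
        self.backend.clone()
    }
}

// Start the backend once; it counts the blocks it receives so the
// sketch has something observable.
fn spawn_backend() -> (Sender<BackendMsg>, thread::JoinHandle<usize>) {
    let (tx, rx) = channel();
    let handle = thread::spawn(move || {
        let mut blocks = 0;
        while let Ok(msg) = rx.recv() {
            match msg {
                BackendMsg::RenderAudioBlock(_samples) => blocks += 1,
                BackendMsg::Shutdown => break,
            }
        }
        blocks
    });
    (tx, handle)
}
```

The design point is that the constellation only brokers the introduction; steady-state traffic flows directly between the rendering thread and the backend.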

@gterzian
Member

> per your GL context question, we smuggle GL context pointers as usize values

I guess this is slightly different from audio rendering in the light of how GL contexts are shared with script. Could we not proxy the GL calls to the backend over IPC, versus sharing the context directly in script?

In any case, I think the overall idea would still be that the "backend" runs in a different process (probably the "main process") from the "rendering thread", which runs in script. I guess that implies all sorts of changes to the interfaces between the backend and the rendering thread, and I have only looked at the audio part so far.
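The proxying idea could look something like the following sketch (entirely hypothetical, not servo-media API): instead of smuggling a raw context pointer across processes, script serializes GL work as commands, and only the backend process, which owns the real context, executes them.

```rust
// Sketch of command-based GL proxying. In a real system these
// variants would be serializable (e.g. via serde for ipc-channel)
// and execution would call into GStreamer/GL.
#[derive(Debug)]
enum GlCommand {
    // RGBA frame data; real code would likely use shared memory
    // rather than copying pixels through the channel.
    UploadFrame { width: u32, height: u32, data: Vec<u8> },
    Present,
}

// Runs in the backend process, next to the GL context it owns.
// Returns how many frames were presented, as a stand-in for real
// side effects.
fn execute(commands: &[GlCommand]) -> usize {
    let mut presented = 0;
    for cmd in commands {
        match cmd {
            GlCommand::UploadFrame { width, height, data } => {
                // Stand-in for a texture upload: just validate that
                // the payload matches the stated dimensions (RGBA).
                assert_eq!((width * height * 4) as usize, data.len());
            }
            GlCommand::Present => presented += 1,
        }
    }
    presented
}
```

The trade-off versus sharing the context directly is the serialization and round-trip cost per command, which is why batching commands (as sketched by the slice argument) matters.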

@asajeffrey
Member Author

Yeah, I was expecting the GL context for media to be treated like WebGL, where there is a media thread that owns the GL context, and script communicates with it via IPC.

@ferjm ferjm added this to To do in Media playback Sep 16, 2019
@ceyusa
Contributor

ceyusa commented Sep 16, 2019

While developing the GL rendering I considered, for a second iteration, a design similar to #23807 (comment):

  1. In the embedder process, get the ServoMedia instance, which has a new trait method to set the GL context and the native display; this will create, if possible, the wrapped GstGLContext and keep it.
  2. The embedder process will launch a thread where all the players will be created/used/destroyed. The idea of a hash associating each player with its origin would be interesting.
  3. The IPC sender will be shared with the constellation.
  4. A proxy player API will be offered in the script thread to create/use/destroy players, concealing the IPC sender.
  5. When a player is instantiated in the content process, it will check if ServoMedia has a GstGLContext; if so, it will clone it and pass it to the elements that require it through the GStreamer sync bus.

That's quite similar, AFAIU, to the current WebGL setup. What I don't like is the replication of the proxy player API.
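Step 4 above, the proxy player API, could be sketched roughly like this (all names hypothetical; the real code would use ipc-channel and the servo-media Player trait, with the player thread living in the embedder process):

```rust
// Sketch of a script-side proxy player that conceals the channel to
// the player thread. Script code calls ordinary methods; each method
// is just a message send.
use std::sync::mpsc::{Receiver, Sender};

enum PlayerMsg {
    Play,
    Pause,
    Shutdown,
}

struct PlayerProxy {
    // Hidden from callers: the only state the proxy holds.
    sender: Sender<PlayerMsg>,
}

impl PlayerProxy {
    fn play(&self) {
        let _ = self.sender.send(PlayerMsg::Play);
    }
    fn pause(&self) {
        let _ = self.sender.send(PlayerMsg::Pause);
    }
}

// Embedder-side player thread stub: drains messages and counts state
// changes so the sketch has an observable result.
fn run_player(rx: Receiver<PlayerMsg>) -> (u32, u32) {
    let (mut plays, mut pauses) = (0, 0);
    while let Ok(msg) = rx.recv() {
        match msg {
            PlayerMsg::Play => plays += 1,
            PlayerMsg::Pause => pauses += 1,
            PlayerMsg::Shutdown => break,
        }
    }
    (plays, pauses)
}
```

The "replication" concern is visible here: every method on the real player needs a mirrored proxy method and message variant, which is the same duplication the WebGL command interface carries.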

@gterzian
Member

gterzian commented Sep 26, 2019

@ceyusa Do you think such a second iteration of the GL rendering would also have to include a general restructuring of media, including audio, or could those be separated? I guess some parts, like the equivalent of ServoMedia::get(), will require work across the board.

I haven't looked into the GL rendering at all, so I have no idea. I do have a general idea of how to split the audio backend from the rendering thread, as described at #23807 (comment), and I see that as a prerequisite to implementing AudioWorklet.

So, since restructuring the GL part and the audio part will probably influence each other, I'm wondering how to organize the work around restructuring media into a backend running somewhere alongside the constellation, and a part (for audio, the "rendering thread") that would run inside script.
