A Peek at SurfaceControlViewHost in Android R
One of the items that I found interesting in the second half of my R DP2 random musings was `SurfaceControlViewHost`. I experimented with it this week, and it at least partially works. In a nutshell: one app can embed and display a live UI from another app.
Wait, Wut?
For some developers, this sort of cross-app UI embedding has been "the Holy Grail" for years. You can do a limited version of this with `RemoteViews`, but the widget set is minimal by modern standards. You could create your own `RemoteViews`-like structure, but keeping all of the participating apps in sync can get troublesome. Android 9's slices… well, OK, those never really caught on.
But, with Android R and `SurfaceControlViewHost`, it is not that hard to set up cross-process UI delivery. There are no obvious limits as to what that UI can look like, because the UI itself is not really shared. Instead, the two processes seem to be sharing a `Surface`, with the UI-supplying process rendering a view hierarchy to that `Surface` and the UI-hosting process displaying that `Surface` as part of a `SurfaceView`.
How Do You Make It Work?
I'll have code available on Monday, as part of the Elements of Android R release. But, here are the basic mechanics:
- Have two apps, with some sort of IPC channel between them. I elected to use a bound service, playing with Google's `Messenger` pattern for getting data between the apps. In the source code, you will see an `EmbedServer` and an `EmbedClient` module that represent these two apps.
- Have the UI client (`EmbedClient`) set up a `SurfaceView` and identify the `Display` on which that `SurfaceView` will appear. Then, it needs to send the other app the dimensions of the `SurfaceView`, the ID of the `Display` to use, and a "host token" obtained from the `SurfaceView` via `getHostToken()`. All of those can be stuffed into a `Bundle` for easy delivery via common IPC patterns (e.g., as part of a `Message`).
- Have the UI provider (`EmbedServer`) set up that UI, such as via view binding. When it receives the details from the client, it can set up a `SurfaceControlViewHost` tied to the `Display` and "host token". It can then attach the root view of the view hierarchy to the `SurfaceControlViewHost` via `addView()`. Then, it needs to obtain a `SurfacePackage` from that `SurfaceControlViewHost` (via `getSurfacePackage()`) and send that back to the client. `SurfacePackage` is `Parcelable`, so you can send it via any common IPC mechanism (e.g., as part of a return `Message`).
- Once the client receives the `SurfacePackage`, attach it to the `SurfaceView` via `setChildSurfacePackage()`.
And that's it. At this point, the client should be showing the provided UI in the `SurfaceView`. If the provider updates that UI, the client should show the updates in real time.
What About Input?
The docs indicate that touch events on the `SurfaceView` should get sent from the client process to the provider process, with the implication that this will trigger events on the widgets in the provider's view hierarchy.
Unfortunately, I could not get that part to work.
That's quite possibly a bug in my experimental code. There is very little documentation on this, and I may have missed a step somewhere. Otherwise, it's possible that there is a bug in DP2.
What's Google Going to Do With This?
I have no idea.
Seriously, they could use this for:
- A richer replacement for app widgets and slices
- A richer option for custom views in notifications
- Embedding any of their apps in any other one of their apps (e.g., more powerful options for launching a Hangout from Calendar)
But, my guess is that whatever they have in mind will be something I won't expect.
What Can We Do With This?
Well, not much, insofar as this is only available on Android R. Since this requires new methods on `SurfaceView`, my guess is that this cannot be backported via a Jetpack library. For the time being, approximately 0.0% of your user base is running Android R.
However, longer-term, this opens up some interesting possibilities.
From a security standpoint, this technique should allow us to better sandbox untrusted content. We have had options for doing that, with dedicated low-permission processes, but those processes had only classic IPC options for getting information out of the sandbox. Now, they can present a full UI, yet still not have any means of attacking the client displaying that UI.
Apps with a rich third-party ecosystem of plugins could adopt this for incrementally tighter integration with those plugins. Right now, the only easy thing is for the app to start an activity in the plugin, if the plugin needs to supply UI. Otherwise, you are stuck with the UI integration options I mentioned earlier, like `RemoteViews`. Now, though, a plugin can provide finer-grained UI elements that could be embedded in the core app's UI, to offer a more seamless experience to the user.
Assuming that there is no significant performance overhead for delivering a UI this way, this opens the doors for popular content publishers to get their content embedded in other apps, yet still maintain complete control over that content.
But, once again, my guess is that the best use of this tech is something that I am not currently thinking of.
Ordinarily, I would have expected presentations on this at Google I/O. Now, in our I/O-free world, I do not know when or how Google might provide more information on this API and how they (and we) might use it. But, it's something that I will be keeping an eye on, as it's one of the more intriguing new additions in Android R.