Hi, and thank you for your prompt response. Replies inline.
1. CPU utilization isn't something which can be easily computed or
reasoned on asymmetric multi-core CPUs, not to mention the dynamic
adjustment of CPU frequency further complicates the matter.
Are you alluding to CPU architectures like big.LITTLE, and to dynamic clock frequencies?
The proposal takes both into account; here's how:
The utilization number returned aggregates how the active cores spent their time, across various states, since the last sample was taken.
That is, for each active core, the time spent in idle and non-idle states is compared, and the results are aggregated into a single value representative of the overall system state.
This shows how the system's state has changed over time, and it accounts for asymmetric multi-core CPU architectures.
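To make that concrete, here is a minimal sketch of such an aggregation in TypeScript; the `CoreSample` shape and its field names are assumptions made for illustration, not part of the proposal:

```ts
// Illustrative only: the proposal does not define these names.
interface CoreSample {
  idleTime: number;  // cumulative time the core spent idle, in ms
  totalTime: number; // cumulative wall-clock time observed, in ms
}

// Aggregate utilization across all active cores between two samples:
// the fraction of elapsed core time that was spent in non-idle states.
function aggregateUtilization(prev: CoreSample[], curr: CoreSample[]): number {
  let busy = 0;
  let total = 0;
  for (let i = 0; i < curr.length; i++) {
    const dTotal = curr[i].totalTime - prev[i].totalTime;
    const dIdle = curr[i].idleTime - prev[i].idleTime;
    busy += dTotal - dIdle;
    total += dTotal;
  }
  // Each core contributes in proportion to the time it was actually
  // active, which is what makes this meaningful on asymmetric CPUs.
  return total > 0 ? busy / total : 0;
}
```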
As for dynamic clock frequencies, the CPU speed value factors those in as well, again with data aggregated over all active cores.
As with utilization, each active core's clock frequency range and state (i.e. minimum, base, maximum, and current clock speed) are noted and aggregated.
This gives an overall idea of the system's clock speed, even when individual cores run at different speeds.
In particular, the `cpu_speed_limit` concept returns a number representing where the current clock speed lies within its available range.
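As a rough illustration, here is one plausible way such a number could be derived; the exact formula is up to the specification, and the `CoreClock` shape is an assumption made for this sketch:

```ts
// Illustrative only: these field names are not from the proposal.
interface CoreClock {
  min: number;     // minimum clock speed, in MHz
  max: number;     // maximum clock speed, in MHz
  current: number; // current clock speed, in MHz
}

// Normalize each active core's current clock to its own [min, max] range
// (0 = at minimum, 1 = at maximum), then average across cores, so cores
// with different absolute frequencies contribute comparably.
function speedWithinRange(cores: CoreClock[]): number {
  if (cores.length === 0) return 0;
  const positions = cores.map((c) => {
    const span = c.max - c.min;
    // A fixed-clock core has no range; treat it as running at its maximum.
    return span > 0 ? (c.current - c.min) / span : 1;
  });
  return positions.reduce((sum, p) => sum + p, 0) / positions.length;
}
```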
The goal of CPU Speed is to abstract away those details and provide clock frequency data that is actionable and yet preserves user privacy.
2. Whether the system itself is under a heavy CPU load or not should not
have any bearing on how much CPU time a website is entitled to use because
the background CPU utilization may spontaneously change, and the reason of
a high or a low CPU utilization may depend on what the website is doing;
e.g. a daemon which wakes up in a response to a network request or some
file access.
On the web, people write one-size-fits-all applications that need to run across devices with widely varying compute capabilities.
For compute-intensive applications, the absence of hints from the system poses problems and leads to bad UX.
The proposal does not entitle a website to any additional resources; it allows apps to make informed decisions about how they use the resources they already have.
It is common for compute-intensive applications to self-throttle to provide a good UX.
One example is gaming: reducing draw distance, effects, texture sizes, or geometry level of detail when frame rate suffers.
On the Nintendo Switch, for instance, game engines can reduce the rendering resolution when frame drops occur or are anticipated.
There, the available compute power depends on whether the device is docked or in portable mode, and thermal factors may also come into play.
Video conferencing has similar needs: reducing the number of simultaneous video feeds, or scaling back image-processing effects.
For the Compute Pressure API, we've examined a few such use cases; they are detailed in the explainer.
Indeed, the reason for the load might be intrinsic to the application, or extrinsic, depending on what else is going on in the system.
Either way, this API, as proposed, provides system-level hints that help applications make informed decisions and maintain a good experience for users.
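To illustrate the consumption side, here is a hedged sketch of such a self-throttling loop; `onPressureSample` is a hypothetical callback fed by whatever observer the API ends up exposing, and the thresholds are arbitrary values, not ones from the proposal:

```ts
// Hypothetical quality controller driven by an aggregate utilization hint.
const qualityLevels = ["high", "medium", "low"] as const;
type Quality = (typeof qualityLevels)[number];

let level = 0;

// Called whenever a new utilization sample (0..1) arrives.
// The 0.9 and 0.5 thresholds are arbitrary, for illustration only.
function onPressureSample(utilization: number): void {
  if (utilization > 0.9 && level < qualityLevels.length - 1) {
    level++; // shed load: smaller textures, shorter draw distance, fewer feeds
  } else if (utilization < 0.5 && level > 0) {
    level--; // headroom is back: restore quality
  }
  applyQuality(qualityLevels[level]);
}

function applyQuality(q: Quality): void {
  // Placeholder: a game engine would adjust resolution scale and LOD here;
  // a conferencing app would add or drop video feeds.
  console.log(`quality -> ${q}`);
}
```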
3. The proposal as it currently stands seems to allow a side channel
communication between different top-level origins (i.e. bypasses storage
partitioning). A possible attack may involve busy looping or doing some
heavy computation in one origin and then observing that CPU utilization
goes up in another. We've reported an attack of a similar nature in
https://bugs.chromium.org/p/chromium/issues/detail?id=1126324.
Thanks for that insight. I'll look into it.