
Why multicore needs virtualization

2009/08/31

System virtualization is increasingly being used in embedded systems for a variety of reasons, most of them anticipated in a paper I wrote last year. However, the most visible use case is probably still processor consolidation, as exemplified by our Motorola Evoke deployment. The incremental cost of a processor core is shrinking, and likely to go to zero, which makes some people think that the use of hypervisors in embedded systems is a temporary phenomenon that will become obsolete once multicore technology becomes the standard. These people are quite wrong: in embedded systems, multicore chips will depend on efficient hypervisors for effective resource management.

In order to explain this prediction, let’s look at a few trends:

  1. Embedded systems, particularly but not only in the mobile wireless space, tend to run multiple operating systems to support the requirements of different subsystems. Typically these are a low-level real-time environment supported by an RTOS, and a high-level application environment supported by a “rich OS” such as Linux, Symbian, or Windows. This OS diversity will not go away; it will become universal.
  2. Energy is a valuable resource on mobile devices and must be managed effectively. The key to energy management is providing the right amount of hardware resources: not more, not less. The most effective way of reducing energy consumption on a multicore is to shut down idle cores; the gain far exceeds what is possible by other means such as dynamic voltage and frequency scaling (DVFS). This gap will become more pronounced in the future: on the one hand, shrinking core voltages squeeze the energy-savings potential of DVFS; on the other hand, an increasing number of cores means that the energy-saving potential of shutting down cores grows, while at the same time becoming a finer-grained mechanism. (A back-of-the-envelope comparison is sketched just after this list.)
  3. Increasing numbers of cores on the SoC will encourage designs where particular subsystems or functionalities are given their own core (or cores). Some of these functions (e.g. media processors) will use a core in an essentially binary fashion: full throttle or not at all. These are easy to manage. However, other functions impose a varying load, ranging from a share of a single core to saturating multiple cores. Managing energy for such functions is much harder.
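
To make the DVFS-versus-shutdown comparison in point (2) concrete, here is a back-of-the-envelope sketch in C. The power model and all constants are assumptions chosen purely for illustration (dynamic power roughly proportional to V²f, plus a fixed leakage term); they are not measurements of any real SoC.

```c
/* Back-of-the-envelope power model for one core; numbers are illustrative
 * assumptions, not measurements of any particular SoC.
 * Dynamic power scales roughly with V^2 * f; static (leakage) power is paid
 * as long as the core is powered, no matter how little work it does. */
#include <stdio.h>

#define P_DYN_MAX 1.0  /* dynamic power at nominal voltage and frequency (normalised) */
#define P_STATIC  0.3  /* static power of a powered-on core (assumed) */

/* DVFS: run the core at 'scale' of nominal frequency, with voltage scaled
 * roughly in proportion; dynamic power drops about cubically, but the
 * static component remains. */
static double power_dvfs(double scale)
{
    return P_STATIC + P_DYN_MAX * scale * scale * scale;
}

/* Shutdown: a power-gated core consumes (approximately) nothing. */
static double power_off(void)
{
    return 0.0;
}

int main(void)
{
    /* A lightly loaded core throttled to half clock still pays the full
     * static power... */
    printf("DVFS at 50%% clock: %.2f\n", power_dvfs(0.5)); /* prints 0.42 */
    /* ...whereas shutting the core down (after migrating its little
     * remaining work elsewhere) removes that floor entirely. */
    printf("core shut down:    %.2f\n", power_off());      /* prints 0.00 */
    return 0;
}
```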

Because of point (2), (3) is best addressed by allocating shares of cores to functions (where a share can be anything from a small fraction of one core to several whole cores). Sounds like a simple time-sharing issue: you have a bunch of cores and you share them on demand between apps, turning off the ones you don’t need. Classical OS job, right?

Yes, but there’s a catch. Multiple, in fact.

For one, existing OSes aren’t very good at resource management. In fact, they are quite hopeless in many respects. If OSes did a decent job at resource management, virtualization in the server space would be mostly a non-event (in the server space, virtualization is mostly used for resource management). Embedded OSes aren’t better at this than server OSes (if anything they are probably worse).

Now combine this with point (1) above, and you’ll see that the problem goes beyond what the individual OS can do (even if the vendors actually fixed them, which isn’t going to happen in a hurry). In order to manage energy effectively, it must be possible to allocate shares of the same core to functionality supported by different OSes.

Say you have a real-time subsystem (your 5G modem stack) that requires two cores when load is high, but never more than 0.2 cores during periods of low load. And say you have a multimedia stack which requires up to four cores at full load, and zero when no media is playing. And you have a GUI stack that uses between half a core and two cores while user interaction takes place (zero when there is none). Clearly, while the user is just wading through menus, only about 2/3 of the capacity of one core is required, but there are still two OSes involved. Without virtualization, you’ll need to run two cores, each at half power or less. With virtualization, you can do everything on a single core, and the overall energy use of that single core running at 2/3 of its capacity will be less than the combined energy gobbled up by two cores running at low throttle. (And on top of that you have the usual isolation requirements that make virtualization attractive even on a single core.)
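
The same kind of toy model makes the menu-wading arithmetic explicit: consolidating the two lightly loaded guests onto one core means paying the static power of one core instead of two. The utilisation figures come from the scenario above; the power constants are, again, illustrative assumptions.

```c
/* Toy energy comparison for the "wading through menus" case. The constants
 * are illustrative assumptions; dynamic power is taken to be roughly
 * proportional to utilisation at a fixed frequency. */
#include <stdio.h>

#define P_DYN_MAX 1.0  /* dynamic power of a fully loaded core (normalised) */
#define P_STATIC  0.3  /* static power of a powered-on core (assumed) */

static double core_power(double utilisation)
{
    return P_STATIC + P_DYN_MAX * utilisation;
}

int main(void)
{
    /* Without virtualization: the RTOS core ticks along at ~0.2 load and
     * the GUI core at ~0.5 load, but both pay their static power. */
    double two_cores = core_power(0.2) + core_power(0.5);   /* 1.30 */

    /* With virtualization: both guests share one core at ~0.7 load and
     * the second core is shut down. */
    double one_core = core_power(0.7);                      /* 1.00 */

    printf("two half-idle cores: %.2f\n", two_cores);
    printf("one shared core:     %.2f\n", one_core);
    return 0;
}
```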

In a nutshell, the growing hardware and software complexity, combined with the need to minimise energy consumption, creates a challenge that isn’t going to be resolved inside the OS. It requires an indirection layer, which is provided by a hypervisor. The hypervisor maps physical resources (physical cores) to virtual resources (the logical processors seen by the guest OSes). This not only makes it easy to add physical resources to a particular subsystem, or remove them from it (something OSes are notoriously bad at dealing with), but also allows the complete system to be consolidated onto a single core, shared by multiple OSes, when demand is low.
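
One way to picture that indirection layer is a table mapping virtual CPUs (what each guest OS sees) to physical cores, which the hypervisor is free to rewrite at run time. The sketch below is hypothetical and not the interface of OKL4 or any other particular hypervisor; it only illustrates the consolidation step, where all virtual CPUs are packed onto physical core 0 so the remaining cores can be powered down.

```c
/* Hypothetical sketch of the vCPU-to-core indirection a hypervisor provides;
 * this is not any real hypervisor's interface. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NUM_PCPUS 4

struct vcpu {
    const char *guest;  /* e.g. the RTOS guest or the rich-OS guest */
    int         pcpu;   /* physical core this virtual CPU currently runs on */
};

/* Consolidate at low demand: pack every virtual CPU onto core 0 so that
 * cores 1..NUM_PCPUS-1 become idle and can be power-gated. The guests keep
 * their virtual CPUs and never notice the remapping. */
static void consolidate(struct vcpu *vcpus, size_t n, bool core_on[NUM_PCPUS])
{
    for (size_t i = 0; i < n; i++)
        vcpus[i].pcpu = 0;

    core_on[0] = true;
    for (int c = 1; c < NUM_PCPUS; c++)
        core_on[c] = false;  /* candidates for shutdown */
}

int main(void)
{
    struct vcpu vcpus[] = { { "rtos", 1 }, { "linux", 2 }, { "linux", 3 } };
    bool core_on[NUM_PCPUS] = { true, true, true, true };

    consolidate(vcpus, 3, core_on);
    for (int c = 0; c < NUM_PCPUS; c++)
        printf("core %d: %s\n", c, core_on[c] ? "on" : "off");
    return 0;
}
```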

How about heterogeneous multicores? I’ll leave this as an exercise for the reader 😉
