Hardware-based virtualization eases design with multicore processors

Today’s SOC (system-on-chip) processors integrate a diversity of cores, accelerators, and other processing elements. These heterogeneous multicore architectures provide increased computational capacity, but the resulting complexity also poses new challenges for embedded-system developers across a variety of applications, including control-plane processors, video servers, wireless base stations, and broadband gateways.
Discrete cores each have full access to, and control of, their resources. Such predictable access allows straightforward management and deterministic performance in applications with real-time constraints. In a multicore architecture, however, cores share access to resources, and the potential for contention complicates many design factors, such as processing latency and deterministic interrupt handling.
To provide deterministic behavior equivalent to that of single-core devices, multicore architectures have begun to implement resource-sharing and management techniques that have been proved in network communications. These architectures use established queue- and traffic-management techniques to efficiently allocate resources among multiple cores, maximize throughput, minimize response latency, and avoid unnecessary congestion.
From an architectural standpoint, SOCs are complex systems with multiple cores that connect across a high-speed fabric to a variety of controllers and resources (Figure 1). In many ways, the myriad interactions within an SOC resemble a communications network with multiple sources, or cores, that interconnect to the same destinations, including memory, peripherals, and buses. Not surprisingly, bandwidth-management techniques, such as virtualization, which designers developed to improve network efficiency, have proved useful in managing traffic among multiple processor cores and shared peripherals.
Virtualization of on-chip resources enables cores to share access, and this shared access is transparent to applications. Each application can treat a resource as if it were the sole owner, while a virtualization manager aggregates shared ownership, measured by the amount of allocated bandwidth. Virtualizing and sharing access to resources require both a queue manager and a traffic manager. Applications use one or more queues to buffer access to a resource; virtualization adds events or transactions to a queue and pulls them off when the resource is available. A queue comprises a list of buffer descriptors that point to data in a buffer, and you can implement queues in many ways, depending on the needs of the applications. The number of queues an SOC supports varies from a few hundred to hundreds of thousands, depending on the applications the device targets.
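As a concrete illustration, the following C sketch shows one way to represent such a descriptor-based queue. All structure and field names here are hypothetical; a real SOC defines its own descriptor formats and typically manages them in hardware.

```c
#include <stdint.h>

/* Hypothetical buffer descriptor: each entry points to payload data
   held in a shared buffer pool. */
typedef struct {
    uint64_t buf_addr;   /* physical address of the data buffer */
    uint32_t buf_len;    /* number of valid bytes in the buffer */
    uint32_t flags;      /* e.g., start- and end-of-packet markers */
} buf_desc_t;

/* A queue is a ring of descriptors plus the state the queue manager
   tracks for it. */
typedef struct {
    buf_desc_t *ring;    /* start address of the descriptor ring */
    uint32_t    size;    /* ring capacity, in descriptors (power of two) */
    uint32_t    head;    /* next descriptor for the consumer */
    uint32_t    tail;    /* next free slot for a producer */
} queue_t;
```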
The queue manager updates the queue state (that is, the queue size, head pointer, tail pointer, and start address) and maintains fill levels and thresholds, including full, almost full, almost empty, and empty. The queue manager also provides full memory management for each queue, including allocation and deallocation of buffers from free pools and checking of access rights when an event is added to a queue (Figure 2). Multiple requesters may simultaneously add descriptors to one or more queues, and the queue manager arbitrates selection from among multiple queues waiting for service.
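Building on the hypothetical queue_t above, a minimal sketch of this fill-level bookkeeping might look like the following. The watermark parameters and threshold names are assumptions; a hardware queue manager maintains this state without software involvement.

```c
/* Fill-level classes mirroring the thresholds described above. */
typedef enum {
    Q_EMPTY, Q_ALMOST_EMPTY, Q_NORMAL, Q_ALMOST_FULL, Q_FULL
} q_level_t;

/* Current occupancy; assumes size is a power of two. */
static uint32_t queue_fill(const queue_t *q)
{
    return (q->tail - q->head) & (q->size - 1);
}

/* Classify occupancy against low and high watermarks. */
static q_level_t queue_level(const queue_t *q, uint32_t lo_wm, uint32_t hi_wm)
{
    uint32_t fill = queue_fill(q);
    if (fill == 0)            return Q_EMPTY;
    if (fill == q->size - 1)  return Q_FULL;  /* one slot kept open */
    if (fill <= lo_wm)        return Q_ALMOST_EMPTY;
    if (fill >= hi_wm)        return Q_ALMOST_FULL;
    return Q_NORMAL;
}
```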
The queue manager serves as the arbiter of available bandwidth among queues assigned to the same resource. It performs this task not only between applications sharing a resource but also among the multiple queues that a single application may maintain to enable QOS (quality of service).
Traffic management employs policing and shaping mechanisms to measure and control the amount of bandwidth assigned to a flow or a group of flows. Policing controls the rate at which the traffic manager adds events to a queue; shaping controls the rate at which it removes events from the queue. For the finest control, including the ability to manage queue priority, implement policing and shaping on a per-queue basis. The traffic manager also maps multiple queues to a single shared resource based on a predefined servicing algorithm.
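A common way to implement per-queue policing or shaping is a token bucket, sketched below in C. The rates, the nanosecond time source, and the byte-based accounting are assumptions for illustration; an SOC traffic manager realizes the equivalent logic in hardware.

```c
#include <stdbool.h>
#include <stdint.h>

/* Token-bucket state for one queue. Applied on the enqueue side it
   polices; applied on the dequeue side it shapes. */
typedef struct {
    uint64_t tokens;   /* current credit, in bytes */
    uint64_t burst;    /* bucket depth: maximum burst, in bytes */
    uint64_t rate;     /* refill rate, in bytes per second */
    uint64_t last_ns;  /* timestamp of the last refill */
} token_bucket_t;

static bool tb_conform(token_bucket_t *tb, uint32_t len, uint64_t now_ns)
{
    /* Refill credit for the elapsed interval, capped at the burst size. */
    uint64_t earned = (now_ns - tb->last_ns) * tb->rate / 1000000000ull;
    tb->tokens = (tb->tokens + earned > tb->burst) ? tb->burst
                                                   : tb->tokens + earned;
    tb->last_ns = now_ns;

    if (tb->tokens < len)
        return false;  /* out of profile: drop, mark, or defer */
    tb->tokens -= len;
    return true;       /* in profile: admit the event */
}
```

Used for policing, such a bucket drops or marks out-of-profile arrivals immediately; used for shaping, the same check would instead delay the dequeue until enough credit accrues.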
By bringing queue and traffic management together, you can provide reliable, end-to-end QOS. This approach allows multiple paths to share a resource without negatively affecting bandwidth subscriptions. Fine-grained QOS supports SLAs (service-level agreements), guaranteeing minimum, average, and maximum bandwidth on a per-flow basis. Developers can implement queue levels for marking and metering traffic to prevent congestion. Early notification of congestion allows the queue manager to take corrective action through feedback to traffic sources to eliminate the unnecessary processing of packets that are likely to be dropped or, ideally, to avoid congestion altogether.
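One simple form of such feedback, sketched below under the same assumptions as the earlier queue_t example, is to invoke a notification callback when a queue crosses its high watermark so the source can throttle before drops occur. The callback interface is hypothetical.

```c
/* Hypothetical congestion-notification hook. */
typedef void (*congestion_cb_t)(uint32_t queue_id, q_level_t level);

/* Check a queue against its watermarks and, if it is congested,
   feed that state back to the traffic source. */
static void check_congestion(const queue_t *q, uint32_t queue_id,
                             uint32_t lo_wm, uint32_t hi_wm,
                             congestion_cb_t notify)
{
    q_level_t level = queue_level(q, lo_wm, hi_wm);
    if (level >= Q_ALMOST_FULL)
        notify(queue_id, level);  /* source backs off or marks traffic */
}
```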
For example, a queue- and traffic-management-based Ethernet driver prevents any one processor from unfairly monopolizing port bandwidth. It also guarantees bandwidth allocations and maximum-latency constraints regardless of other queue states. The driver supports a choice of arbitration schemes—strict priority or weighted round robin, for example—and facilitates reliable real-time services, such as video streaming. In the end, multiple sources can share the Ethernet port without adversely affecting bandwidth subscriptions. Tasks such as IP (Internet Protocol) forwarding become straightforward to implement robustly, and latency-sensitive applications, such as audio or video delivery, benefit from deterministic and reliable port management. In addition, when you implement the queue and traffic management in hardware, the driver can maintain end-to-end QOS with little to no software overhead.
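As a rough illustration of one of those arbitration schemes, the following weighted-round-robin sketch services the queues that share the port in proportion to their weights. The weights and the dequeue and transmit helpers are placeholders, not part of any particular driver API.

```c
#define NUM_QUEUES 4

/* Placeholder port helpers; a real driver would program the MAC. */
extern buf_desc_t dequeue(queue_t *q);
extern void       transmit(buf_desc_t d);

/* Serve up to weight[i] descriptors from queue i per round, so each
   queue's share of port bandwidth tracks its weight. */
static const uint32_t weight[NUM_QUEUES] = { 4, 2, 1, 1 };

void wrr_service(queue_t *queues[NUM_QUEUES])
{
    for (;;) {
        for (int i = 0; i < NUM_QUEUES; i++) {
            for (uint32_t n = 0; n < weight[i]; n++) {
                if (queue_fill(queues[i]) == 0)
                    break;                     /* empty: next queue */
                transmit(dequeue(queues[i]));
            }
        }
    }
}
```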