The convergence of application domains in new systems-on-chip (SoC) results in complex systems with many use cases, comprising concurrently executing applications with real-time requirements. To reduce cost, the applications share resources, such as interconnect and memories, resulting in interference that makes it difficult to satisfy the real-time requirements. Currently, real-time requirements are verified by slow system-level simulation, as most hardware is not designed with formal analysis in mind. This approach, however, suffers from poor coverage, since the number of use cases increases exponentially with the number of applications. This problem contributes to making verification and integration a dominant part of SoC development, both in terms of time and money.
Predictable and composable systems are proposed to manage the increasing verification complexity of SoCs. A system is considered predictable if it is possible to bound the temporal behavior of the applications using it, by analytically accounting for the possible interference. This enables formal verification of real-time requirements, providing full coverage of all possible state transitions and initial states. Composable systems, on the other hand, completely isolate applications from each other in the temporal domain. A composable system hence allows applications to be independently developed and verified, resulting in a linear verification process. A predictable and composable system is built from predictable and composable components. Such components have been proposed in the literature, but no satisfactory solutions have been presented for important resources, such as memories. This work addresses this issue by proposing a predictable and composable memory controller architecture, shown in Fig. 1. The architecture consists of a general resource front-end that can be used with most memory interfaces, such as SRAMs, and an SDRAM-specific back-end. The latter is required, since the latency and bandwidth offered by an SDRAM are highly variable and traffic dependent. Our proposed SDRAM back-end solves this problem by using predictable memory access patterns [1]. These are pre-computed sequences of SDRAM commands that are dynamically scheduled at run-time in a way that allows offered bandwidth and latency to be bounded at design time.
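Since access patterns have fixed lengths in clock cycles, a worst-case bandwidth bound follows from simple arithmetic over the pattern set. The sketch below illustrates this for a hypothetical pattern set; all cycle counts and the refresh interval are invented for illustration and are not taken from any real memory datasheet or from the proposed back-end's actual patterns.

```python
# Sketch: bounding the worst-case bandwidth offered by pre-computed SDRAM
# access patterns. All pattern lengths and timings below are hypothetical
# illustrations, not values from a real memory device.

DATA_PER_PATTERN_BYTES = 64   # bytes transferred by one read or write pattern
CLOCK_MHZ = 200               # memory command clock frequency

# Hypothetical pattern lengths in clock cycles.
patterns = {
    "read": 16,
    "write": 16,
    "read_to_write_switch": 2,
    "write_to_read_switch": 4,
    "refresh": 32,
}

def worst_case_bandwidth_mbps(refresh_interval_cycles=1560):
    # Worst case: accesses alternate between reads and writes, so every
    # pattern pays the longer switching overhead, and refresh patterns
    # recur periodically and steal cycles.
    access = max(patterns["read"] + patterns["write_to_read_switch"],
                 patterns["write"] + patterns["read_to_write_switch"])
    refresh_overhead = patterns["refresh"] / refresh_interval_cycles
    cycles_per_access = access / (1 - refresh_overhead)
    bytes_per_cycle = DATA_PER_PATTERN_BYTES / cycles_per_access
    return bytes_per_cycle * CLOCK_MHZ  # MB/s, since the clock runs at CLOCK_MHZ million cycles/s

print(round(worst_case_bandwidth_mbps(), 1))
```

Because the pattern lengths are fixed at design time, this bound holds regardless of the actual request stream, which is what makes the back-end predictable.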
Figure 1: An instance of a predictable and composable SDRAM controller, supporting two applications.
To simplify verification, we provide an abstraction of temporal behavior that makes the choice of memory and arbiter type transparent. The abstraction is based on the theory of latency-rate (LR) servers [2]. In essence, an LR server guarantees an application a minimum allocated bandwidth, rho, within a maximum latency, Theta. This provides a lower bound on the provided service, as shown in Fig. 2, making it an abstraction of predictable service. The benefit of this abstraction is that it allows a single parameterized performance model to formally verify all combinations of supported memory types and arbiters. As a contribution of this work, we introduce a Credit-Controlled Static-Priority arbiter [3, 4] that belongs to the class of LR servers. This arbiter decouples latency and bandwidth and has a small and fast hardware implementation. It furthermore allocates service with negligible over-allocation, which is essential for scarce SoC resources, such as memories.
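The LR guarantee can be stated concretely: during a busy period, the cumulative service w(t) provided to an application is bounded from below by a line of slope rho starting after the latency Theta. The following sketch checks this bound against a toy service curve; the curve and the parameter values are illustrative, not measurements of the proposed arbiter.

```python
# Sketch of the latency-rate (LR) server guarantee: during a busy period,
# the cumulative service w(t) provided to an application is at least
# rho * (t - Theta), i.e. a line of slope rho (allocated rate) after an
# initial latency Theta. Numbers below are illustrative.

def lr_lower_bound(t, rho, theta):
    """Minimum service guaranteed by an LR server at time t of a busy period."""
    return max(0.0, rho * (t - theta))

def satisfies_lr_guarantee(provided, rho, theta):
    # provided[t] = cumulative service received by the application at cycle t
    return all(w >= lr_lower_bound(t, rho, theta)
               for t, w in enumerate(provided))

# A toy provided-service curve: no service for 3 cycles, then one unit/cycle.
provided = [0, 0, 0, 1, 2, 3, 4, 5, 6, 7]
print(satisfies_lr_guarantee(provided, rho=0.5, theta=4))  # prints True
```

Any arbiter whose provided-service curve passes this check for its allocated (rho, Theta) pair can be plugged into the same parameterized performance model, which is what makes the abstraction useful for verification.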
We make our memory controller composable through a novel approach to composable resource sharing. The idea is to emulate maximum interference from other applications sharing the memory. This is accomplished by a Delay Block in the resource front-end that forces the predictable provided service to be equal to the minimum guaranteed service, as illustrated in Fig. 2. This removes the variation in interference caused by other applications, making them independent of each other's actual behavior [5]. The choice between predictable and composable service is made per application, and can be dynamically reconfigured in the Delay Block through a Configuration Bus.
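A simplified view of the Delay Block is that each response is released at the worst-case time implied by the LR guarantee, even if the memory happened to serve the request earlier. The sketch below shows this for a single request starting a busy period; it ignores busy-period bookkeeping and pipelining, and all parameter values are invented for illustration.

```python
# Sketch of the Delay Block idea: hold responses until their worst-case
# completion time under the LR guarantee, so an application always observes
# maximum interference, independent of what other applications actually do.
# Simplified single-request view; values are illustrative.

def worst_case_finish(arrival, size, rho, theta):
    # Under an LR guarantee (rate rho, latency theta), a request of `size`
    # service units that starts a busy period finishes no later than
    # arrival + theta + size / rho.
    return arrival + theta + size / rho

def composable_release(arrival, actual_finish, size, rho, theta):
    # Delay the response to its worst-case finishing time, even if the
    # memory served it earlier; a late actual finish is never delayed further.
    return max(actual_finish, worst_case_finish(arrival, size, rho, theta))

# A request of 4 units arriving at cycle 10, actually finished at cycle 16,
# is held until its worst-case finish at cycle 24.
print(composable_release(10, 16, size=4, rho=0.5, theta=6))  # prints 24.0
```

Because the release time depends only on the application's own arrivals and its allocated (rho, Theta), the observed timing is unaffected by the other applications, which is precisely the composability property.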
Lastly, we introduce an automated configuration methodology for the memory controller that synthesizes the memory access patterns, arbiter settings, and buffer sizes, given a specification of the target memory device and the application requirements.
1. B. Akesson, K. Goossens, and M. Ringhofer, "Predator: a predictable SDRAM memory controller", in Proc. CODES+ISSS, 2007.
2. D. Stiliadis and A. Varma, "Latency-rate servers: a general model for analysis of traffic scheduling algorithms", IEEE/ACM Trans. Netw., vol. 6, no. 5, 1998.
3. B. Akesson, L. Steffens, E. Strooisma, and K. Goossens, "Real-Time Scheduling Using Credit-Controlled Static-Priority Arbitration", in Proc. RTCSA, Aug. 2008.
4. B. Akesson, L. Steffens, and K. Goossens, "Efficient Service Allocation in Hardware Using Credit-Controlled Static-Priority Arbitration", in Proc. RTCSA, Aug. 2009.
5. B. Akesson, A. Hansson, and K. Goossens, "Composable resource sharing based on latency-rate servers", in Proc. DSD, Aug. 2009.