Editor’s Note: Using their company’s VisualSim Architect modeling and simulation environment, Ranjith K R and Deepak Shankar of Mirabilis Design provide a detailed simulation and analysis of an automotive multi-protocol network of over 50 nodes in order to compute the latency of the messages across the network.
Secure multi-protocol networks are the future of modern vehicle architectures, where intra-vehicle communication tends to create large latencies and securing against external threats increases the processing requirements. These new network topologies provide higher bandwidth, but also incur higher processing cost for protocol conversion and intrusion checks. This means the design must be tested against a larger number of workloads, more user scenarios, and various topologies. Multi-hop throughput is influenced by network load; the characteristics of shared, non-deterministic media; inconsistent communication latency between two or more vehicle networks; and the network architecture.
These new requirements and constraints must be understood prior to development, possibly during research. Worst-case execution times obtained with analytical methods span an extremely large range that can dramatically increase the final bill of materials. Physical prototypes of these large systems take a long time to build, are expensive, and can only test known scenarios. The optimal solution is one that can capture all the nuances of the system, evaluate the system behavior over time, test for failure scenarios, incorporate provisions for growth, and report the variations of latency, throughput, utilization, functional behavior and power consumption. An additional requirement is to evaluate the network behavior for different distributions of software tasks across the Electronic Control Units (ECUs).
In this paper, we present the results of our research and analysis of a multi-protocol network consisting of CAN, CAN FD, FlexRay, Ethernet, and a secure gateway. In this project, we simulated the entire system of over 50 nodes to compute the latency of the messages across the network, from the sensor and software task to the next software task and actuator. The messages were sent within a network segment, between segments, and across the gateway. We incorporated different workloads, a variable number of nodes per network, different network topologies, and different distributions of software tasks.
In our analysis of a secure multi-protocol communication network for automotive applications, we used a commercially available software package, VisualSim Architect from Mirabilis Design, a modeling and simulation environment with a graphical block diagram editor, simulators, and network-based libraries including hardware entities, schedulers, and software processes. When a simulation is completed, users can visualize a variety of plots and results to analyze whether the specification meets the requirements. The reports generated include network bandwidth, end-to-end delay, network node activity, routing, ECU statistics, and other user-generated reports.
We used the following modeling approach to generate a design specification correctly and to make sure that the communication architecture met all our requirements:
- Develop a block diagram of the system and top level parameters
- Run simulations to explore the design space, including different network topologies
- Analyze the simulation results using graphical plots and automated reports
- Alter the original model to improve results further, or accept simulation results
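The first two steps above can be sketched in miniature: build a model of competing traffic sources on a shared medium, run it, and collect per-message latency for analysis. The following is a minimal discrete-event loop, not the VisualSim model itself; the source periods, frame sizes, and bus rate are illustrative assumptions.

```python
import heapq

def simulate_bus(sources, bus_rate_bps, sim_time_s):
    """Serve periodic frames from several sources over one shared bus, FIFO.

    sources: list of (period_s, frame_bits) tuples, one per source.
    Returns the latency (finish time minus release time) of every message.
    """
    releases = []
    for sid, (period, bits) in enumerate(sources):
        for k in range(round(sim_time_s / period)):
            heapq.heappush(releases, (k * period, sid, bits))
    busy_until = 0.0
    latencies = []
    while releases:
        t_rel, sid, bits = heapq.heappop(releases)
        start = max(t_rel, busy_until)          # wait while the bus is busy
        busy_until = start + bits / bus_rate_bps
        latencies.append(busy_until - t_rel)
    return latencies

# Two periodic sources sharing a 500 kbit/s bus for 0.5 s of model time.
lat = simulate_bus([(0.010, 128), (0.010, 512)], 500_000, 0.5)
print(len(lat), round(min(lat) * 1e6), round(max(lat) * 1e6))  # count, min/max in µs
```

Even this toy loop exhibits the behavior the full model quantifies: the larger frame's latency includes queuing behind the smaller one, so latency depends on the workload mix, not just the link speed.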
Constructing the network model
The goal of the modeling effort must be decided before embarking on a simulation project. The decisions required, the level of granularity and the project duration determine the level of detail and the accuracy. In our case, the primary intent, to model a secure multi-protocol vehicle communication network, required that we analyze the network bandwidth performance, identify potential bottlenecks, and evaluate potential security flaw entry points. It is important to model the complete communication network to gain visibility into all aspects of the network. The system model includes multiple network protocols and components: CAN, CAN FD, FlexRay, Ethernet, Ethernet switches, a gateway, TCP and UDP. The top-level block diagram of the system is shown in Figure 1.
The model has 16 CAN nodes, two fully instantiated CAN networks, 12 abstracted CAN networks, a CAN-to-CAN bridge, a gateway, six Ethernet switches, and three FlexRay networks. This entire network represents 50 network nodes and 14 network segments. The two CAN network segments are connected via the CAN-to-CAN bridge, while the Ethernet backbone is connected to the CAN bridge via a secure gateway. In this model, the gateway node provides the routing and processing time for the packet content inspection. We plan to add more detailed gateway operation in the next phase of this project.
Traffic for the CAN bus is defined in a standard CANdb format within VisualSim. The Ethernet backbone has traffic originating from FlexRay and CAN networks for existing body and safety systems, and radar and camera traffic from Ethernet segments. The CAN and FlexRay networks connected to the Ethernet backbone are modeled as complex statistical workloads. The bridge passes all messages to the networks that are connected, and gateway nodes route messages between CAN segments and Ethernet segments. Ethernet segments use the UDP protocol to handle all standard traffic with support for unicast, broadcast and multicast, whereas TCP is restricted to diagnostics traffic only. The Ethernet, CAN and FlexRay networks are connected in a star topology to the Ethernet backbone switch. A routing table defines the distance between nodes, communication mechanism (duplex/simplex) and communication speed.
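A routing table with the three fields described above can be sketched as a lookup keyed by node pair. The entries below are hypothetical placeholders (the node names, distances, speeds, and the propagation constant are assumptions, not values from the model):

```python
# Hypothetical routing-table entries with the three fields the model defines:
# distance between nodes, duplex/simplex mode, and communication speed.
ROUTES = {
    ("CAN1_ECU1", "Gateway"): {"distance_m": 3.0, "mode": "duplex", "speed_mbps": 1.0},
    ("Gateway", "Backbone"):  {"distance_m": 1.5, "mode": "duplex", "speed_mbps": 100.0},
}

PROPAGATION_S_PER_M = 5e-9  # ~5 ns per metre on copper, an assumed constant

def link_delay_s(src, dst, frame_bytes):
    """Transmission plus propagation delay for one frame on one hop."""
    r = ROUTES[(src, dst)]
    transmission = frame_bytes * 8 / (r["speed_mbps"] * 1e6)
    return transmission + r["distance_m"] * PROPAGATION_S_PER_M

print(link_delay_s("CAN1_ECU1", "Gateway", 8))  # 8-byte frame on the 1 Mbit/s hop
```

At these scales the transmission term dominates the propagation term by several orders of magnitude, which is why link speed and frame size drive the latency results discussed later.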
Table 1 shows the configuration table for the CAN bus, which can be edited by the user.
There is a unique traffic profile, CANdb, and sensor profile for each network segment. All CAN node messages are sent across both CAN network segments to all CAN nodes. The messages from CAN segments are sent to Ethernet through the secure gateway that enables communication between different network segments.
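For the CAN segments, the time each frame occupies the bus can be estimated from the payload length and the configured bit rate. The function below uses the standard worst-case bound for an 11-bit-identifier frame from the CAN timing-analysis literature, including worst-case bit stuffing; it is a textbook formula, not something extracted from the VisualSim model.

```python
def can_frame_time_s(payload_bytes, bitrate_bps):
    """Worst-case time to transmit one standard (11-bit ID) CAN frame.

    47 fixed bits plus 8 bits per payload byte, plus worst-case stuff bits
    over the 34 + 8n stuffable bits (classic worst-case bound).
    """
    stuff_bits = (34 + 8 * payload_bytes - 1) // 4
    frame_bits = 47 + 8 * payload_bytes + stuff_bits
    return frame_bits / bitrate_bps

# A full 8-byte frame at 500 kbit/s occupies the bus for 270 µs worst case.
print(round(can_frame_time_s(8, 500_000) * 1e6, 3))  # 270.0
```

Summing these per-frame times over a CANdb-style message set gives a quick sanity check of bus utilization before running the full simulation.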
The internal details of the Ethernet backbone are shown in Figure 2 .
The local segments connected to the Ethernet backbone are Diagnostics, Wireless, Infotainment, Body, Chassis, Power Train and Active Safety. These devices communicate with each other in a star topology. In this model, TCP diagnostics traffic is attached to the Switch7 Ethernet switch and has the highest priority. All other network messages are sent across the network using the multicast UDP protocol.

Analysis and results
The model was developed in 80 man-hours using standard libraries of VisualSim Architect. The simulation was run on a 2.6 GHz Microsoft Windows 8.1 platform with 4 GB RAM, simulating 500.0 msec of real time. VisualSim took 24.9 seconds of wall-clock time to finish a simulation.
Analysis was conducted for different network settings, various workload patterns, increasing and decreasing the number of network nodes, adding new segments, and network link failure. The generated reports are: latency across the CAN and Ethernet network segments, utilization, and network route trace. The model was used to optimize the load allocation, trade off latency vs. network topology, and minimize message latencies.
The model showed that diagnostic traffic, which has higher priority and is made up of larger data packets, was contributing to higher CAN-to-Ethernet latency, as shown in Figure 3 and Figure 4. The diagnostic traffic is periodic and causes the latency of all traffic to spike, producing unpredictable system responses and excessive message delays. A significant portion of the exploration was devoted to modifying the scheduling of the diagnostic traffic vs. the sensor data that had to go across the Ethernet backbone.
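The mechanism behind these spikes is strict-priority dequeuing at the switch port: a diagnostic frame is always served before any queued sensor traffic, however long the sensor frames have waited. A minimal sketch, with assumed class names and priority values:

```python
import heapq

PRIORITY = {"tcp_diag": 0, "udp": 1}  # lower value is served first (assumed mapping)

class SwitchPort:
    """Strict-priority output queue: diagnostics preempt queued UDP frames."""
    def __init__(self):
        self._q = []
        self._seq = 0  # tie-breaker keeps FIFO order within a class

    def enqueue(self, traffic_class, frame):
        heapq.heappush(self._q, (PRIORITY[traffic_class], self._seq, frame))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._q)[2]

port = SwitchPort()
port.enqueue("udp", "camera_frame")       # queued first...
port.enqueue("tcp_diag", "diag_request")
print(port.dequeue())  # ...but "diag_request" is served before it
```

Shifting the phase of the periodic diagnostic bursts relative to the sensor traffic, as explored above, changes when these preemptions land and therefore how predictable the background latency is.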
At the current traffic rates, the rescheduling allowed the latency to be more predictable. As we increased the diagnostic traffic, the number of latency peaks at Switch 7 increased significantly compared to the latency at other nodes. The TCP_IP latency at Switch 7 shows the effect of increased diagnostic traffic.
Figure 4: Diagnostic traffic Max event burst set to 10.
Infotainment traffic is made up of larger packet sizes. We analyzed the impact of running much higher data throughputs on the CAN-to-Ethernet communication. The latency is shown in Figure 5 and Figure 6.
Figure 6: Infotainment (video) burst period set to 2 ms
One additional analysis examined UDP latency at the different switches. The plots let the user analyze the latency across the various switches, monitor the load, decide on the distribution of traffic, and assess the impact of data sent from various sensors on the network. Figure 7 is a UDP_IP latency graph through the various switches.
The model was set up to shut down ECUs based on the task activity and dependency logic. Table 2 is the node activity report, with the percentage of utilization, the communication speed in Mbps, and the Min_Bytes, Mean_Bytes, and Max_Bytes transmitted across each link.
In addition to the standard outputs, we inserted probes to obtain textual reports of system activity for various system configurations. By analyzing these reports we could identify possible bottlenecks in the topology and opportunities for reduced latency and lower power consumption. Table 3 demonstrates that an increase in traffic at different nodes influences the average latency more than the minimum and maximum latencies.
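The pattern in Table 3, traffic growth moving the average latency while leaving the extremes alone, can be seen in miniature. The latency samples below are made up for demonstration and are not measurements from the model:

```python
import statistics

# Illustrative latency samples in ms (not measured values from the model).
baseline = [0.9, 1.0, 1.1, 1.0, 6.0]  # mostly idle, one diagnostic spike
loaded   = [0.9, 1.4, 2.9, 3.8, 6.0]  # heavier traffic, same extremes

for name, samples in (("baseline", baseline), ("loaded", loaded)):
    print(name, min(samples), round(statistics.mean(samples), 2), max(samples))
```

The best case (an uncontended frame) and worst case (a frame stuck behind the largest burst) are unchanged, but the mean shifts because more frames experience intermediate queuing; this is why averaging probes, not just min/max bounds, mattered in our analysis.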
We extended our analysis by introducing additional network segments to the Ethernet backbone and evaluating the influence on the overall network latency. Initially we added a segment to handle the sensors and ECUs associated with the collision detection subsystem. This segment generated traffic with a burst period of 300 ms on the backbone. We noticed that the latency curve for both TCP and UDP traffic increased exponentially with time. This behavior is due to increased buffering at the Switch 6 and CAN1_ECU2 nodes. The latency graphs are shown in Figure 8.
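The runaway buffering at Switch 6 follows the classic queueing pattern: once the offered load exceeds a link's service rate, the backlog, and with it the latency, grows without bound over time. A fluid-approximation sketch with assumed frame rates (a simplification of the saturated-queue behavior, not the model's exact dynamics):

```python
# When arrivals exceed the service rate, buffered backlog grows every
# interval instead of draining -- the behavior observed at Switch 6.
# The frame rates below are illustrative assumptions.
def backlog_over_time(arrival_rate_fps, service_rate_fps, seconds):
    """Frames buffered at each whole second, fluid approximation."""
    backlog = []
    queued = 0.0
    for _ in range(seconds):
        queued = max(0.0, queued + arrival_rate_fps - service_rate_fps)
        backlog.append(queued)
    return backlog

print(backlog_over_time(120, 100, 5))  # [20.0, 40.0, 60.0, 80.0, 100.0]
```

Since each queued frame adds its service time to the delay of everything behind it, a growing backlog translates directly into the latency curves of Figure 8.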
Modeling and simulating the secure multi-protocol network helped us identify bottlenecks that were difficult to find using physical prototypes and analytical methods. The analysis enabled us to design a better network topology, thus reducing the overall bill of materials and possibly shortening the product development schedule. A major finding was that increasing the traffic rate did not affect the best- and worst-case latency, but had a significant impact on the average latency. This impact was sometimes greater than 200%. In our design, the average latency is the most probable latency point and it is the major design consideration. The use of the standard libraries provided by Mirabilis Design's VisualSim Architect accelerated our modeling and analysis effort.
Apart from the average latency impact, we found that the periodic rate and the number of diagnostic messages have a huge impact on the CAN-to-Ethernet latency. The links from the backbone star node to the segment switches are utilized at approximately 20%, while the links from the network segments to the star node are under-utilized. This means that most of the network packets tend to be local, and connecting additional network segments to the backbone will not increase the overall end-to-end latency. This also indicates that the processing overhead for the secure gateway will not be a performance detriment.
Ranjith K R is an EDA application engineer, specializing in VisualSim system-level products at Mirabilis Design Inc., Bengaluru, India. He has many years of expertise in system-level modeling, simulation and development. Mr. Ranjith has been involved in various system-level model development projects with the defense sector, aerospace corporations and multinational semiconductor companies in India. He has completed an MS in Electronics from Kuvempu University and a Diploma in FPGA design and verification.
Deepak Shankar is the founder and CEO of Mirabilis Design. Prior to Mirabilis Design, Shankar was VP of Business Development at MemCall, a fabless semiconductor company, and SpinCircuit, a supply chain joint venture of HP, Cadence and Flextronics. He has been a speaker at many IEEE conferences and industry events. Deepak Shankar has published a number of papers in the field of performance analysis and electronic product architecture. He spent many years in product marketing and application engineering at Cadence Design Systems. He has an MBA from the Haas School of Business, University of California, Berkeley, an MS in Electrical Engineering from Clemson University and a BS in Electronics and Communication from Coimbatore Institute of Technology.