Automatic Derivation of Application-Aware Error and Attack Detectors


  • Karthik Pattabiraman
  • William Healey
  • Peter Klemperer
  • Paul Dabrowski
  • Shelley Chen
  • Zbigniew Kalbarczyk
  • Ravi K. Iyer


Traditionally, system security has meant access control and cryptography support, but the Internet’s phenomenal growth has led to the large-scale adoption of networked computer systems for a diverse cross-section of applications with highly varying requirements. In this all-pervasive computing environment, the need for security and reliability has expanded from a few expensive systems to a basic computing necessity. This new paradigm has important consequences:

  • Networked systems stretch the boundary of fault models, from an application or node failure to failures that could propagate and affect other components, subsystems, and systems.
  • Attackers can exploit vulnerabilities in operating systems and applications with relative ease.
  • As computing systems become more ubiquitous, security and reliability techniques must be cheaper and more focused on application characteristics.

Users want their applications to continue to operate without interruption, despite attacks and failures, but as applications become more complex and diverse, this task gets harder. The traditional one-size-fits-all approach to reliability and security is too expensive and unacceptable from an end-user’s perspective. In contrast, application-aware approaches protect applications at the source, making it cheaper and more efficient to provide specialized checks that are geared toward application characteristics and can be made provably correct. Further, since hardware is becoming cheaper, it is desirable for the underlying hardware to configure itself to provide the best application support.

Hardware-based techniques have the following advantages over software-only techniques:

  1. low performance overhead, because the hardware can perform checking in parallel with the application’s execution
  2. low detection latency, because the checks can detect errors close to their points of occurrence
  3. assurance that the underlying hardware itself has not been compromised

Our goal is to provide an automatic framework to analyze applications and extract their reliability and security properties. These properties can then be converted into runtime checks and programmed directly into the hardware. By leveraging application properties in hardware, our checks will selectively detect errors and attacks that matter to the application with low performance overheads.


Our hardware-based technique uses knowledge of an application’s execution characteristics to devise application-specific detectors and assertions for low-latency detection of data corruption. Figure 1 illustrates a framework for automated (or semi-automated) derivation of security and reliability checks. The framework uses compiler-based static analysis to uncover relationships, or invariants, that hold in the original program, so that the hardware can check them at runtime to detect security and reliability violations. The first step in the static analysis is to identify critical variables and locations in the program: those that, if corrupted, are highly likely to lead to failures or security breaches. For reliability, the compiler identifies critical variables by applying heuristics to the program’s dynamic dependence graph. For security, programmers use their knowledge of application semantics to identify critical variables; for example, the variable that holds the system password for authentication.
Figure 1: Steps in the derivation of error and attack detectors


Derivation of Error Detectors
The derivation of error detectors involves identifying program variables that are sensitive to random data errors and selectively protecting the computation of those variables. Sensitive variables are identified by building the dynamic dependence graph of the program and finding variables with high fanouts in that graph, because errors in high-fanout variables are more likely to propagate through the program and cause failure. By placing error detectors at these high-fanout variables, the propagation of errors can be arrested and preemptive recovery can be initiated.
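The fanout heuristic can be illustrated with a minimal Python sketch. This is a simplification under an assumed trace representation: each entry pairs a defined value with the values it was computed from, and fanout is counted per variable name, whereas the actual analysis operates on a dynamic dependence graph whose nodes are dynamic value instances.

```python
from collections import defaultdict

def high_fanout_variables(trace, top_n=3):
    """Count, for each value, how many later values are computed
    from it (its fanout in the dynamic dependence graph) and return
    the top-n candidates for detector placement."""
    fanout = defaultdict(int)
    for dest, sources in trace:   # one entry per executed instruction
        for src in sources:
            fanout[src] += 1
    return sorted(fanout, key=fanout.get, reverse=True)[:top_n]

# Hypothetical dynamic trace: (defined value, values it depends on).
trace = [
    ("t1", ["a"]), ("t2", ["a", "b"]), ("t3", ["a", "t1"]),
    ("t4", ["t2"]), ("t5", ["t3", "t4"]),
]
print(high_fanout_variables(trace, top_n=1))  # ['a'] -- 'a' feeds three values
```

An error in `a` here can corrupt `t1`, `t2`, `t3`, `t4`, and `t5`, so a detector on `a` arrests propagation early; an error in `t4` affects only `t5`.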

Once the sensitive variables have been identified, the compiler computes the backward program slice of each variable for each control path in the program. The slice includes only those instructions that compute the value of the sensitive variable along a specific control path. As a result, it can be optimized much more aggressively than the rest of the program to yield a minimal symbolic expression called the checking expression. The compiler adds instrumentation to track the control path executed at runtime and choose the checking expression corresponding to the executed path.
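The slice-to-checking-expression step can be sketched as follows. The program, variable names, and the lambda table are hypothetical; in the actual system the compiler derives the per-path expressions by optimizing the backward slice, and path tracking is done in hardware rather than with an explicit `path` list.

```python
def program(a, b, flag):
    """Toy program with one sensitive variable z, computed
    differently along two control paths."""
    path = []                 # stand-in for hardware path tracking
    if flag:
        path.append("T")
        x = a + b
    else:
        path.append("F")
        x = a - b
    z = 2 * x                 # sensitive (high-fanout) variable
    return z, path

# Compiler-derived checking expressions: the backward slice of z
# along each control path, collapsed to a closed-form expression.
checking_expr = {
    ("T",): lambda a, b: 2 * (a + b),
    ("F",): lambda a, b: 2 * (a - b),
}

def check(z, path, a, b):
    """Recompute z from the expression for the executed path;
    a mismatch means the computation of z was corrupted."""
    return z == checking_expr[tuple(path)](a, b)

z, path = program(3, 4, True)
assert check(z, path, 3, 4)           # error-free run passes
assert not check(z + 1, path, 3, 4)   # simulated data corruption is caught
```

Because each expression covers only one control path, it is far smaller than the program itself, which is what keeps the checking overhead low.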

The compiler analysis has been implemented as a series of passes in the LLVM optimizing compiler developed at the University of Illinois at Urbana-Champaign. Runtime support for path tracking is implemented in hardware, and check execution is implemented in software. Coverage measurements (performed using fault injection) indicate that the derived checking expressions detect about 77% of data errors in the program. Check execution incurs an average performance overhead of 33% (in software) across a wide range of applications, which compares favorably with full-duplication approaches and their performance overheads of 60–100%.

The application-aware error detectors are implemented as a module in the reliability and security engine (RSE), called the static detector module (SDM). The RSE provides access to the pipeline of an FPGA-synthesized soft-core microprocessor; in this case, a superscalar DLX, which is a MIPS variant. The hardware platform includes a Nallatech BenONE PCI FPGA card that hosts Xilinx Virtex-II Pro FPGAs. The designs are synthesized using the Xilinx ISE 7.1 synthesis toolflow and debugged using the Xilinx ChipScope Pro logic-analysis software. The PCI platform provides communication between the host processor and the DLX augmented with the RSE coprocessor.

The static detector module (SDM) provides facilities for tracking the control flow of a program at runtime and invoking specific hardware-defined checks according to the current program state. We have implemented the path tracking for a simple Bubblesort program in the SDM. For this program, we found that the area overhead incurred was 2% and the performance overhead was 12%. We are working on implementing larger programs with the SDM and measuring their overheads.

Derivation of Attack Detectors
The derivation of attack detectors is based on the observation that attackers typically exploit the gap between the source-level semantics of a C/C++ program and its execution in order to subvert the values of certain security-critical variables in the program. An example of a security-critical variable is the system password in a Secure Shell (SSH) program, which may be overwritten by the attacker in order to gain unauthorized entry into the system. Our technique enforces the source-level information-flow properties of critical variables at runtime, thereby guaranteeing the integrity of critical variables against memory corruption attacks (e.g., buffer overflows) and hardware-based attacks (e.g., smart-card-based attacks).

The information-flow properties are extracted by computing the backward slices of the critical variables (using compiler-analysis techniques) and converted into dependency sequences that are encoded in the form of a signature. These signatures are then checked at runtime using a combination of programmable hardware and software to ensure that the runtime sequence of writes to critical variables matches the compiler-derived signature. A mismatch indicates an attack, and the user can be alerted. A key advantage of the technique is that the integrity of critical variables is preserved even when other parts of the program are compromised by the attacker. This is crucial for achieving fast recovery from the attack.
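The signature-checking idea can be sketched in a few lines of Python. The write-site labels, the hash-based signature encoding, and the `SignatureChecker` class are all illustrative assumptions; the real system derives the signature from the compiler-computed backward slice and accumulates it in hardware as the program executes.

```python
import hashlib

def fold(writes):
    """Fold a sequence of write sites into a fixed-size signature
    (a hash here; the hardware uses its own compact encoding)."""
    return hashlib.sha256("|".join(writes).encode()).hexdigest()

# Compiler-derived signature: the only legitimate sequence of writes
# to the critical variable (labels stand in for instruction addresses).
EXPECTED = fold(["read_config", "decrypt", "store_password"])

class SignatureChecker:
    def __init__(self):
        self.writes = []
    def record(self, site):      # hardware: track each write in parallel
        self.writes.append(site)
    def verify(self):            # software: compare at use of the variable
        if fold(self.writes) != EXPECTED:
            raise RuntimeError("information-flow signature mismatch: attack")

ok = SignatureChecker()
for site in ["read_config", "decrypt", "store_password"]:
    ok.record(site)
ok.verify()                      # legitimate flow passes silently

bad = SignatureChecker()
for site in ["read_config", "strcpy_overflow", "store_password"]:
    bad.record(site)             # e.g., a buffer overflow writes the variable
try:
    bad.verify()
except RuntimeError as e:
    print(e)
```

Note that the check depends only on the writes to the critical variable, which is why its integrity holds even if the attacker corrupts unrelated parts of the program.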

We have implemented the detector derivation technique in the IMPACT compiler, developed at the University of Illinois. We have also built a prototype hardware implementation as a module in the RSE. The hardware module provides low-latency tracking of the information-flow signature in parallel with the execution of instructions in the main processor. When a critical variable is written, the signature accumulated in hardware is compared with the compiler-derived signature in software. In case of a mismatch, an exception is raised and the program is stopped.

For building the hardware prototype, we are using the Gaisler Research Leon3 open-source processor augmented with the RSE. The hardware design has been synthesized for a Xilinx Virtex-II Pro 30 FPGA using Synplify Pro v8.1, with place and route performed by Xilinx ISE 9.1. The Leon3 processor includes 16-KB data and instruction caches and MUL/DIV units.

We have evaluated this technique using popular open-source applications such as OpenSSH, WuFTP and NullHttpd. Our initial measurements indicate a constant runtime performance overhead of 8% or less for the vast majority of the applications and critical variables considered. The software overhead of checking is negligible when considering the total application execution time. The performance overhead is dominated by the increase in clock cycle time of the processor as a result of interfacing with the RSE and the hardware module.

The hardware area overhead is 30.5% for the FPGA implementation, and about 7.5% for an equivalent application-specific integrated circuit (ASIC) implementation. Work is underway to improve the area overhead of the FPGA implementation, as well as to implement the signature-checking scheme on larger programs.

For more information, please read the following paper: “Towards Application-Aware Security and Reliability.”
For more information about the hardware framework on which we built the detectors, please read about our other project, the Reliability and Security Engine.
A slideset presented at the latest GSRC annual review (2007) is also available: “Automated Derivation of Application-Aware Error and Attack Detectors.”