TACL Seminar Spring '05
Revision as of 15:32, 15 August 2006
{| border="1"
! Date !! Speaker !! Title !! Abstract
|-
| Feb 9 || Dan Wang || Opportunities and Challenges in End-to-End Verification of Software Systems || Abstract
|-
| Feb 10 || Dinghao Wu || Interfacing Compilers, Proof Checkers, and Proofs for Foundational Proof-Carrying Code || Abstract
|-
| Feb 16 || Gang Tan || Structured Verification of Unstructured Machine Code || Abstract
|-
| Feb 23 || NJ PLS || ||
|-
| Mar 2 || Xinming Ou || MulVAL: An Automatic Network Vulnerability Analyzer || Abstract
|-
| Mar 9 || George Reis || SWIFT: Software Implemented Fault Tolerance || Abstract
|-
| Mar 16 || Bolei Guo || Low-Level Pointer Analysis || Abstract
|-
| Mar 23 || Dominic Duggan - Stevens || ||
|-
| Mar 28 || Neal Glew - Intel Research || Inlining, Dynamic Loading and Type Safety || Abstract
|-
| Apr 6 || Neil Vachharajani || Opportunities for Coarse-Grained Parallel Execution || Abstract
|-
| Apr 13 || Yitzhak Mandelbaum || A Calculus for Specifying Ad Hoc Data Formats || Abstract
|-
| Apr 22 || IBM PL Day || ||
|-
| Apr 27 || Rob Simmons || Metatheoretic Approaches to Formalizing High-level Languages || Abstract
|-
| May 25 || Aarti Gupta - NEC || Verifying C Programs by Model Checking || Abstract
|}
Opportunities and Challenges in End-to-End Verification of Software Systems
I'll start out by taking a brief tour that covers my past experience on various projects and how those experiences shape my current view of the importance and practicality of end-to-end system verification. My basic thesis is that end-to-end system verification is theoretically and technically possible, but often *economically* impractical. More importantly, the marketplace for verified software systems is likely to significantly expand within this century.
I will also discuss ongoing research on developing simpler and more flexible semantic techniques for the verification of Foundational Proof-Carrying Code systems. In particular, I will discuss how to build proofs of type soundness using higher-order abstract encodings of syntax without the need for meta-logical approaches to soundness.
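The key trick behind higher-order abstract syntax is to reuse the metalanguage's own binders for the object language's binders, so substitution comes for free. The toy Python sketch below is illustrative only: the actual FPCC encodings live in a logical framework, not Python, and all names here are invented.

```python
# Hypothetical HOAS sketch: an object-level lambda's body is a
# metalanguage (Python) function, so beta-reduction is just function
# application -- no variable capture or substitution machinery needed.

class Lam:
    """Object-level lambda; body maps a term to a term."""
    def __init__(self, body):
        self.body = body

class App:
    def __init__(self, fn, arg):
        self.fn, self.arg = fn, arg

class Const:
    def __init__(self, value):
        self.value = value

def eval_term(t):
    """Call-by-value evaluation; substitution is Python application."""
    if isinstance(t, App):
        fn, arg = eval_term(t.fn), eval_term(t.arg)
        assert isinstance(fn, Lam), "applying a non-function"
        return eval_term(fn.body(arg))   # substitution for free
    return t                             # Lam and Const are values

# (\x. \y. x) applied to 1 and 2 evaluates to 1
k = Lam(lambda x: Lam(lambda y: x))
result = eval_term(App(App(k, Const(1)), Const(2)))
```

Because binding is delegated to Python, the encoding never mentions variable names at all, which is what makes soundness arguments over such encodings delicate.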
Interfacing Compilers, Proof Checkers, and Proofs for Foundational Proof-Carrying Code
Proof-Carrying Code (PCC) is a general framework for the mechanical verification of safety properties of machine-language programs. It allows a code producer to provide an executable program to a code consumer, along with a machine-checkable proof of safety such that the code consumer can check the proof before running the program. A weakness of previous PCC systems is that the proof-checking infrastructure is based on some complicated logic that is not necessarily sound.
Foundational Proof-Carrying Code (FPCC) aims to build the safety proof on the simple and trustworthy foundations of mathematical logic. There are three major components in an FPCC system: a compiler, a proof checker, and the safety proof of an input machine-language program. The compiler should produce machine code accompanied by a proof of safety. The proof checker verifies (and sometimes also discovers) the safety proof before the program is executed. We have built an end-to-end FPCC system. Our prototype system compiles Core ML programs to SPARC code, accompanied by programs in a low-level typed assembly language; these typed assembly programs serve as proof witnesses of the safety of the corresponding SPARC machine code. Our Flit proof checker includes a simple logic programming engine that enables efficient proof-checking.
In this talk, I'll present the design of the interfaces between these components and show how to build an end-to-end FPCC system. We have reached two conclusions: (1) a type system (a low-level typed assembly language) should be designed to check machine code; and (2) proof-checking should be factored into two stages, namely typechecking of the input machine code and verification of the soundness of the type system. Since a type checker can be efficiently interpreted as a logic program, Flit's logic programming engine enables efficient proof-checking.
I'll also briefly present my research on software model checking and program verification.
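The claim that typechecking machine code is syntax-directed enough to run as a logic program can be illustrated with a toy checker: each rule maps the register-file typing before an instruction to the typing after it. The instruction set, types, and rules below are invented for illustration and are not Flit's or the prototype compiler's actual rules.

```python
# Hypothetical sketch: syntax-directed typechecking of a straight-line
# fragment of a toy typed assembly language. Each rule consumes one
# instruction and a register typing, and produces the next typing --
# exactly the clause-per-rule shape a logic-programming engine exploits.

def step(regs, instr):
    """Return the register typing after one instruction, or raise."""
    op = instr[0]
    if op == "movi":                # movi rd, imm : rd gets type int
        _, rd, _ = instr
        return {**regs, rd: "int"}
    if op == "add":                 # add rd, rs   : both must be int
        _, rd, rs = instr
        if regs.get(rd) != "int" or regs.get(rs) != "int":
            raise TypeError(f"add on non-int register in {instr}")
        return regs
    raise TypeError(f"unknown instruction {instr}")

def check_block(regs, block):
    """Thread the typing through a block of instructions."""
    for instr in block:
        regs = step(regs, instr)
    return regs

final = check_block({}, [("movi", "r1", 5),
                         ("movi", "r2", 7),
                         ("add", "r1", "r2")])
```

In the two-stage factoring described above, a checker like this is stage one; stage two is a machine-checked proof that its rules are sound.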
Structured Verification of Unstructured Machine Code
Safety properties of machine code have attracted growing interest in both academia and industry in recent years. Traditional Hoare Logic has been tailored to verify properties of high-level languages with structured control flow, but it is not suitable for verifying properties of machine code, because machine code contains goto statements with unrestricted destinations (i.e., unstructured control flow). A mathematical framework that modularly verifies properties of machine code is necessary for projects such as Foundational Proof-Carrying Code (FPCC), which produces type-safety proofs for machine code.
We present a program logic, L_c, which modularly reasons about machine-code fragments. Unlike previous program logics, the basic reasoning units in L_c are multiple-entry and multiple-exit code fragments. L_c provides composition rules to combine code fragments together and to eliminate intermediate entries/exits in the combined fragment. L_c is not only useful for reasoning about properties of machine code with unstructured control flow, but also useful for deriving rules for common control-flow structures such as while-loops and repeat-until-loops. We also present a semantics for L_c and prove that L_c is both sound and complete (under reasonable assumptions). As an application to FPCC, L_c has been developed on top of the SPARC machine language: the semantics of L_c is encoded in higher-order logic; its rules are proved as lemmas with machine-checked proofs; typing rules for machine instructions in FPCC are then derived from rules in L_c. Thus with high confidence, we can check that the output of our certifying compiler is safe.
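To give a flavour of the composition rules, here is a toy Python model in which assertions are simplified to sets of allowed machine states (so logical implication becomes the subset test). The fragment representation is invented and far simpler than L_c itself.

```python
# Hypothetical sketch of L_c-style composition: a fragment has multiple
# labelled entries and exits, each annotated with an assertion (modeled
# here as a frozenset of allowed states). Linking an exit of one
# fragment to an entry of another is sound when the exit assertion
# implies (is a subset of) the entry assertion; the shared intermediate
# label is then eliminated from the combined fragment.

def compose(frag1, frag2, label):
    """Link frag1's exit `label` to frag2's entry `label`."""
    exit_assert = frag1["exits"][label]
    entry_assert = frag2["entries"][label]
    if not exit_assert <= entry_assert:          # implication check
        raise ValueError(f"assertion at {label} does not hold")
    return {
        # outer entries of both fragments survive; `label` is internal now
        "entries": {**frag1["entries"],
                    **{k: v for k, v in frag2["entries"].items()
                       if k != label}},
        "exits": {**{k: v for k, v in frag1["exits"].items()
                     if k != label},
                  **frag2["exits"]},
    }

f1 = {"entries": {"start": frozenset({0, 1})},
      "exits":   {"mid":   frozenset({1})}}
f2 = {"entries": {"mid":   frozenset({1, 2})},
      "exits":   {"end":   frozenset({2})}}
g = compose(f1, f2, "mid")
```

The multiple-entry/multiple-exit shape is what lets the same rule handle arbitrary gotos, where a Hoare-style single-entry/single-exit rule cannot.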
MulVAL: An Automatic Network Vulnerability Analyzer
To determine the security impact software vulnerabilities have on a particular network, one must consider interactions among multiple components of the operating systems and multiple hosts. For a vulnerability analysis tool to be useful in practice, two problems must be addressed. First, the tool must automatically integrate formal vulnerability specifications, which come from bug-reporting communities, into the reasoning process. Second, the reasoning system should be able to scale to networks with thousands of hosts.
In this talk I will show how we achieved these two goals in MulVAL, an end-to-end framework and reasoning system that conducts multihost, multistage vulnerability analysis on a network. MulVAL adopts Datalog as the modeling language for the elements in the analysis (bug specification, configuration description, operating-system permission and privilege model, etc.). We leverage existing vulnerability-database and scanning tools by converting their output into Datalog tuples and feeding them into the MulVAL reasoning engine, which consists of Datalog rules that capture common attack scenarios and security-relevant operating system semantics. Once all necessary information is collected, the analysis can be performed in seconds for networks with thousands of machines.
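A miniature version of this pipeline can be sketched in Python: scanner output becomes a set of Datalog-style tuples, and a naive bottom-up fixpoint applies two simplified attack rules (remote exploit of a reachable vulnerable host, and pivoting through a compromised host). The rule set and predicate names are toy stand-ins, not MulVAL's actual rules.

```python
# Hypothetical sketch of MulVAL-style multistage reasoning over
# Datalog tuples, evaluated by a naive bottom-up fixpoint.

def derive(facts):
    """Apply two hard-coded attack rules until no new facts appear."""
    facts = set(facts)
    while True:
        new = set()
        for f in facts:
            # Rule 1: the attacker can execute code on a reachable,
            # vulnerable host.
            if f[0] == "reachable" and f[1] == "attacker":
                if ("vulnerable", f[2]) in facts:
                    new.add(("execCode", f[2]))
            # Rule 2: code execution on H1 makes every host H1 connects
            # to reachable -- the multistage "pivot" step.
            if f[0] == "execCode":
                for g in facts:
                    if g[0] == "connected" and g[1] == f[1]:
                        new.add(("reachable", "attacker", g[2]))
        if new <= facts:
            return facts
        facts |= new

facts = {("reachable", "attacker", "webserver"),
         ("vulnerable", "webserver"),
         ("connected", "webserver", "database"),
         ("vulnerable", "database")}
result = derive(facts)
```

Even this toy run derives a two-stage attack: compromising the web server makes the database reachable, and hence compromisable. A real Datalog engine evaluates such rule sets far more efficiently than this naive loop.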
SWIFT: Software Implemented Fault Tolerance
To improve performance and reduce power consumption, processor designers employ advances that shrink feature sizes, lower voltage levels, reduce noise margins, and increase clock rates. These advances, however, also make processors more susceptible to transient faults that can affect program correctness. To mitigate this increasing problem, designers build redundancy into systems to the degree that the soft-error budget will allow.
While reliable systems typically employ hardware techniques to address soft errors, software techniques can provide a lower-cost and more flexible alternative. To make this alternative more attractive, this paper presents a new software fault tolerance technique, called SWIFT, for detecting transient errors. Like other single-threaded software fault tolerance techniques, SWIFT efficiently manages redundancy by reclaiming unused instruction-level resources present during the execution of most programs. SWIFT, however, eliminates the need to double the memory requirement by acknowledging the use of ECC in caches and memory. SWIFT also provides a higher level of protection with enhanced checking of the program counter (PC) at no performance cost. In addition, this enhanced PC checking makes most code inserted to detect faults in prior methods unnecessary, significantly enhancing performance.
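The core SWIFT idea, duplicated computation with a comparison just before a value leaves the sphere of replication, can be sketched abstractly. Real SWIFT duplicates instructions in the compiler's code stream using spare instruction-level resources; the Python below is only a conceptual model with invented names.

```python
# Hypothetical sketch: every value is computed twice in independent
# "register" streams, and the two copies are compared immediately
# before a store -- the point where a corrupted value would escape
# the sphere of replication into ECC-protected memory.

def swift_store(memory, addr, value, shadow_value):
    """Commit to memory only if the redundant copies agree."""
    if value != shadow_value:
        raise RuntimeError("transient fault detected before store")
    memory[addr] = value

def compute(a, b):
    # primary and shadow computation streams
    v = a + b
    v_shadow = a + b            # duplicated instruction
    return v, v_shadow

mem = {}
v, vs = compute(2, 3)
swift_store(mem, 0x10, v, vs)
```

Memory itself is not duplicated because, as the abstract notes, ECC in caches and memory already protects stored values; only the computation between loads and stores needs software redundancy.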
Low-Level Pointer Analysis
Pointer analysis is traditionally performed once, early in the compilation process, upon an intermediate representation (IR) with source-code semantics. However, performing pointer analysis only once at this level imposes a phase-ordering constraint, causing alias information to become stale after subsequent code transformations. Moreover, high-level pointer analysis cannot be used at link time or run time, where the source code is unavailable.
This work advocates performing pointer analysis on a low-level intermediate representation. We present the first context-sensitive and partially flow-sensitive points-to analysis designed to operate at the assembly level. As we will demonstrate, low-level pointer analysis can be as accurate as high-level analysis. Additionally, our low-level pointer analysis enables a quantitative comparison of propagating high-level pointer analysis results through subsequent code transformations versus recomputing them at the low level. We show that, for C programs, the former practice is considerably less accurate than the latter.
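For readers unfamiliar with points-to analysis, here is a minimal flow-insensitive (Andersen-style) fixpoint over four toy low-level statement forms. The analysis described in the talk is context-sensitive and partially flow-sensitive, so this is only a much-simplified sketch with invented statement syntax.

```python
# Hypothetical sketch: flow-insensitive points-to analysis.
# Statements: ("addr", a, b)  a = &b
#             ("copy", a, b)  a = b
#             ("load", a, b)  a = *b
#             ("store", a, b) *a = b
from collections import defaultdict

def points_to(instrs):
    """Iterate transfer rules over all statements until a fixpoint."""
    pts = defaultdict(set)
    changed = True
    while changed:
        changed = False
        def add(var, locs):
            nonlocal changed
            if not locs <= pts[var]:
                pts[var] |= locs
                changed = True
        for op, a, b in instrs:
            if op == "addr":                 # a = &b
                add(a, {b})
            elif op == "copy":               # a = b
                add(a, pts[b])
            elif op == "load":               # a = *b
                for v in list(pts[b]):
                    add(a, pts[v])
            elif op == "store":              # *a = b
                for v in list(pts[a]):
                    add(v, pts[b])
    return pts

# p = &x; q = p; t = &y; *q = t   =>   x may point to y
result = points_to([("addr", "p", "x"), ("copy", "q", "p"),
                    ("addr", "t", "y"), ("store", "q", "t")])
```

At the assembly level, "variables" are registers and stack slots rather than named locals, which is precisely what makes the low-level setting harder and what the talk's analysis addresses.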
Inlining, Dynamic Loading and Type Safety
Method inlining is an optimisation that can be invalidated by later class loading. A program analysis based on the currently loaded classes might determine that a method call has a unique target, but later class loading could add targets. If a compiler speculatively inlines methods based on current information, then it will have to undo the inlining when later classes invalidate the assumptions. The problem with undoing inlining is that the inlined code might be executing at the time of the undo, requiring a complicated, on-the-fly update of the program state. Previous work presented techniques for dealing with invalidation, including on-stack replacement, preexistence analysis, extant analysis, and code patching. Until now, it has been an open question whether such operations can be done in a type-safe manner, and no formal proof of correctness exists in the literature.
In this paper we present a framework for reasoning about method inlining, dynamic class loading, and type soundness. Our example language has both a nonoptimising and an optimising semantics. At the point of dynamically loading a class, the optimising semantics does a whole-program analysis, inlines calls in the newly loaded code, and patches the invalidated inlinings in all previously loaded code. The patching is based on a new construct that models in-place update of machine code---a technique used by some virtual machines to do speculative method inlining. The two semantics are equivalent and both have a type soundness property---proving correctness and type safety of the optimisation. Our framework can form the basis of virtual machines that represent optimised code in a typed low-level language.
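The invalidation-by-patching mechanism the paper formalizes can be mimicked in miniature: a call site holds speculatively inlined code while only one implementation is loaded, and loading a second implementation patches the site back to virtual dispatch. Everything below (the class table, method bodies, names) is invented for illustration; real virtual machines patch machine code in place, not a Python variable.

```python
# Hypothetical sketch: speculative inlining invalidated by class
# loading, modeled as patching a call-site slot.

classes = {"Square": lambda s: s * s}    # one loaded implementation

def dispatch(obj_class, arg):
    """General virtual dispatch through the class table."""
    return classes[obj_class](arg)

def inlined(obj_class, arg):
    # Speculatively inlined body of Square's method: valid only while
    # Square is the unique implementation, so obj_class is ignored.
    return arg * arg

call_site = inlined                      # the patchable "code slot"

def load_class(name, method):
    """Loading a new class invalidates the speculative inlining."""
    global call_site
    classes[name] = method
    call_site = dispatch                 # patch back to virtual dispatch

fast = call_site("Square", 3)            # runs the inlined body
load_class("Circle", lambda r: 3 * r * r)
slow = call_site("Circle", 2)            # patched: full dispatch
```

The paper's contribution is showing that this patch step, done on real machine code mid-execution, can nonetheless be given a type-sound semantics.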