TACL Seminar Spring '05


TACL Seminar

Spring '05 Schedule

Date | Speaker | Title | Links
Feb 9 | Dan Wang | Opportunities and Challenges in End-to-End Verification of Software Systems | Abstract
Feb 10 | Dinghao Wu | Interfacing Compilers, Proof Checkers, and Proofs for Foundational Proof-Carrying Code | Abstract / Paper
Feb 16 | Gang Tan | Structured Verification of Unstructured Machine Code | Abstract
Feb 23 | New Jersey Programming Languages and Systems Seminar (NJPLS)
Mar 2 | Xinming Ou | MulVAL: An Automatic Network Vulnerability Analyzer | Abstract / Paper
Mar 9 | George Reis | SWIFT: Software Implemented Fault Tolerance | Abstract / Paper
Mar 16 | Bolei Guo | Low-Level Pointer Analysis | Abstract / Paper
Mar 23 | Dominic Duggan (Stevens)
Mar 28 | Neal Glew (Intel Research) | Inlining, Dynamic Loading and Type Safety | Abstract
Apr 6 | Neil Vachharajani | Opportunities for Coarse-Grained Parallel Execution | Abstract
Apr 13 | Yitzhak Mandelbaum | A Calculus for Specifying Ad Hoc Data Formats | Abstract
Apr 22 | IBM Research Programming Languages Day
Apr 27 | Rob Simmons | Metatheoretic Approaches to Formalizing High-level Languages | Abstract
May 25 | Aarti Gupta (NEC) | Verifying C Programs by Model Checking | Abstract

Abstracts

Dan Wang

Opportunities and Challenges in End-to-End Verification of Software Systems

I'll start out by taking a brief tour of my past experience on various projects and how those experiences shape my current view of the importance and practicality of end-to-end system verification. My basic thesis is that end-to-end system verification is theoretically/technically possible, but often *economically* impractical. More importantly, the marketplace for verified software systems is likely to expand significantly within this century.

I will also discuss ongoing research in developing simpler and more flexible semantic techniques for the verification of Foundational Proof-Carrying Code systems. In particular, I will discuss how to build proofs of type soundness using higher-order abstract encodings of syntax without the need for meta-logical approaches to soundness.

Dinghao Wu

Interfacing Compilers, Proof Checkers, and Proofs for Foundational Proof-Carrying Code

Proof-Carrying Code (PCC) is a general framework for the mechanical verification of safety properties of machine-language programs. It allows a code producer to provide an executable program to a code consumer, along with a machine-checkable proof of safety such that the code consumer can check the proof before running the program. A weakness of previous PCC systems is that the proof-checking infrastructure is based on some complicated logic that is not necessarily sound.

Foundational Proof-Carrying Code (FPCC) aims to build the safety proof on the simple and trustworthy foundations of mathematical logic. There are three major components in an FPCC system: a compiler, a proof checker, and the safety proof of an input machine-language program. The compiler should produce machine code accompanied by a proof of safety. The proof checker verifies, and sometimes also discovers, the safety proof before the program gets executed. We have built an end-to-end FPCC system. Our prototype system compiles Core ML programs to SPARC code, accompanied by programs in a low-level typed assembly language; these typed assembly programs serve as proof witnesses of the safety of the corresponding SPARC machine code. Our Flit proof checker includes a simple logic programming engine that enables efficient proof checking.

In this talk, I'll present the design of the interfaces between these components and show how to build an end-to-end FPCC system. We have come to two conclusions: (1) a type system (a low-level typed assembly language) should be designed to check machine code; (2) proof checking should be factored into two stages, namely typechecking of the input machine code and verification of the soundness of the type system. Since a type checker can be efficiently interpreted as a logic program, Flit's logic programming engine enables efficient proof checking.
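
The two-stage factoring lends itself to a small illustration. Below is a minimal sketch, in Python rather than Flit's actual logic programming engine, of stage one: syntax-directed typechecking of a toy instruction stream, where each branch plays the role of a Horn-clause typing rule. All instruction names and types here are hypothetical, not those of the real typed assembly language; stage two, proving such a type system sound, would be carried out once in a proof assistant.

 def typecheck(program, env):
     """Check a toy instruction list; env maps registers to types.
     Each branch is the moral equivalent of one Horn-clause rule."""
     for instr in program:
         op, *args = instr
         if op == "mov":                    # mov rd, rs : rd gets rs's type
             rd, rs = args
             env[rd] = env[rs]
         elif op == "addi":                 # addi rd, rs, n : rs must be int
             rd, rs, _n = args
             assert env[rs] == "int", f"{rs} is not int"
             env[rd] = "int"
         elif op == "halt":                 # end of fragment: typechecks
             return env
         else:
             raise TypeError(f"no typing rule for {op}")
     return env

 print(typecheck([("addi", "r2", "r1", 5), ("mov", "r3", "r2"), ("halt",)],
                 {"r1": "int"}))

Because the checker is syntax-directed, running it is essentially executing a logic program, which is why a small logic programming engine suffices for efficient proof checking.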

I'll also briefly present my research on software model checking and program verification.

PhD Thesis

Gang Tan

Structured Verification of Unstructured Machine Code

Safety properties of machine code have attracted growing interest in both academia and industry in recent years. Traditional Hoare logic has been tailored to verify properties of high-level languages with structured control flow, but it is not suitable for verifying properties of machine code, because machine code contains goto statements with unrestricted destinations (i.e., unstructured control flow). A mathematical framework that modularly verifies properties of machine code is necessary for projects such as Foundational Proof-Carrying Code (FPCC), which produces type-safety proofs for machine code.

We present a program logic, L_c, which modularly reasons about machine-code fragments. Unlike previous program logics, the basic reasoning units in L_c are multiple-entry and multiple-exit code fragments. L_c provides composition rules to combine code fragments and to eliminate intermediate entries and exits in the combined fragment. L_c is useful not only for reasoning about properties of machine code with unstructured control flow, but also for deriving rules for common control-flow structures such as while loops and repeat-until loops. We also present a semantics for L_c and prove that L_c is both sound and complete (under reasonable assumptions). As an application to FPCC, L_c has been developed on top of the SPARC machine language: the semantics of L_c is encoded in higher-order logic; its rules are proved as lemmas with machine-checked proofs; typing rules for machine instructions in FPCC are then derived from the rules of L_c. Thus, with high confidence, we can check that the output of our certifying compiler is safe.
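
To give a feel for the style of reasoning, here is a deliberately simplified toy in Python, not L_c's actual rules: a fragment carries a precondition per entry label and a postcondition per exit label, and composition links matching exit/entry labels, checking a (toy) entailment and then eliminating the matched labels as internal.

 def compose(f1, f2):
     """Link f1's exits to f2's entries by label; matched labels vanish."""
     linked = set(f1["exits"]) & set(f2["entries"])
     for lbl in linked:
         post, pre = f1["exits"][lbl], f2["entries"][lbl]
         # toy entailment: the exit must establish everything the entry needs
         assert pre <= post, f"exit {lbl} does not establish {pre - post}"
     return {
         "entries": {**f1["entries"],
                     **{l: p for l, p in f2["entries"].items() if l not in linked}},
         "exits": {**{l: p for l, p in f1["exits"].items() if l not in linked},
                   **f2["exits"]},
     }

 body = {"entries": {"loop": {"i>=0"}}, "exits": {"done": {"i>=0", "i>=n"}}}
 tail = {"entries": {"done": {"i>=n"}}, "exits": {"ret": {"i>=n"}}}
 print(compose(body, tail)["exits"])   # {'ret': {'i>=n'}}

In L_c itself the assertions are formulas of higher-order logic and entailment is genuine logical implication; sets of facts merely stand in for both here.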

Xinming Ou

MulVAL: An Automatic Network Vulnerability Analyzer

To determine the security impact software vulnerabilities have on a particular network, one must consider interactions among multiple components of the operating systems and multiple hosts. For a vulnerability analysis tool to be useful in practice, two problems must be addressed. First, the tool must automatically integrate formal vulnerability specifications, which come from bug-reporting communities, into the reasoning process. Second, the reasoning system should be able to scale to networks with thousands of hosts.

In this talk I will show how we achieved these two goals in MulVAL, an end-to-end framework and reasoning system that conducts multihost, multistage vulnerability analysis on a network. MulVAL adopts Datalog as the modeling language for the elements in the analysis (bug specifications, configuration descriptions, the operating-system permission and privilege model, etc.). We leverage existing vulnerability-database and scanning tools by converting their output into Datalog tuples and feeding them into the MulVAL reasoning engine, which consists of Datalog rules that capture common attack scenarios and security-relevant operating-system semantics. Once all necessary information is collected, the analysis can be performed in seconds for networks with thousands of machines.
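
As a flavor of the approach, here is a minimal, self-contained sketch with hypothetical predicate and host names, not MulVAL's actual rule set: scanner output becomes ground tuples, and a naive bottom-up fixpoint applies one hard-coded attack rule until no new facts appear.

 facts = {
     ("networkService", "webserver", "httpd", "root"),
     ("vulExists", "webserver", "CVE-XXXX-0001", "httpd", "remoteExploit"),
     ("hacl", "internet", "webserver"),        # host access control list entry
 }

 def step(facts):
     """One application of the single toy rule:
        execCode(H, U) :- hacl(internet, H), networkService(H, P, U),
                          vulExists(H, _, P, remoteExploit)."""
     new = set(facts)
     for (_, h, p, u) in {f for f in facts if f[0] == "networkService"}:
         if any(f[0] == "hacl" and f[2] == h for f in facts) and \
            any(f[0] == "vulExists" and f[1] == h and f[3] == p and
                f[4] == "remoteExploit" for f in facts):
             new.add(("execCode", h, u))
     return new

 while True:                     # iterate to fixpoint
     nxt = step(facts)
     if nxt == facts:
         break
     facts = nxt
 print([f for f in facts if f[0] == "execCode"])  # attacker runs code as root

A real Datalog engine generalizes this to arbitrary rules, and its polynomial-time bottom-up evaluation is what lets analyses of thousand-host networks finish in seconds.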

USENIX Security Paper

George Reis

SWIFT: Software Implemented Fault Tolerance

To improve performance and reduce power consumption, processor designers employ advances that shrink feature sizes, lower voltage levels, reduce noise margins, and increase clock rates. These advances, however, also make processors more susceptible to transient faults that can affect program correctness. To mitigate this increasing problem, designers build redundancy into systems to the degree that the soft-error budget will allow.

While reliable systems typically employ hardware techniques to address soft errors, software techniques can provide a lower-cost and more flexible alternative. To make this alternative more attractive, this work presents a new software fault-tolerance technique, called SWIFT, for detecting transient errors. Like other single-threaded software fault-tolerance techniques, SWIFT efficiently manages redundancy by reclaiming unused instruction-level resources present during the execution of most programs. SWIFT, however, eliminates the need to double the memory requirement by acknowledging the use of ECC in caches and memory. SWIFT also provides a higher level of protection with enhanced checking of the program counter (PC) at no performance cost. In addition, this enhanced PC checking makes most of the code inserted to detect faults in prior methods unnecessary, significantly enhancing performance.
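
The core transformation is easy to sketch. The following toy pass over a made-up IR is a hypothetical illustration of the idea, not the actual SWIFT implementation: computation is duplicated into shadow registers, loads feed both copies, and the two copies are compared just before each store, since memory itself is assumed to be ECC-protected.

 def swiftify(code):
     """Duplicate computation into shadow registers (r'), check at stores."""
     out = []
     for instr in code:
         op, *args = instr
         if op == "add":                       # replicate computation
             d, a, b = args
             out.append(("add", d, a, b))
             out.append(("add", d + "'", a + "'", b + "'"))
         elif op == "load":                    # loads feed both copies
             d, addr = args
             out.append(("load", d, addr))
             out.append(("mov", d + "'", d))   # copy rather than re-load
         elif op == "store":                   # check before memory is updated
             src, addr = args
             out.append(("cmp.fault", src, src + "'"))  # fault on mismatch
             out.append(("store", src, addr))
     return out

 for i in swiftify([("load", "r1", "[x]"), ("add", "r2", "r1", "r1"),
                    ("store", "r2", "[y]")]):
     print(i)

Checking only where state escapes to memory (and, in the full technique, at control-flow decisions) is what keeps the overhead of the inserted instructions low.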

CGO '05 Paper

Bolei Guo

Low-Level Pointer Analysis

Pointer analysis is traditionally performed once, early in the compilation process, upon an intermediate representation (IR) with source-code semantics. However, performing pointer analysis only once at this level imposes a phase-ordering constraint, causing alias information to become stale after subsequent code transformations. Moreover, high-level pointer analysis cannot be used at link time or run time, where the source code is unavailable.

This work advocates performing pointer analysis on a low-level intermediate representation. We present the first context-sensitive and partially flow-sensitive points-to analysis designed to operate at the assembly level. As we will demonstrate, low-level pointer analysis can be as accurate as high-level analysis. Additionally, our low-level pointer analysis enables a quantitative comparison between propagating high-level pointer-analysis results through subsequent code transformations and recomputing them at the low level. We show that, for C programs, the former practice is considerably less accurate than the latter.
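
For readers unfamiliar with points-to analysis, the following is a minimal, flow-insensitive Andersen-style sketch over a hypothetical register-level IR; the analysis in the paper is context-sensitive and partially flow-sensitive, so this only illustrates the constraint-solving flavor, not the actual algorithm.

 def points_to(stmts):
     pts = {}                                  # var -> set of abstract objects
     get = lambda v: pts.setdefault(v, set())
     changed = True
     while changed:                            # iterate constraints to fixpoint
         changed = False
         for s in stmts:
             before = {v: set(o) for v, o in pts.items()}
             op, a, b = s
             if op == "alloc":                 # a = &obj_b
                 get(a).add(b)
             elif op == "mov":                 # a = b  : pts(a) >= pts(b)
                 get(a).update(get(b))
             elif op == "load":                # a = *b : pts(a) >= pts(o), o in pts(b)
                 for o in list(get(b)):
                     get(a).update(get(o))
             elif op == "store":               # *a = b : pts(o) >= pts(b), o in pts(a)
                 for o in list(get(a)):
                     get(o).update(get(b))
             changed |= pts != before
     return pts

 print(points_to([("alloc", "p", "objA"), ("mov", "q", "p"),
                  ("store", "q", "p"), ("load", "r", "p")]))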

CGO '05 Paper

Neal Glew

Inlining, Dynamic Loading and Type Safety

Method inlining is an optimisation that can be invalidated by later class loading. A program analysis based on the currently loaded classes might determine that a method call has a unique target, but later class loading could add targets. If a compiler speculatively inlines methods based on current information, then it will have to undo the inlining when later class loading invalidates its assumptions. The problem with undoing an inlining is that the affected code might be executing at the time of the undo, requiring a complicated, on-the-fly update of the program state. Previous work presented techniques for dealing with invalidation, including on-stack replacement, preexistence analysis, extant analysis, and code patching. Until now, it has been an open question whether such operations can be done in a type-safe manner, and no formal proof of correctness exists in the literature.

In this paper we present a framework for reasoning about method inlining, dynamic class loading, and type soundness. Our example language has both a nonoptimising and an optimising semantics. At the point of dynamically loading a class, the optimising semantics does a whole-program analysis, inlines calls in the newly loaded code, and patches the invalidated inlinings in all previously loaded code. The patching is based on a new construct that models in-place update of machine code---a technique used by some virtual machines to do speculative method inlining. The two semantics are equivalent and both have a type soundness property---proving correctness and type safety of the optimisation. Our framework can form the basis of virtual machines that represent optimised code in a typed low-level language.
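
The following toy, with invented names, models the patch-on-load idea rather than the paper's formal semantics: while a method has a unique implementation, call sites bind to it directly; loading a class that overrides the method patches those sites back to dynamic dispatch, standing in for in-place update of machine code.

 classes = {"A": {"parent": None, "m": lambda self: "A.m"}}
 sites = []                                    # compiled call sites for m

 def compile_call():
     impls = [c for c in classes if "m" in classes[c]]
     if len(impls) == 1:                       # unique target: bind directly
         site = {"kind": "direct", "code": classes[impls[0]]["m"]}
     else:                                     # multiple targets: dispatch
         site = {"kind": "virtual", "code": None}
     sites.append(site)
     return site

 def load_class(name, parent, override=None):
     classes[name] = {"parent": parent}
     if override:
         classes[name]["m"] = override
         for s in sites:                       # invalidate: patch old sites
             if s["kind"] == "direct":
                 s["kind"], s["code"] = "virtual", None

 s = compile_call()
 print(s["kind"])                              # direct
 load_class("B", "A", override=lambda self: "B.m")
 print(s["kind"])                              # virtual (patched on load)

The paper's contribution is showing that exactly this kind of patching can be given a typed semantics and proved sound, rather than relying on the informal correctness arguments virtual machines use today.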

Neil Vachharajani

Opportunities for Coarse-Grained Parallel Execution

Advances in semiconductor technology provide an ever increasing number of transistors, but also pose several challenges: wire delay is increasing relative to die size; reliability, power dissipation, temperature, and current swings (dI/dt) are increasingly difficult to regulate; and design complexity is quickly becoming unmanageable. To overcome these challenges while sustaining the increase in processing power to which we have grown accustomed, designers are integrating multiple cores onto a single chip rather than adding complexity to each core. Unfortunately, providing multiple cores does not directly translate into performance, increased reliability, reduced temperature, or more stable current draws since most codes are single-threaded.

While multi-threaded applications do exist, writing single-threaded programs is still the predominant methodology. Unfortunately, research in automatic program parallelization (both static and dynamic techniques) has met with only limited (often domain-specific) success. In this talk we explore the opportunities available to compilers and architectures for program parallelization. Using a methodology similar to the ILP limit studies of the past, we attempt to identify roadblocks to extracting coarse-grained parallelism amenable to execution on chip multiprocessors. This talk will present preliminary results showing "where parallelism comes from" and identifying some key instruction dependences that limit available parallelism.
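
As a sketch of the limit-study methodology, assuming a toy dependence-annotated trace rather than the talk's actual infrastructure: schedule every instruction one cycle after its last dependence, ignore all resource limits, and report instruction count divided by critical-path length as the available parallelism.

 def dataflow_limit(trace):
     """trace: list of (instr_id, [ids of instructions it depends on])."""
     finish = {}
     for i, deps in trace:
         # an instruction completes one cycle after its last dependence
         finish[i] = 1 + max((finish[d] for d in deps), default=0)
     critical_path = max(finish.values())
     return len(trace) / critical_path

 # two independent chains of length 2 -> parallelism 2.0
 print(dataflow_limit([(0, []), (1, [0]), (2, []), (3, [2])]))

Which dependences are counted (true data dependences only, or also memory and control dependences) is exactly the knob such studies turn to locate the roadblocks.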

Yitzhak Mandelbaum

A Calculus for Specifying Ad Hoc Data Formats

An ad hoc data format is any nonstandard data format for which parsing, querying, analysis, or transformation tools are not readily available. Vast amounts of useful data are stored and processed in ad hoc formats. Traditional databases and XML systems provide rich infrastructure for processing well-behaved data, but are of little help when dealing with ad hoc formats. Before anything can be done with an ad hoc data source, someone has to produce a suitable parser for it. Today, people tend to use C or Perl for this task. Unfortunately, writing such parsers is tedious and error-prone, complicated by the lack of documentation, convoluted encodings designed to save space, the need to produce efficient code to cope with the scale of the data, and the need to handle errors robustly to avoid corrupting precious data.

As an alternative, a number of languages have been designed to enable users to describe data declaratively in terms of types and constraints, including PADS, DataScript and PacketTypes. While these languages differ significantly in their application, their essential semantics are quite similar. Unfortunately, no formal specification of the semantics of any of these languages exists. Such semantics would be very useful in understanding the existing implementations, as well as in developing new bindings and alternative implementations.

To address this shortcoming, we have developed a calculus of dependent types for describing the physical layout and semantic properties of a data source. It consists of a small number of orthogonal type constructors that can be composed with one another to describe rich and varied data formats. The calculus captures many of the commonalities of the aforementioned data description languages. We present a semantics for the calculus in which we interpret the types simultaneously as the simple types of the data described and as parsing functions for a data stream. The parsing functions are robust, automatically detecting and recording errors in the data stream without halting parsing. We prove two important correctness properties of the parsers. First, we show they are type-correct, outputting data whose type matches the simple-type interpretation of the specification; second, we show they are "error-correct," reporting exactly the number of physical and semantic errors that occur in the returned data. Finally, we present a case study of using this calculus to describe the PADS language.
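
A hypothetical miniature of the "types as parsers" interpretation, not the calculus itself: each type constructor denotes a parsing function that always returns a value together with an error count, recording bad data rather than halting on it.

 def p_int(s, i):
     """Parse one digit as an int; on failure record an error, don't stop."""
     if i < len(s) and s[i].isdigit():
         return int(s[i]), i + 1, 0            # value, new offset, errors
     return 0, i + 1, 1                        # default value, one error

 def p_pair(p1, p2):
     """Product type: run p1 then p2, summing their error counts."""
     def p(s, i):
         v1, i, e1 = p1(s, i)
         v2, i, e2 = p2(s, i)
         return (v1, v2), i, e1 + e2
     return p

 record = p_pair(p_int, p_int)
 print(record("42", 0))                        # ((4, 2), 2, 0)
 print(record("4x", 0))                        # ((4, 0), 2, 1) -- one error

Error-correctness in this toy means the reported count matches the number of defaulted values in the result; the paper proves the analogous property for the full calculus.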

Rob Simmons

Metatheoretic Approaches to Formalizing High-level Languages

One of the newer developments in programming language verification is the metatheory of the dependently typed logical framework LF, as implemented in Twelf. These methods currently lack the strong trust guarantees that are important for applications such as proof-carrying code, but the tradeoff for trust is a lower learning curve, faster specification, and computer-verified proofs that are compact and readable by most people with an understanding of programming language theory and proof by induction.

This talk will have two parts. The first will be an informal and practical introduction to metatheory in Twelf. The second will be a short demonstration of developing metatheoretic proofs.

Aarti Gupta

Verifying C Programs by Model Checking

Model checking is an automatic verification technique that has been used successfully in practice to verify complex hardware designs and communication protocols. More recently, there has been growing interest in applying model checking techniques to software programs. In particular, ideas related to predicate abstraction, counterexample-guided abstraction refinement, and SAT-based bounded model checking have been shown effective in verifying software such as device drivers and embedded applications. In this talk, I will describe some of these techniques and their application to the verification of C programs. Our approach is based on modeling and abstracting the underlying control-flow graph of a program as a finite-state model, performing suitable static analyses including program slicing and range analysis, and verifying the resulting model using various SAT-based and BDD-based techniques. I will also describe some practical experiences with our prototype software verification platform, F-Soft.
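
To illustrate just the bounded-unrolling idea, here is a hypothetical explicit-state stand-in (where F-Soft would instead build SAT or BDD encodings of the unrolled transition relation): explore up to k steps of a toy model and report a counterexample depth if a bad state is reachable within the bound.

 def bmc(init, trans, bad, k):
     """init: set of states; trans(s): successor states; bad(s): property."""
     frontier, seen = set(init), set(init)
     for depth in range(k + 1):
         err = [s for s in frontier if bad(s)]
         if err:                               # violation within the bound
             return f"counterexample at depth {depth}: {err[0]}"
         frontier = {t for s in frontier for t in trans(s)} - seen
         seen |= frontier
     return f"no violation within bound {k}"

 # toy model: state (pc, x); assert x != 3 when pc reaches 2
 trans = lambda s: {(min(s[0] + 1, 2), s[1] + 1)}
 print(bmc({(0, 0)}, trans, lambda s: s[0] == 2 and s[1] == 3, k=5))

In the SAT-based formulation, the k-step unrolling becomes one propositional formula whose satisfying assignments are exactly the counterexample traces.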

