Slides and video recordings from the speakers' presentations are available via our programme page.
"Abstraction and Reference" for Digital Vellum
Preservation of digital information is a challenge. Complex digital objects are created using a wide range of applications. These run on various operating systems and rely on a variety of infrastructure elements for their operation. To correctly render these complex objects for hundreds to thousands of years into the future, one may need to preserve the source code, executable code, operating systems and even simulators of various hardware components. This paper is a speculative exploration of ideas to accomplish the objective, using simple concepts that may be sufficient to guide a robust and long-lived digital library design.
About the Speaker. Vinton G. Cerf is president of the Association for Computing Machinery, vice president and Chief Internet Evangelist at Google, and a member of the National Science Board. Widely known as one of the "Fathers of the Internet," Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. He has served in executive positions at MCI, the Corporation for National Research Initiatives, and the Defense Advanced Research Projects Agency. He holds a Bachelor of Science in Mathematics from Stanford University and a Master of Science and Ph.D. in Computer Science from the University of California, Los Angeles.
When Less is More: Designing Predictable and Robust Systems
Striving for a minimalist design helps to ensure predictability and robustness. A minimalist design eliminates the unnecessary and keeps only the necessary elements, thereby helping the designer to predict and reason about behavior, in particular behavior under changing conditions. Here, predictability refers to guaranteed performance characteristics such as throughput and latency, and robustness refers to predictable behavior across a wide range of operating parameters. Drawing on my recent projects, I will illustrate how the minimalist design principle has guided my work on network switches and network processors.
About the Speaker. Hans Eberle is a Senior Consulting Hardware Engineer at Oracle Labs, currently working on application offloading onto network interfaces. Prior to joining Sun/Oracle, he was an Assistant Professor at ETH Zurich and a Principal Engineer at the Systems Research Center of the Digital Equipment Corporation in Palo Alto. At Sun/Oracle, he has worked on on-chip interconnects, I/O and network switches and interfaces, wireless system monitoring, and cryptographic hardware accelerators. He holds a Dr. sc. techn. degree in Computer Science and a Dipl. Ing. degree in Electrical Engineering from ETH Zurich, as well as an MBA in Sustainable Management from the Presidio School of Management in San Francisco.
University of California, Irvine, USA
The Multicompiler: Software Defenses Using Compiler Techniques
We have been investigating compiler-generated software diversity as a defense mechanism against cyber attacks. This approach is in many ways similar to biodiversity in nature.
Imagine an “App Store” containing a diversification engine (a “multicompiler”) that automatically generates a unique version of every program for every user. All the different versions of the same program behave in exactly the same way from the perspective of the end-user, but they implement their functionality in subtly different ways. As a result, any specific attack will succeed only on a small fraction of targets and a large number of different attack vectors would be needed to take over a significant percentage of them.
Because an attacker has no way of knowing a priori which specific attack will succeed on which specific target, this method also significantly increases the cost of attacks directed at specific targets.
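The idea of deriving a distinct but behaviorally identical variant per user can be sketched in a few lines. The toy below is my own illustration, not the actual multicompiler: it seeds a random generator from a user identity and inserts semantically neutral padding statements into a "program" represented as source lines. Real diversification operates on compiler intermediate representation (e.g. randomizing code layout or register assignment), not on source text.

```python
import hashlib
import random

def diversify(source_lines, user_id):
    """Toy 'multicompiler': derive a per-user seed and insert
    semantically neutral padding at random positions, so each
    user receives a distinct but equivalent variant."""
    seed = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)          # deterministic per user
    out = []
    for line in source_lines:
        if rng.random() < 0.5:
            out.append("pass  # neutral padding, no effect")
        out.append(line)
    return out

base = ["x = compute()", "y = x + 1", "return y"]
# Each user gets a reproducible variant; stripping the padding
# recovers the original program, i.e. behavior is unchanged.
variant_a = diversify(base, "alice")
variant_b = diversify(base, "bob")
```

Because the seed is derived from the user identity, the same user always receives the same variant (important for crash reporting), while different users likely receive different ones.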
We have built such a multicompiler, which is now available as a prototype. We can diversify large software distributions such as the Firefox and Chromium web browsers, or a complete Linux distribution. I will present some preliminary benchmarks and address practical issues such as reporting errors when every binary is unique, and updating diversified software.
About the Speaker. Prof. Michael Franz is a Professor of Computer Science in UCI's Donald Bren School of Information and Computer Sciences, a Professor of Electrical Engineering and Computer Science (by courtesy) in UCI's Henry Samueli School of Engineering, and the director of UCI’s Secure Systems and Software Laboratory. He received the Dr. sc. techn. and the Dipl. Informatik-Ing. ETH degrees from ETH Zurich.
ETH Zürich, Switzerland
Convergence and divergence in programming language design
The programming language is the programmer’s principal conceptual tool and should be designed like an engineering artifact. This talk will analyze the evolution of ideas in programming language design from the time of Lisp to the time of C#, Python and Haskell, and compare Wirth’s principles with those applied to the design and evolution of Eiffel.
About the Speaker. Bertrand Meyer is professor of software engineering at ETH Zurich and the author of a number of books on software topics.
The University of New South Wales, Australia
Stepwise refinement: from common sense to common practice
Niklaus Wirth's "Program Development by Stepwise Refinement" put into words a principle that could be considered common sense for developing almost anything. Like the very best scientific insights, it subsequently inspired a rich mathematical exploration; in this case, of programs and the structures that refinement generates on them. I will illustrate that mathematical perspective with some comments on refinement's journey through the 40 years since Wirth's paper, ending with some work on computer security that my colleagues and I have carried out only very recently.
But there is another side to this story. There are today many programmers who have never heard of stepwise refinement, invariants, static reasoning, compositionality: those programmers, at least the good ones, are thus instinctive rather than informed. Where did all those concepts go? Why is it not yet common practice in computer science to articulate and then follow them? With that as motivation I will discuss the principles behind an experimental undergraduate course "(In-)formal methods: The lost art" which I have taught for the last three years.
One of those principles is to position the students so that they want to hear what you are about to teach because they have just recently experienced the pain of working without it: first create a hole in their brains; after that, it's easy to fill it. For "informal" refinement, as in Wirth's paper, the result is to make the students themselves ask for the formal techniques from which the informal ones have been abstracted; and then they will direct themselves toward computer-based reasoning tools because they realise they need those tools to help them answer the questions they have already asked. The same approach could apply to teaching programming more generally.
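As a tiny, self-contained illustration of informal stepwise refinement (my own toy example, not drawn from the talk), one can develop a maximum-finding routine by recording each refinement step and its invariant as comments, moving from specification to code:

```python
# Specification: given a non-empty list xs, compute max(xs).
#
# Refinement step 1 (still abstract): scan the list left to right,
#   maintaining the invariant  best == max(xs[:i]).
# Refinement step 2: realise the scan as a loop; each iteration
#   re-establishes the invariant for one more element.

def list_max(xs):
    assert xs, "specification requires a non-empty list"
    best = xs[0]              # invariant holds for xs[:1]
    for x in xs[1:]:          # extend the scanned prefix by one
        if x > best:
            best = x          # re-establish the invariant
    return best               # invariant at the end gives max(xs)
```

The point is not the code itself but the discipline: the final loop is justified line by line against the invariant stated at the more abstract level.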
About the Speaker. Carroll Morgan is a professor at the University of New South Wales, having previously (1982-99) been a member of Oxford's Programming Research Group. He is recognised for his work on probabilistic semantics, security, the Z notation, and the Refinement Calculus, with the latter being the focus of his well-known book, Programming from Specifications. He is an editor of ACM's Transactions on Computational Logic, and of Elsevier's Science of Computer Programming, and is a member of the IFIP Working Groups 2.1 (Algorithmic Languages and Calculi) and 2.3 (Programming Methodology). He holds a PhD from the University of Sydney.
The Trouble with Types
It's hard to find a topic that divides programming language enthusiasts more than the issue of static typing. To about one half of the community, types are a godsend: they provide structure to organize systems, prevent whole classes of runtime errors, make IDEs more helpful, and provide a safety net for refactorings. To the other half, static types are burdensome ceremony or worse: they limit freedom of expression, get in the way of rapid prototyping, make program code bulkier, and making sense of an opaque type error message is often an exercise in frustration.
Personally, I am in the camp of static-type advocates, even though I respect the opinions of their antagonists and sympathize with them. The question is: what needs to happen to make static typing less controversial than it is today? Ideally, a type system should reject as many erroneous programs as possible, yet accept all correct programs one would want to write. It should provide useful documentation about interfaces but get out of the way otherwise. It should be simple to define, with a clear semantics and a guarantee of correctness. It should also be easy to learn and natural to use. The problem is that each of these criteria can be met in isolation, but addressing them jointly requires many difficult tradeoffs.
In the talk I will outline some of the main categories of static type systems, as well as some new developments, and discuss the tradeoffs they make.
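The "useful documentation" criterion can be shown with a small example of my own (not taken from the talk); here Python's optional type annotations, checked by an external tool such as mypy, stand in for a full static type system. The annotation states the interface, and a checker would reject a call like average("abc") before the program runs, while the dynamic language itself would only fail at runtime.

```python
from typing import List

def average(xs: List[float]) -> float:
    # The annotation documents the interface for readers and tools;
    # a static checker flags average("abc") without executing it.
    return sum(xs) / len(xs)

result = average([1.0, 2.0, 3.0])
```

Note the tradeoff the talk alludes to: the annotation adds a little ceremony, but it also serves as machine-checked documentation and a refactoring safety net.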
About the Speaker. Martin Odersky is a professor of computer and communication sciences at EPFL. He leads the Programming Methods Group (LAMP), which specializes in structures and patterns of programs as well as languages to express them. He holds a Dr. sc. techn. degree in Computer Science from ETH Zurich.
Languages are in the Eye of the Beholder
Formal languages, be it for programming or other computing purposes, are often designed with the authoring experience in mind. A common consequence is "write-only languages" – languages that make it easy to express one's intentions, but make it difficult to extract someone else's intentions from an expression. Unfortunately, a large number of formal languages fall into this camp. Perhaps strangely, it is also possible to design read-only languages – languages that are fairly intuitive to read, but so arcane in their syntactic structure that they are difficult to write. There are no universal answers here – both readability and writability are subjective measures, measures in the eye of the beholder. In this talk, we will explore the space of language design with a particular group of beholders in mind: people who are not developers. Within a very specific domain of broad interest, we look at a particularly promising form of "end-user programming".
About the Speaker. Clemens Szyperski is a Principal Development Lead in the Data Platform Division and affiliated with Microsoft Research. From 1999 to 2007, he was also an adjunct professor with Queensland University of Technology, Australia. He has published several books and many papers on component-oriented programming and software architecture. He holds a PhD degree (Dr. sc. techn.) in Computer Science from ETH Zurich, Switzerland and an engineering degree (Dipl. Ing.) in Electrical Engineering/Computer Engineering from RWTH Aachen, Germany.
ETH Zürich, Switzerland
Reviving a computer system of 25 years ago
Between 1987 and 1989 we designed the language and the operating system Oberon. Jürg Gutknecht and I implemented it on the computer Ceres, also an in-house product. The entire software, including the file system, document editor, graphics system, compiler, and mail system, was described, together with its source programs, in the book Project Oberon (Addison-Wesley, 1992).
It was Paul Reed who suggested in 2010 that the book ought to be updated. After all, the processor used had become extinct. I decided to take up the challenge and to design my own processor, subsequently called RISC. I implemented it on a small, low-cost Spartan-3 development board, building an entire replacement for Ceres. This, however, implied the construction of a new compiler and linker, and the rewriting of the corresponding chapters of the book.
All this provided a welcome opportunity to further simplify and refine both language and system. As a consequence, all parts formerly written in obscure and unpublishable assembler code, such as the garbage collector, device drivers, and display pattern generators, are now expressed in Oberon too. The entire hardware is also presented in full detail, expressed in the language Verilog.
Thus we present a full-scale yet compact case study of an entire computing system that is both comprehensive and in daily use.
About the Speaker. Niklaus Wirth was a Professor of Computer Science at ETH Zürich, Switzerland, from 1968 to 1999. His principal areas of contribution were programming languages and methodology, software engineering, and the design of personal workstations. He designed the programming languages Algol W, Pascal, Modula-2, and Oberon, was involved in the methodologies of structured programming and stepwise refinement, and designed and built the workstations Lilith and Ceres. He published several textbooks for courses on programming, algorithms and data structures, and the logical design of digital circuits. He has received various prizes and honorary doctorates, including the Turing Award, the IEEE Computer Pioneer Award, and the Award for Outstanding Contributions to Computer Science Education.