The essence of open programming is the ability to import code at runtime, allowing applications to adapt their behaviour and functionality dynamically. However, when code is acquired from untrusted sources, such as a remote Internet domain, security concerns arise. Untrusted code should not be given unrestricted access to local resources. A well-known solution is to run untrusted code in a sandbox, a software environment that dynamically checks all critical operations performed by the code against a configurable security policy. Java is the most popular language employing this approach.
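To make the sandbox idea concrete, the following Standard ML sketch shows one way to interpose on a critical operation; the decision type, the policy representation, and the sandboxedOpenIn wrapper are hypothetical names introduced purely for illustration, not part of any existing library.

    (* Hypothetical sketch: a security policy decides, per resource name,
       whether an operation may proceed. *)
    datatype decision = Allow | Deny
    type policy = string -> decision

    (* Wrap the critical operation TextIO.openIn so that every file
       access is checked against the policy before it is performed. *)
    fun sandboxedOpenIn (policy : policy) (path : string) =
        case policy path of
            Allow => TextIO.openIn path
          | Deny  => raise Fail ("sandbox: access to " ^ path ^ " denied")

    (* Example policy: only files under /tmp may be opened. *)
    val tmpOnly : policy =
        fn path => if String.isPrefix "/tmp/" path then Allow else Deny

Untrusted code that only ever receives sandboxedOpenIn tmpOnly, rather than TextIO.openIn itself, can thus be granted file access without gaining unrestricted access to the local file system.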
In this thesis, we develop a model based on an idealised subset of the ML programming language to describe the sandboxing approach. Having established the underlying principles of a sandbox, we show how to apply them to obtain a working implementation for the Alice ML programming language. Alice ML is a dialect of Standard ML that has been specifically
designed to support type-safe open programming. It provides a generic, strongly typed import/export facility (pickling), which allows processes to exchange, or make persistent, almost arbitrary language-level data structures and code. For security reasons, resources are precluded from pickling; instead, it is left to the target site to supply them explicitly to imported code. A flexible notion of component allows the corresponding abstractions to be programmed conveniently. We show how these particular characteristics of Alice ML can be used to create a simple yet powerful sandbox design with an emphasis on modularity and extensibility.
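As a brief illustration of the pickling mechanism, the following minimal sketch assumes Alice ML's first-class packages (pack/unpack) and the library operations Pickle.save and Pickle.load; the NUM signature and the file name num.alc are made up for the example.

    signature NUM =
    sig
        val double : int -> int
    end

    structure Num =
    struct
        fun double n = n * 2
    end

    (* Export: pickle the module, code included, to a file. Pickling a
       value that captures a resource (a sited value) fails dynamically,
       so resources can never leave the site inside a pickle. *)
    val () = Pickle.save ("num.alc", (pack Num : NUM))

    (* Import, possibly in another process: loading yields a package,
       and unpacking checks the signature dynamically before use. *)
    structure Num' = unpack (Pickle.load "num.alc") : NUM
    val four = Num'.double 2

Because resources never travel inside pickles, the importing site retains full control over which capabilities, if any, it hands to untrusted code.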