The integration of weak preemption in Scheme



Some languages have been extended in order to support agent mobility (Kali Scheme [CJK95], Termite Scheme [Ger06], Aglets [LOKK97], Jocaml [CF99]), while other languages have been created especially for mobility (Erlang, LSL, Obliq [Car95] [Car94]). Introducing mobility in an existing language can be a hard task because mobility can have an impact on many of the areas we describe here, including fundamental ones such as scheduling or the memory model. Language features matter here: languages with support for continuations can implement strong migration easily, given only the proper communication features. Indeed, if a program is able to copy a continuation from one site to another and then invoke it, this can be seen as a form of strong migration (and for all intents and purposes it is strong migration). This is the approach taken by Termite Scheme, Kali Scheme and others [Sum00]. Stackless Python's tasklets can also be called strong migration agents, in the sense that their continuation can be serialised, and thus transmitted over the network to be resumed remotely.
Mobility languages based on Java usually lack strong migration, but there are various efforts to implement continuations in Java, like Javaflow [Com] or Brakes [COR], or to use a special Java Virtual Machine (JVM) [AAB+05] to implement strong migration for Aglets [CFLQ06].

Memory models

In some models, agents are isolated from one another by having separate address spaces, like processes. This is the case for Erlang, Termite and LSL. Other agents are more similar to threads in that they share the memory address space, such as Aglets, Kali Scheme and Obliq. The difference in memory model leads to differences in communication and synchronisation between agents, which we discuss below.


Scheduling

Just as there are several memory models, there are various types of scheduling for agents. Agents with a separate memory model are usually scheduled asynchronously, like processes. Most shared-memory agents are also scheduled asynchronously, like threads (Aglets for example). In the case of LSL, agents are event-oriented and resemble automata, but it is not clear how these agents are scheduled. Stackless Python's tasklets can be scheduled cooperatively or preemptively, depending on the programmer's choice.


Communication

Ideally, agents should be able to communicate with one another in order to form more complex behaviours than are possible with single agents. This communication usually involves synchronisation between the agents, so we discuss these two aspects together. Mobile agents with a shared memory model usually use the shared memory for both synchronisation and communication. For example, in the case of asynchronous agents such as Obliq and Kali Scheme, locks and condition variables are used for synchronisation between agents, while communication is done through shared memory.
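The lock-and-condition pattern used by such shared-memory agents can be sketched in Python threads (a minimal illustration of the idea, not Obliq's or Kali Scheme's actual API; all names are ours):

```python
import threading

# Shared state: a counter guarded by a lock, with a condition
# variable to signal the waiting agent when the counter changes.
counter = 0
lock = threading.Lock()
changed = threading.Condition(lock)

def producer():
    global counter
    with changed:
        counter += 1
        changed.notify()  # wake the waiting agent

def consumer(results):
    with changed:
        while counter == 0:      # guard against spurious wakeups
            changed.wait()
        results.append(counter)  # communication through shared memory

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
t.join()
print(results)  # [1]
```

Synchronisation goes through the condition variable; the value itself is simply read from the shared variable, which is exactly the division of labour described above.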
Mobile agents without a shared memory model usually resort to channels or mailboxes to communicate values and synchronise their scheduling. Jocaml [CF99], an extension to Objective Caml [Cam] inspired by the Join-Calculus [FG96], uses channels for communication and synchronisation. In Jocaml, a channel is a bi-directional, network-transparent pipe created by agents or processes in order to listen to incoming messages and respond to them. An agent can then locate a given channel in a naming service where channels are registered, and communicate with the agent or process on the other end of the channel. A channel is a many-to-one communication means. Stackless Python also uses synchronous channels for its tasklets' communications, but because there can be several listeners, they are a many-to-many communication means.
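The many-to-many delivery of such channels can be sketched with a shared queue in Python (a buffered stand-in: real Stackless channels are synchronous rendezvous points, and all names here are illustrative):

```python
import queue
import threading

# A single channel shared by several listeners: each message is
# delivered to exactly one of them (many-to-many delivery).
channel = queue.Queue()
received = []
rlock = threading.Lock()

def listener():
    msg = channel.get()  # blocks until a message arrives
    with rlock:
        received.append(msg)

threads = [threading.Thread(target=listener) for _ in range(2)]
for t in threads:
    t.start()
channel.put("ping")
channel.put("pong")
for t in threads:
    t.join()
print(sorted(received))  # ['ping', 'pong']
```

Any number of senders may `put` and any number of listeners may `get` on the same channel object, which is what distinguishes this scheme from Jocaml's many-to-one channels.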
Erlang, Termite Scheme and Aglets use mailboxes to communicate between agents. Arguably, mailboxes are very similar to channels, with a few differences. In Erlang and Termite Scheme for example, each agent is assigned a globally unique Pid (process identifier) which can be used to send it a message. Each message is sent asynchronously and queued in a mailbox until the agent decides to read it. The agent can read its messages at any point in its execution, and in any order, with the possibility of selecting the message it wants to read first. By including Pids in messages, it is possible to obtain synchronous communication between agents.
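The selective-receive behaviour described above can be sketched as follows (an illustrative Python model of the idea, not Erlang's or Termite's actual API):

```python
import collections

# A minimal mailbox with Erlang-style selective receive: messages
# queue up asynchronously, and the agent may pick the first message
# matching a predicate rather than reading in strict arrival order.
class Mailbox:
    def __init__(self):
        self.queue = collections.deque()

    def send(self, msg):
        """Asynchronous send: never blocks, just enqueues."""
        self.queue.append(msg)

    def receive(self, match=lambda m: True):
        """Remove and return the first queued message matching `match`."""
        for i, msg in enumerate(self.queue):
            if match(msg):
                del self.queue[i]
                return msg
        return None  # a real agent would block here until a match arrives

mb = Mailbox()
mb.send(("log", "starting"))
mb.send(("reply", 42))
# Select the reply first, even though it arrived second.
print(mb.receive(lambda m: m[0] == "reply"))  # ('reply', 42)
print(mb.receive())                           # ('log', 'starting')
```

Synchronous request/reply can then be built on top by including the sender's Pid in the message and selectively waiting for the matching reply, as the text notes.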

Data handling

When an agent migrates from one site to another, whether by strong or weak migration, the data used by the agent is usually affected by migration in one way or another. The simplistic approach is to migrate all the data the agent works on along with the agent, so that it can keep working on it. This view poses many problems, and in reality data handling is often much more fine-grained.
Agents without a shared memory model can usually migrate with all their data: since it cannot be used by any other agent, it makes no sense to leave it behind. With shared memory it becomes much harder to decide the best way to keep sharing data. Suppose the same data is shared by two agents on one site, one agent migrates away with a copy of the data, both agents then modify their instance, and the migrated agent later returns to the same site: what sort of synchronisation can we expect between the two instances of data which used to be shared? This question has no perfect universal solution, and there are many ways to handle the problem.
Obliq, for example, states that free variables (those defined outside of the agent) are transformed into transparent network references upon migration. This means that two remote agents would indeed keep accessing a unique value across the network.
It is not always sufficient to know what should happen to variables or references after migration, because the same instance of data can be stored in different variables or references of differing types. In some cases we also have to know what happens to values, as opposed to variables or references.
In λdist [SY97] for example, it is possible to associate very fine-grained migration behaviour with values themselves. In this system, a value can be marked to be copied during migration; to always stay on the same location and be accessed remotely after migration; to move along with the agent and leave a remote reference on the departing site; to become undefined in the agent after migration; to go with the agent and become undefined on the departing site; or to be dynamically rebound after migration. This last example is called ubiquity: the ability for a value (or a reference) to have a similar (or different) value on each site, always dynamically rebound after migration.
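A few of these per-value policies can be sketched as follows (the policy names and the `migrate` helper are ours, purely illustrative; this is not λdist's syntax):

```python
import copy

# Three of the per-value migration behaviours described above:
# copy the value, leave it behind as a remote reference, or make
# it undefined for the migrating agent.
COPY, REMOTE_REF, UNDEFINE = "copy", "remote-ref", "undefine"

def migrate(env, policies):
    """Split an agent's environment into (departing_site, arriving_agent)
    views according to each value's migration policy."""
    stay, go = {}, {}
    for name, value in env.items():
        policy = policies.get(name, COPY)
        if policy == COPY:
            stay[name] = value
            go[name] = copy.deepcopy(value)  # two independent instances
        elif policy == REMOTE_REF:
            stay[name] = value
            go[name] = ("remote", name)      # placeholder network reference
        elif policy == UNDEFINE:
            stay[name] = value               # stays; undefined for the agent

    return stay, go

stay, go = migrate({"x": [1], "log": [2], "tmp": [3]},
                   {"x": COPY, "log": REMOTE_REF, "tmp": UNDEFINE})
print(go)  # {'x': [1], 'log': ('remote', 'log')}
```

Each binding carries its own policy, which is the fine-grained, value-level control that distinguishes λdist from the all-or-nothing approaches above.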
Kali Scheme has yet another view on the subject: it proposes a notion of address spaces, in which objects are allocated, and a special notion of proxies to these objects. A proxy points to a value in a given address space, and the proxy itself can be passed to and held in any other address space without synchronisation. With this type of proxy, it is not possible to assign the value remotely: one has to move to the proxied value's address space to do so. On the other hand, it enables a sort of ubiquity for remote values.



Security

Obviously, mobility poses security questions. Simply allowing any incoming agent to execute arbitrary code can be a security problem. There are various possibilities to address this problem: it is possible to trust some agents depending on their origin (within a trusted network for example), or based on some trusted signature of their code. It is possible to have finer-grained security as well: by limiting certain operations for agents with insufficient privileges, or by limiting their execution time.

Why we would need yet another mobile agent language

Many systems place memory barriers between agents, whether they are located on the same site or on different sites. While this ensures that there can be no asynchronous modification of shared data, notorious for causing problems, we believe the cost of not sharing memory within the same site is too high to justify this solution. We would like our agents to be able to communicate using shared memory on the same site, in a manner free from asynchronous interference.
Those systems which allow shared memory between agents often allow network references, which are just another asynchronous way to share memory across sites. We believe, on the other hand, that even if we solve asynchronous interference locally, we cannot solve it across sites, due to the very asynchronous nature of networks, which can be disconnected. So even though we want shared memory on a single site, we want to prevent asynchronous modifications across sites.
Finally, we believe that agents do not have to be asynchronous on a common site, and that in fact we want deterministic scheduling in which agents have fine-grained control of the scheduling. We are not going to discuss the security aspects of agent systems.
In order to fulfil these goals and requirements, we present ULM: a language designed from the start for mobile agents, with local deterministic scheduling and shared memory, and with remote interference (inherently asynchronous) confined to a few well-known locations. The following chapter describes the language, followed by the description of its implementation.

Table of contents:

1 Introduction 
2 State of the art 
2.1 The various forms of mobility
2.2 Uses of mobility
2.3 Differences between agent systems
2.3.1 Language
2.3.2 Memory models
2.3.3 Scheduling
2.3.4 Communication
2.3.5 Data handling
2.3.6 Security
2.4 Why we would need yet another mobile agent language
3 Language 
3.1 Origins of ULM
3.1.1 Local thread scheduling
3.1.2 Migration
3.1.3 Non-determinism isolation
3.2 Features
3.2.1 Scheme embedding
3.2.2 Threads
3.2.3 Signals
3.2.4 Suspension
3.2.5 Weak Preemption
3.2.6 The integration of weak preemption in Scheme
3.2.7 The interaction between when, watch and finally
3.2.8 Exceptions
3.2.9 Migration
3.2.10 References
3.2.11 Mixins
3.2.12 Modules
3.3 Semantics
3.3.1 Notation
3.3.2 Evaluation
3.3.3 Scheduling
3.4 Implications of Migration
3.4.1 The parting of ways
3.4.2 Things that need more work
4 Implementation: Scheme 
4.1 Two Virtual Machines
4.1.1 Why a virtual machine?
4.1.2 The first virtual machine
4.1.3 Why two (3?) virtual machines?
4.1.4 The Java VM
4.2 Bytecode compilation and interpretation
4.2.1 Some required introduction
4.2.2 Constants
4.2.3 Variable reference
4.2.4 Variable affectation
4.2.5 Conditional
4.2.6 Invocation
4.2.7 Abstraction
4.2.8 let, let* and letrec
4.2.9 Protection
4.2.10 Strong preemption
4.2.11 Miscellaneous bytecodes
4.3 The OULM file format
4.3.1 Overall Structure
4.3.2 Header
4.3.3 Constants
4.3.4 Module Information
4.3.5 Global Variables
4.3.6 Function Descriptors
4.3.7 Attributes
4.3.8 Example
5 Implementation: ULM 
5.1 ULM bytecodes
5.1.1 Thread creation
5.1.2 Agent creation
5.1.3 Suspension
5.1.4 Weak preemption
5.1.5 A word about bytecodes vs. primitives
5.1.6 Signal Creation
5.1.7 Signal Emission
5.1.8 Signal Awaiting
5.1.9 Cooperation
5.1.10 Migration
5.1.11 References
5.2 Migration
5.2.1 Transporting agents
5.2.2 The use of modules for agents
5.2.3 What happens to data
5.2.4 What happens to the bytecode
5.3 Scheduling
5.3.1 The End of Action phase
5.3.2 Wait queues and their recycling
5.3.3 Fast lanes for simple waits
5.3.4 Planning weak preemption
5.3.5 Minimal End Of Instant
6 Native interface(s)
6.1 Syntax
6.2 Bigloo backend modules
6.3 Java backend modules
6.4 Reentry
6.4.1 Unifying protectors and exception handlers
7 Reactive Event Loop 
7.1 Event loops and ULM
7.1.1 Presentation of the event loop
7.1.2 Why we need event loops
7.2 The new signals
7.2.1 The IO signal
7.2.2 The timeout signal
7.2.3 The idle signal
7.3 Example
7.3.1 Event Loop
7.3.2 Reactive Event Loop
7.4 Implementation
7.4.1 Which REL signals are watched?
7.4.2 The IO signals
7.4.3 The timeout signals
7.4.4 The idle signal
7.4.5 Integration with the EOI
7.4.6 Future improvements
7.5 Integrating two event loops
7.5.1 GTK
7.5.2 Swing
7.5.3 J2ME
7.6 Conclusion
8 Examples/Applications 
8.1 Load balancing
8.2 Agents for reconfiguration
8.2.1 The motivations
8.2.2 The departure airport
8.2.3 The Fractal component transporter
9 Directions 
9.1 Debug
9.1.1 Debugging the compiler
9.1.2 Debugging the VM
9.1.3 Debugging ULM programs
9.2 Other enhancements
9.2.1 A global garbage collector
9.2.2 Mixin enhancements
9.2.3 Miscellaneous enhancements
10 Conclusion 
A.1 Introduction
A.2 Lurc features
A.2.1 Different types of threads
A.2.2 Cooperative deterministic scheduling
A.2.3 Signals
A.2.4 Integrated syntax
A.2.5 Control blocks
A.2.6 Event loop integration
A.2.7 Garbage Collector
A.2.8 Modularity
A.3 Implementation
A.3.1 Threads
A.3.2 Scheduling
A.3.3 Syntactic sugar
A.4 Related work
A.4.1 POSIX threads
A.4.2 Event loops
A.4.3 GNU/Pth
A.4.4 Fair Threads
A.4.5 Reactive ML
A.4.6 ULM
A.5 Benchmarks
A.5.1 Pthread and GNU/Pth
A.5.2 LOFT and RML
A.5.3 Cellular Automata
A.6 Future directions
A.6.1 Reactive Event Loop
A.6.2 New semantics for SMP
A.7 Conclusion

