The keyword separate specifies that the referenced objects may be handled by a processor different from the current one. A creation instruction on a separate entity such as producer will create an object on another processor; by default the instruction also creates that processor.
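To make this concrete, the following minimal sketch is not taken from the original listing; the class name SETUP and the creation procedure make of PRODUCER are assumed for illustration only:

    class SETUP

    feature

        producer: separate PRODUCER
                -- Producer object, possibly handled by a processor
                -- different from the one handling the current object.

        launch
                -- Create the producer.
            do
                -- A creation instruction on a separate entity creates the object
                -- on another processor and, by default, that processor as well.
                create producer.make
            end

    end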
Both the producer and the consumer access an unbounded buffer through feature calls on the entity buffer. To ensure exclusive access, the consumer must lock the buffer before accessing it; call targets whose lock is guaranteed in this way are called controlled. For instance, in consume, buffer is a formal argument; the consumer therefore has exclusive access to the buffer while executing consume.
Condition synchronization relies on preconditions, introduced by the require keyword, to express wait conditions. Any precondition clause whose target x is a separate entity is treated as a wait condition rather than a correctness condition: execution of the feature is delayed until the condition holds.
For example, the precondition of consume delays the execution until the buffer is not empty. As the buffer is unbounded, the corresponding producer feature does not need a wait condition. The runtime system ensures that the result of a query call on buffer is delivered to the consumer before it continues.
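A minimal sketch of the consumer side follows, assuming that BUFFER offers the queries is_empty and item and the command remove; these feature names are illustrative and not taken from the original listing:

    consume (buffer: separate BUFFER [INTEGER])
            -- Remove and use one element of `buffer'.
            -- As a formal argument, `buffer' is controlled: the consumer holds
            -- exclusive access to it while `consume' executes.
        require
            not_empty: not buffer.is_empty
                -- Precondition on a separate target: treated as a wait
                -- condition, so execution is delayed until the buffer
                -- contains an element.
        local
            consumed: INTEGER
        do
            consumed := buffer.item
            buffer.remove
            io.put_integer (consumed)  -- Use the element (illustrative).
        end

In SCOOP, the wait condition is evaluated once the necessary locks are held, so a successful check cannot be invalidated by another client before the body executes.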
The following description is abstract; actual implementations may differ. Each processor maintains a request queue of requests resulting from feature calls issued by other processors. A non-separate feature call does not go through the request queue: the processor creates a non-separate feature request for itself and processes it immediately on its call stack.
The supplier processes the feature requests in the order in which they were queued. The runtime system includes a scheduler, which serves as an arbiter between processors. When a processor is ready to process a feature request in its request queue, it can only proceed once the request is satisfiable. For this purpose, the processor sends a locking request to the scheduler, which stores the request in a queue and schedules satisfiable requests for application.
Once the scheduler satisfies the request, the processor starts an execution step. Whenever a processor is ready to let go of the obtained locks, i.e., when it no longer needs them, it issues an unlock request to each locked processor. Each locked processor unlocks itself as soon as it has processed all previously queued feature requests.
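The producer side of the example can be sketched as follows, with put assumed to be the buffer's insertion feature; the comments indicate where the mechanisms just described come into play:

    produce (buffer: separate BUFFER [INTEGER]; an_item: INTEGER)
            -- Add `an_item' to `buffer'.
        do
            -- This call only enqueues a feature request in the request queue
            -- of the buffer's processor; the producer does not wait for it.
            buffer.put (an_item)
            -- When `produce' finishes, the producer lets go of the lock: it
            -- issues an unlock request, and the buffer's processor unlocks
            -- itself once it has processed the queued `put' request.
        end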
In the example, the producer issues an unlock request to the buffer after it has issued the feature request for put.
As demonstrated in Section 2, a number of effective approaches to the problem of deterministic replay of multithreaded programs exist. For executions on uniprocessor systems, the approach of Russinovich and Cogswell [13] has been shown to outperform techniques that try to record how threads interact.
They propose to log thread scheduler information and to enforce the same schedule when a run is replayed. This approach also works well in our case. To minimize the overhead of capturing physical processor schedules (the equivalent of physical thread schedules in the case of SCOOP), we adapt the notion of logical thread schedules from [3]; this notion also helps keep the size of the log file small.
This section describes the adaptation. Consider a share market application with investors, markets, issuers, and shares. The markets and the investors are handled by different processors. Listing 1 shows the class for the investors. Each investor has a feature to buy a share. To execute it, the investor must wait for the lock on the market and for the precondition to be satisfied. The following feature initiates a transaction that involves two investors and one market with shares from two issuers:
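Since neither Listing 1 nor the transaction feature is reproduced here, the following sketch only illustrates the shape such a feature could take; all class and feature names (INVESTOR, MARKET, ISSUER, buy) are assumptions, not the original code:

    transaction (first_investor, second_investor: separate INVESTOR;
            market: separate MARKET; issuer_1, issuer_2: separate ISSUER)
            -- Let `first_investor' buy a share of `issuer_1' and
            -- `second_investor' a share of `issuer_2' on `market'.
        do
            -- Both calls are asynchronous: each only enqueues a feature
            -- request, and the order in which the two investors obtain the
            -- lock on the market's processor depends on the schedule.
            first_investor.buy (market, issuer_1)
            second_investor.buy (market, issuer_2)
        end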
Figure 1 depicts a number of possible physical processor schedules for this example. Schedules a and b give rise to the same behavior on the market, whereas schedule c causes the transaction to be reversed: the second investor gets to buy its share before the first one does.
The reason is that the order of updates to local variables does not influence shared objects, whereas the order of critical events does. We regard two physical processor schedules as equivalent if they exhibit the same order of locking requests. A logical processor schedule denotes an equivalence class of physical processor schedules, i.e., the set of all physical processor schedules with the same order of locking requests. A logical processor schedule consists of one interval list per processor. The scheduler uses a global counter with value counter_g to number the approved locking requests.
An interval [l, u] is defined by a lower global counter value l and an upper global counter value u, such that the locking requests with numbers in [l, u] belong to the same processor and no locking request with a number in an adjacent interval belongs to the same processor. Once the recorder is activated, the scheduler executes Algorithm 1. To detect when a new interval should start, the scheduler maintains for each processor a local counter with value counter_l and a local counter base with value base_l.
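As a small worked illustration (the processor names are invented): suppose the scheduler approves six locking requests, numbered 1 to 6 by counter_g, originating from processors p1, p1, p2, p1, p2, p2 in this order. The interval list of p1 is then [1, 2], [4, 4] and that of p2 is [3, 3], [5, 6]. Recording these interval lists per processor suffices to replay the run, since every physical processor schedule with this order of approved locking requests belongs to the same equivalence class.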