PLC Session 10

This tenth session covered Concurrency. One of our friends' groups delivered the presentation, which was divided into the topics listed below.

  • Introduction
  • Introduction to Subprogram-Level Concurrency
  • Semaphores
  • Monitors
  • Message Passing

The first topic is the Introduction. Our friends introduced us to the idea that concurrency in software execution can occur at four different levels: instruction level (executing two or more machine instructions simultaneously), statement level (executing two or more high-level language statements simultaneously), unit level (executing two or more subprogram units simultaneously), and program level (executing two or more programs simultaneously). Concurrent control mechanisms increase programming flexibility. They were originally invented for particular problems faced in operating systems, but they are required for a variety of other programming applications. The goal of developing concurrent software is to produce scalable and portable concurrent algorithms. A concurrent algorithm is scalable if the speed of its execution increases when more processors are available.

The next topic is the Introduction to Subprogram-Level Concurrency. A task is a unit of a program, similar to a subprogram, that can be in concurrent execution with other units of the same program. Each task in a program can support one thread of control, and tasks are sometimes called processes. In some languages, for example Java and C#, certain methods serve as tasks; such methods are executed in objects called threads. Three characteristics of tasks distinguish them from subprograms. First, a task may be implicitly started, whereas a subprogram must be explicitly called. Second, when a program unit invokes a task, in some cases it need not wait for the task to complete its execution before continuing its own. Third, when the execution of a task is completed, control may or may not return to the unit that started that execution. Tasks fall into two general categories: heavyweight and lightweight. Simply stated, a heavyweight task executes in its own address space, while lightweight tasks all run in the same address space (a small Java sketch of lightweight tasks follows below). We also learned that tasks can be in several different states:

  • New: the task has been created but has not yet begun its execution.
  • Ready: the task is ready to run but is not currently running. Tasks that are ready to run are stored in a queue that is often called the task ready queue.
  • Running: the task is currently executing; that is, it has a processor and its code is being executed.
  • Blocked: the task has been running, but that execution was interrupted by one of several different events, the most common of which is an input or output operation.
  • Dead: the task is no longer active in any sense. A task dies when its execution is completed or it is explicitly killed by the program.
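
To make lightweight tasks concrete, here is a minimal Java sketch (my own illustration, not from the presentation; the class and thread names are made up). It starts two threads that share one address space and run concurrently with the main program; the comments mark where the task states from the list above apply.

    public class TaskDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                String name = Thread.currentThread().getName();
                for (int i = 0; i < 3; i++) {
                    System.out.println(name + " step " + i);
                }
            };

            Thread a = new Thread(work, "task-A"); // state: New
            Thread b = new Thread(work, "task-B"); // state: New
            a.start();                             // New -> Ready, then Running
            b.start();                             // the two tasks now interleave
            a.join();                              // wait until task-A is Dead
            b.join();
            System.out.println("both tasks completed (Dead)");
        }
    }

Because the two threads are lightweight tasks in the same address space, their output lines can interleave in a different order from run to run.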

In a concurrent environment with shared resources, the liveness of a task can cease to exist, meaning that the program cannot continue and thus will never terminate. For example, suppose tasks A and B both need the shared resources X and Y to complete their work, and that task A gains possession of X while task B gains possession of Y. After some execution, task A needs resource Y to continue, so it requests Y but must wait until B releases it. Likewise, task B requests X but must wait until A releases it. Neither relinquishes the resource it possesses, and as a result, both lose their liveness, guaranteeing that execution of the program will never complete normally. This particular kind of loss of liveness is called deadlock. Deadlock is a serious threat to the reliability of a program, and therefore its avoidance demands serious consideration in both language and program design.

Let's move on to the next topic, semaphores. A semaphore is a simple mechanism that can be used to provide synchronization of tasks. Although semaphores are an early approach to providing synchronization, they are still used, both in contemporary languages and in library-based concurrency support systems. To provide limited access to a data structure, guards can be placed around the code that accesses the structure. A guard is a linguistic device that allows the guarded code to be executed only when a specified condition is true, so a guard can be used to allow only one task to access a shared data structure at a time. A semaphore is an implementation of a guard. Specifically, a semaphore is a data structure that consists of an integer and a queue that stores task descriptors; a task descriptor is a data structure that stores all of the relevant information about the execution state of a task. Two short Java sketches follow below: one of the deadlock scenario just described and one of a semaphore used as a guard.
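
First, a sketch of the X/Y deadlock described above (my own illustration; the names DeadlockDemo, x, and y are made up). Each task grabs one lock and then waits forever for the lock the other task holds:

    import java.util.concurrent.locks.ReentrantLock;

    public class DeadlockDemo {
        static final ReentrantLock x = new ReentrantLock(); // resource X
        static final ReentrantLock y = new ReentrantLock(); // resource Y

        public static void main(String[] args) {
            new Thread(() -> acquire(x, y), "task-A").start(); // A takes X, then wants Y
            new Thread(() -> acquire(y, x), "task-B").start(); // B takes Y, then wants X
        }

        static void acquire(ReentrantLock first, ReentrantLock second) {
            first.lock();
            try {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {} // widen the timing window
                second.lock(); // blocks forever once the other task holds it: deadlock
                try {
                    System.out.println(Thread.currentThread().getName() + " got both resources");
                } finally { second.unlock(); }
            } finally { first.unlock(); }
        }
    }

Second, a sketch of a semaphore acting as a guard (again my own illustration, using Java's library class java.util.concurrent.Semaphore, which was not part of the presentation). The semaphore's counter starts at 1, so only one task at a time can enter the guarded section; tasks that fail to acquire it wait in the semaphore's queue:

    import java.util.concurrent.Semaphore;

    public class SemaphoreGuard {
        static final Semaphore access = new Semaphore(1); // counter = 1: a binary semaphore
        static int shared = 0;                            // the guarded data structure

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 10_000; i++) {
                    try {
                        access.acquire();      // wait: decrement the counter or block in the queue
                        try {
                            shared++;          // guarded section: one task at a time
                        } finally {
                            access.release();  // release: increment the counter or wake a waiting task
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            };
            Thread t1 = new Thread(task), t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println(shared); // always 20000 with the guard in place
        }
    }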

Now we move to monitors. The idea is to encapsulate shared data structures with their operations and hide their representations; that is, to make shared data structures abstract data types with some special restrictions. This solution can provide competition synchronization without semaphores by transferring responsibility for synchronization to the run-time system. One of the most important features of monitors is that shared data is resident in the monitor rather than in any of the client units. The implementation of a monitor can be made to guarantee synchronized access by allowing only one access at a time. Although mutually exclusive access to shared data is intrinsic with a monitor, cooperation between processes is still the task of the programmer. Different languages provide different ways of programming cooperation synchronization, all of which are related to semaphores. Monitors are a better way to provide competition synchronization than semaphores are, primarily because of the problems of semaphores. Cooperation synchronization is still a problem with monitors, as becomes clear when the Ada and Java implementations of monitors are examined. Semaphores and monitors are equally powerful at expressing concurrency control: semaphores can be used to implement monitors, and monitors can be used to implement semaphores. A small monitor sketch in Java appears below.

The last topic is message passing. Message passing can be either synchronous or asynchronous; here, we describe synchronous message passing. The basic concept of synchronous message passing is that tasks are often busy, and when busy, they cannot be interrupted by other units. A task can be designed so that it can suspend its execution at some point, either because it is idle or because it needs information from another unit before it can continue. A sketch of a synchronous rendezvous in Java follows the monitor example below.
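
Here is a minimal monitor sketch in Java (my own illustration; the class BufferMonitor and its buffer size are made up). The shared data lives inside the object, synchronized methods provide competition synchronization by admitting one task at a time, and wait/notifyAll are the programmer's tools for cooperation synchronization, as noted above:

    public class BufferMonitor {
        private final int[] buf = new int[4];   // shared data hidden in the monitor
        private int count = 0, in = 0, out = 0;

        public synchronized void deposit(int v) throws InterruptedException {
            while (count == buf.length) wait(); // cooperation: wait while the buffer is full
            buf[in] = v;
            in = (in + 1) % buf.length;
            count++;
            notifyAll();                        // wake tasks waiting to fetch
        }

        public synchronized int fetch() throws InterruptedException {
            while (count == 0) wait();          // cooperation: wait while the buffer is empty
            int v = buf[out];
            out = (out + 1) % buf.length;
            count--;
            notifyAll();                        // wake tasks waiting to deposit
            return v;
        }

        public static void main(String[] args) {
            BufferMonitor m = new BufferMonitor();
            new Thread(() -> {                  // producer task
                try { for (int i = 0; i < 5; i++) m.deposit(i); }
                catch (InterruptedException ignored) {}
            }).start();
            new Thread(() -> {                  // consumer task
                try { for (int i = 0; i < 5; i++) System.out.println(m.fetch()); }
                catch (InterruptedException ignored) {}
            }).start();
        }
    }

And a sketch of synchronous message passing (again my own illustration, not from the presentation; it borrows Java's library class java.util.concurrent.SynchronousQueue rather than the Ada rendezvous the textbook uses). put blocks until another task is ready to take, so sender and receiver meet before either continues:

    import java.util.concurrent.SynchronousQueue;

    public class RendezvousDemo {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<String> channel = new SynchronousQueue<>();

            Thread receiver = new Thread(() -> {
                try {
                    String msg = channel.take(); // blocks until a sender arrives
                    System.out.println("received: " + msg);
                } catch (InterruptedException ignored) {}
            });

            receiver.start();
            channel.put("hello");                // blocks until the receiver takes the message
            receiver.join();
        }
    }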
