PLC Session 13

This marks the last session that we had in semester one. This time our friend’s group brought the presentation for us. This session talked about Logic Programming Languages. As usual, this session is divided into several topics.

  • Introduction
  • A Brief Introduction to Predicate Calculus
  • An Overview of Logic Programming
  • The Origins of Prolog
  • The Basic Elements of Prolog
  • Deficiencies of Prolog
  • Applications of Logic Programming

First, we will talk about the Introduction. Programming that uses a form of symbolic logic as a programming language is often called logic programming, and languages based on symbolic logic are called logic programming languages, or declarative languages. The syntax of logic programming languages is remarkably different from that of the imperative and functional languages.

The next topic is A Brief Introduction to Predicate Calculus. A proposition can be thought of as a logical statement that may or may not be true. It consists of objects and the relationships among objects. Formal logic was developed to provide a method for describing propositions, with the goal of allowing those formally stated propositions to be checked for validity. Symbolic logic can be used for the three basic needs of formal logic: to express propositions, to express the relationships between propositions, and to describe how new propositions can be inferred from other propositions that are assumed to be true. The particular form of symbolic logic that is used for logic programming is called first-order predicate calculus.

The simplest propositions, which are called atomic propositions, consist of compound terms. A compound term is one element of a mathematical relation, written in a form that has the appearance of mathematical function notation. A compound term is composed of two parts: a functor, which is the function symbol that names the relation, and an ordered list of parameters, which together represent an element of the relation. A compound term with a single parameter is a 1-tuple; one with two parameters is a 2-tuple, and so forth. For example, we might have the two propositions
man(jake)
like(bob, steak)
which state that {jake} is a 1-tuple in the relation named man, and that {bob, steak} is a 2-tuple in the relation named like. Propositions can be stated in two modes: one in which the proposition is defined to be true, and one in which the truth of the proposition is something that is to be determined. In other words, propositions can be stated to be facts or queries. The example propositions could be either. One problem with predicate calculus as we have described it thus far is that there are too many different ways of stating propositions that have the same meaning; that is, there is a great deal of redundancy.

Next we are going to discuss An Overview of Logic Programming. In this topic we learned that languages used for logic programming are called declarative languages, because programs written in them consist of declarations rather than assignments and control flow statements. These declarations are actually statements, or propositions, in symbolic logic. One of the essential characteristics of logic programming languages is their semantics, which is called declarative semantics. We also learned that declarative semantics is far simpler than the semantics of imperative languages.

Programming in a logic programming language is nonprocedural. An example commonly used to illustrate the difference between procedural and nonprocedural systems is sorting. In a language like Java, sorting is done by explaining in a Java program all of the details of some sorting algorithm to a computer that has a Java compiler. The computer, after translating the Java program into machine code or some interpretive intermediate code, follows the instructions and produces the sorted list. In a nonprocedural language, it is necessary only to describe the characteristics of the sorted list: It is some permutation of the given list such that for each pair of adjacent elements, a given relationship holds between the two elements.
To state this formally, suppose the list to be sorted is in an array named list that has a subscript range 1 . . . n.

Let’s move on to the next topic, The Origins of Prolog. The development of Prolog and other research efforts in logic programming received limited attention outside of Edinburgh and Marseille until the announcement in 1981 that the Japanese government was launching a large research project called the Fifth Generation Computing Systems. One of the primary objectives of the project was to develop intelligent machines, and Prolog was chosen as the basis for this effort. The announcement of FGCS aroused in researchers and the governments of the United States and several European countries a sudden strong interest in artificial intelligence and logic programming.

The next topic is The Basic Elements of Prolog. Variables are not bound to types by declarations. The binding of a value, and thus a type, to a variable is called an instantiation. Instantiation occurs only in the resolution process. A variable that has not been assigned a value is called uninstantiated. The last kind of term is called a structure. Structures represent the atomic propositions of predicate calculus, and their general form is the same: functor(parameter list).

The functor is any atom and is used to identify the structure. The parameter list can be any list of atoms, variables, or other structures. The other basic form of Prolog statement for constructing the database corresponds to a headed Horn clause. This form can be related to a known theorem in mathematics from which a conclusion can be drawn if the set of given conditions is satisfied.

Next we are going to talk about the Deficiencies of Prolog. Prolog, for reasons of efficiency, allows the user to control the ordering of pattern matching during resolution. In a pure logic programming environment, the order of attempted matches that take place during resolution is nondeterministic, and all matches could be attempted concurrently. However, because Prolog always matches in the same order, starting at the beginning of the database and at the left end of a given goal, the user can profoundly affect efficiency by ordering the database statements to optimize a particular application.

The last topic is about Applications of Logic Programming. Relational database management systems (RDBMSs) store data in the form of tables. Queries on such databases are often stated in Structured Query Language (SQL). SQL is nonprocedural in the same sense that logic programming is nonprocedural. The user does not describe how to retrieve the answer; rather, he or she describes only the characteristics of the answer. One of the advantages of using logic programming to implement an RDBMS is that only a single language is required. Another advantage is that deductive capability is built in.

We also learned about expert systems. Expert systems are computer systems designed to emulate human expertise in some particular domain. They consist of a database of facts, an inferencing process, some heuristics about the domain, and some friendly human interface that makes the system appear much like an expert human consultant.
Prolog can and has been used to construct expert systems. It can easily fulfill the basic needs of expert systems, using resolution as the basis for query processing, using its ability to add facts and rules to provide the learning capability, and using its trace facility to inform the user of the “reasoning” behind a given result.


PLC Session 12

This session discussed Functional Programming Languages. As usual, it was brought by our friend’s group and divided into several topics.

  • Introduction
  • Mathematical Functions
  • Fundamentals of Functional Programming Languages
  • The First Functional Programming Language: LISP
  • An Introduction to Scheme
  • Common LISP
  • A Comparison of Functional and Imperative Languages

Without further ado, let’s talk about the first topic, which is the Introduction. The functional programming paradigm, which is based on mathematical functions, is the design basis of the most important non-imperative styles of languages. This style of programming is supported by functional programming languages. One of the fundamental characteristics of programs written in imperative languages is that they have state, which changes throughout the execution process. This state is represented by the program’s variables.

The next topic is about mathematical functions. A mathematical function is a mapping of members of one set, called the domain set, to another set, called the range set. A function definition specifies the domain and range sets, either explicitly or implicitly, along with the mapping. The mapping is described by an expression or, in some cases, by a table. One of the fundamental characteristics of mathematical functions is that the evaluation order of their mapping expressions is controlled by recursion and conditional expressions, rather than by the sequencing and iterative repetition that are common to the imperative programming languages. Another important characteristic of mathematical functions is that because they have no side effects and cannot depend on any external values, they always map a particular element of the domain to the same element of the range. Function definitions are often written as a function name, followed by a list of parameters in parentheses, followed by the mapping expression. For example,

cube(x) ≡ x * x * x,

where x is a real number. In this definition, the domain and range sets are the real numbers. The symbol ≡ is used to mean “is defined as.” The parameter x can represent any member of the domain set, but it is fixed to represent one specific element during evaluation of the function expression. This is one way the parameters of mathematical functions differ from the variables in imperative languages.

Let’s move to the Fundamentals of Functional Programming Languages. In this topic we learned that the objective of the design of a functional programming language is to mimic mathematical functions to the greatest extent possible. This results in an approach to problem solving that is fundamentally different from approaches used with imperative languages. In an imperative language, an expression is evaluated and the result is stored in a memory location, which is represented as a variable in a program. This is the purpose of assignment statements.

Next, there is The First Functional Programming Language: LISP. There were only two categories of data objects in the original LISP: atoms and lists. List elements are pairs, where the first part is the data of the element, which is a pointer to either an atom or a nested list. The second part of a pair can be a pointer to an atom, a pointer to another element, or the empty list. Elements are linked together in lists with the second parts. Atoms and lists are not types in the sense that imperative languages have types. In fact, the original LISP was a typeless language. Atoms are either symbols, in the form of identifiers, or numeric literals. LISP originally used lists as its data structure because they were thought to be an essential part of list processing.

The next topic is an introduction to Scheme. Scheme is characterized by its small size, its exclusive use of static scoping, and its treatment of functions as first-class entities.
As first-class entities, Scheme functions can be the values of expressions, elements of lists, passed as parameters, and returned from functions. A Scheme program is a collection of function definitions. Consequently, knowing how to define these functions is a prerequisite to writing the simplest program. In Scheme, a nameless function actually includes the word LAMBDA, and is called a lambda expression. For example,
(LAMBDA (x) (* x x))
is a nameless function that returns the square of its given numeric parameter. A predicate function is one that returns a Boolean value (some representation of either true or false). Scheme uses three different constructs for control flow: one similar to the selection construct of the imperative languages and two based on the evaluation control used in mathematical functions.

Next we learned about Common LISP. Common LISP (Steele, 1990) was created in an effort to combine the features of several early 1980s dialects of LISP, including Scheme, into a single language. Being something of a union of languages, it is quite large and complex, similar in these regards to C++ and C#. Its basis, however, is the original LISP, so its syntax, primitive functions, and fundamental nature come from that language. In a sense, Scheme and Common LISP are opposites. Scheme is far smaller and semantically simpler, in part because of its exclusive use of static scoping, but also because it was designed to be used for teaching programming, whereas Common LISP was meant to be a commercial language. Common LISP has succeeded in being a widely used language for AI applications, among other areas. Scheme, on the other hand, is more frequently used in college courses on functional programming. It is also more likely to be studied as a functional language because of its relatively small size.

Last, we talked about A Comparison of Functional and Imperative Languages. Functional languages can have a very simple syntactic structure. The list structure of LISP, which is used for both code and data, clearly illustrates this. The syntax of the imperative languages is much more complex. This makes them more difficult to learn and to use. The semantics of functional languages is also simpler than that of the imperative languages.

Execution efficiency is another basis for comparison. When functional programs are interpreted, they are of course much slower than their compiled imperative counterparts. However, there are now compilers for most functional languages, so that execution speed disparities between functional languages and compiled imperative languages are no longer so great. One might be tempted to say that because functional programs are significantly smaller than equivalent imperative programs, they should execute much faster than the imperative programs. Another source of the difference in execution efficiency between functional and imperative programs is the fact that imperative languages were designed to run efficiently on von Neumann architecture computers, while the design of functional languages is based on mathematical functions. This gives the imperative languages a large advantage. Functional languages have a potential advantage in readability. In many imperative programs, the details of dealing with variables obscure the logic of the program.


PLC Session 11

In this session we talked about exception handling. As I said before, our group was initially assigned to present Subprograms, but then we were asked to rearrange our schedule. So, this session was presented by our group. This session is divided into several topics.

  • Introduction to Exception Handling
  • Exception Handling in C++
  • Introduction to Event Handling
  • Event Handling with Java
  • Event Handling in C#

So let’s talk about the topics that we are going to discuss. The first topic is an Introduction to Exception Handling. Most computer hardware systems are capable of detecting certain run-time error conditions, such as floating-point overflow. There is a category of serious errors that are not detectable by hardware but can be detected by code generated by the compiler. For example, array subscript range errors are almost never detected by hardware, but they lead to serious errors that often are not noticed until later in the program execution. Accordingly, we define an exception to be any unusual event, erroneous or not, that is detectable by either hardware or software and that may require special processing. The special processing that may be required when an exception is detected is called exception handling. This processing is done by a code unit or segment called an exception handler. An exception is raised when its associated event occurs.

The next topic is Exception Handling in C++. C++ uses a special construct that is introduced with the reserved word try for this purpose. A try construct includes a compound statement called the try clause and a list of exception handlers. The compound statement defines the scope of the following handlers. The general form of this construct is

try {
//** Code that might raise an exception
}
catch(formal parameter) {
//** A handler body
}
. . .
catch(formal parameter) {
//** A handler body
}

Each catch function is an exception handler. A catch function can have only a single formal parameter, which is similar to a formal parameter in a function definition in C++. After a handler has completed its execution, control flows to the first statement following the try construct. A handler can rethrow an exception, using a throw without an expression, in which case that exception is propagated.

The next topic is an Introduction to Event Handling. Event handling is similar to exception handling. In both cases, the handlers are implicitly called by the occurrence of something, either an exception or an event. While exceptions can be created either explicitly by user code or implicitly by hardware or a software interpreter, events are created by external actions, such as user interactions through a graphical user interface (GUI). An event is a notification that something specific has occurred, such as a mouse click on a graphical button. Strictly speaking, an event is an object that is implicitly created by the run-time system in response to a user action, at least in the context in which event handling is being discussed here. An event handler is a segment of code that is executed in response to the appearance of an event. Event handlers enable a program to be responsive to user actions. A common use of event handlers is to check for simple errors and omissions in the elements of a form, either when they are changed or when the form is submitted to the Web server for processing. Using event handling on the browser to check the validity of form data saves the time of sending that data to the server, where its correctness would otherwise have to be checked by a server-resident program or script before it could be processed.

Let’s go to the next topic: Event Handling with Java.

When a user interacts with a GUI component, for example by clicking a button, the component creates an event object and calls an event handler through an object called an event listener, passing the event object. The event handler provides the associated actions. GUI components are event generators; they generate events. In Java, events are connected to event handlers through event listeners. Event listeners are connected to event generators through event listener registration. Listener registration is done with a method of the class that implements the listener interface, as described later in this section. Only event listeners that are registered for a specific event are notified when that event occurs. The listener method that receives the message implements an event handler. To make the event-handling methods conform to a standard protocol, an interface is used. An interface prescribes standard method protocols but does not provide implementations of those methods.

Event Handling in C# is the last topic. Event handling in C# is similar to that of Java. .NET provides two approaches to creating GUIs in applications, the original Windows Forms and the more recent Windows Presentation Foundation. The latter is the more sophisticated and complex of the two. All C# event handlers have the same protocol: the return type is void and the two parameters are of types object and EventArgs. Neither of the parameters needs to be used for a simple situation. An event handler method can have any name.


PLC Session 10

This tenth session talked about Concurrency. Our friend’s group brought the presentation, which was divided into the topics that can be seen below.

  • Introduction
  • Introduction to Subprogram-Level Concurrency
  • Semaphores
  • Monitors
  • Message Passing

The first topic is the Introduction. In it, our friends introduced us to the idea that concurrency in software execution can occur at four different levels: instruction level (executing two or more machine instructions simultaneously), statement level (executing two or more high-level language statements simultaneously), unit level (executing two or more subprogram units simultaneously), and program level (executing two or more programs simultaneously). Concurrent control mechanisms increase programming flexibility. They were originally invented to be used for particular problems faced in operating systems, but they are required for a variety of other programming applications. The goal of developing concurrent software is to produce scalable and portable concurrent algorithms. A concurrent algorithm is scalable if the speed of its execution increases when more processors are available.

The next topic is an Introduction to Subprogram-Level Concurrency. A task is a unit of a program, similar to a subprogram, that can be in concurrent execution with other units of the same program. Each task in a program can support one thread of control. Tasks are sometimes called processes. In some languages, for example Java and C#, certain methods serve as tasks. Such methods are executed in objects called threads. Three characteristics of tasks distinguish them from subprograms. First, a task may be implicitly started, whereas a subprogram must be explicitly called. Second, when a program unit invokes a task, in some cases it need not wait for the task to complete its execution before continuing its own. Third, when the execution of a task is completed, control may or may not return to the unit that started that execution. Tasks fall into two general categories: heavyweight and lightweight. Simply stated, a heavyweight task executes in its own address space. Lightweight tasks all run in the same address space.
We also learned that tasks can be in several different states:

  • New: A task is in the new state when it has been created but has not yet begun its execution.
  • Ready: A ready task is ready to run but is not currently running. Tasks that are ready to run are stored in a queue that is often called the task ready queue.
  • Running: A running task is one that is currently executing; that is, it has a processor and its code is being executed.
  • Blocked: A task that is blocked has been running, but that execution was interrupted by one of several different events, the most common of which is an input or output operation.
  • Dead: A dead task is no longer active in any sense. A task dies when its execution is completed or it is explicitly killed by the program.

In a concurrent environment and with shared resources, the liveness of a task can cease to exist, meaning that the program cannot continue and thus will never terminate. For example, suppose task A and task B both need the shared resources X and Y to complete their work. Furthermore, suppose that task A gains possession of X and task B gains possession of Y. After some execution, task A needs resource Y to continue, so it requests Y but must wait until B releases it. Likewise, task B requests X but must wait until A releases it. Neither relinquishes the resource it possesses, and as a result, both lose their liveness, guaranteeing that execution of the program will never complete normally. This particular kind of loss of liveness is called deadlock. Deadlock is a serious threat to the reliability of a program, and therefore its avoidance demands serious consideration in both language and program design.

Let’s move on to the next topic, semaphores. A semaphore is a simple mechanism that can be used to provide synchronization of tasks. Although semaphores are an early approach to providing synchronization, they are still used, both in contemporary languages and in library-based concurrency support systems. To provide limited access to a data structure, guards can be placed around the code that accesses the structure. A guard is a linguistic device that allows the guarded code to be executed only when a specified condition is true. So, a guard can be used to allow only one task to access a shared data structure at a time. A semaphore is an implementation of a guard. Specifically, a semaphore is a data structure that consists of an integer and a queue that stores task descriptors. A task descriptor is a data structure that stores all of the relevant information about the execution state of a task.

Now we move to Monitors. The idea is to encapsulate shared data structures with their operations and hide their representations—that is, to make shared data structures abstract data types with some special restrictions. This solution can provide competition synchronization without semaphores by transferring responsibility for synchronization to the run-time system. One of the most important features of monitors is that shared data is resident in the monitor rather than in any of the client units. The implementation of a monitor can be made to guarantee synchronized access by allowing only one access at a time. Although mutually exclusive access to shared data is intrinsic with a monitor, cooperation between processes is still the task of the programmer. Different languages provide different ways of programming cooperation synchronization, all of which are related to semaphores. Monitors are a better way to provide competition synchronization than are semaphores, primarily because of the problems of semaphores. Cooperation synchronization is still a problem with monitors, as will be clear when Ada and Java implementations of monitors are discussed in the following sections. Semaphores and monitors are equally powerful at expressing concurrency control—semaphores can be used to implement monitors and monitors can be used to implement semaphores.

The last topic is about message passing. Message passing can be either synchronous or asynchronous. Here, we describe synchronous message passing. The basic concept of synchronous message passing is that tasks are often busy, and when busy, they cannot be interrupted by other units. A task can be designed so that it can suspend its execution at some point, either because it is idle or because it needs information from another unit before it can continue.


PLC Session 9

This session was brought by our friend’s group and talked about object-oriented programming. This session is divided into several topics.

  • Introduction
  • Object-Oriented Programming
  • Design Issues for Object-Oriented Languages
  • Support for Object-Oriented Programming in C++
  • Implementation of Object-Oriented Constructs

Languages that support object-oriented programming are now firmly entrenched in the mainstream. Some of the newer languages that were designed to support object-oriented programming do not support other programming paradigms but still employ some of the basic imperative structures and have the appearance of the older imperative languages. Among these are Java and C#.

Let’s move on to the next topic, which is Object-Oriented Programming. The concept of object-oriented programming had its roots in SIMULA 67 but was not fully developed until the evolution of Smalltalk resulted in Smalltalk 80. We also learned that a language that is object oriented must provide support for three key language features: abstract data types, inheritance, and dynamic binding of method calls to methods. We will discuss inheritance first and skip over abstract data types, remembering that we have discussed that topic before. Inheritance offers a solution to both the modification problem posed by abstract data type reuse and the program organization problem. The following are the most common differences between a parent class and its subclasses:
1. The parent class can define some of its variables or methods to have private access, which means they will not be visible in the subclass.
2. The subclass can add variables and/or methods to those inherited from the parent class.
3. The subclass can modify the behavior of one or more of its inherited methods. A modified method has the same name, and often the same protocol, as the one of which it is a modification.

If a new class is a subclass of a single parent class, then the derivation process is called single inheritance. If a class has more than one parent class, the process is called multiple inheritance. When a number of classes are related through single inheritance, their relationships to each other can be shown in a derivation tree. The third characteristic (after abstract data types and inheritance) of object-oriented programming languages is a kind of polymorphism provided by the dynamic binding of messages to method definitions. This is sometimes called dynamic dispatch. If a client of A and B has a variable that is a reference to class A’s objects, that reference also could point at class B’s objects, making it a polymorphic reference. One purpose of dynamic binding is to allow software systems to be more easily extended during both development and maintenance. There are some design issues with object-oriented programming, as shown below:

  • The Exclusivity of Objects
  • Question regarding subclass and subtype
  • Single and Multiple Inheritance
  • Allocation and Deallocation of Objects
  • Dynamic and Static Binding
  • Nested Classes
  • Initialization of Objects

The next topic is about Support for Object-Oriented Programming in C++. C++ was the first widely used object-oriented programming language, and is still among the most popular. So, naturally, it is the one with which other languages are often compared. We learned that all C++ objects must be initialized before they are used. Therefore, all C++ classes include at least one constructor method that initializes the data members of the new object. Constructor methods are implicitly called when an object is created. If a class has a parent, the inherited data members must be initialized when the subclass object is created. To do this, the parent constructor is implicitly called. A C++ object can be manipulated through a value variable, rather than a pointer or a reference. (Such an object would be static or stack dynamic.) However, in that case, the object’s type is known and static, so dynamic binding is not needed. Also, C++ does not allow value variables (as opposed to pointers or references) to be polymorphic. When a polymorphic variable is used to call a member function overridden in one of the derived classes, the call must be dynamically bound to the correct member function definition.

The last topic is the Implementation of Object-Oriented Constructs. In C++, classes are defined as extensions of C’s record structures—structs. This similarity suggests a storage structure for the instance variables of class instances—that of a record. This form of this structure is called a class instance record (CIR). The structure of a CIR is static, so it is built at compile time and used as a template for the creation of the data of class instances. Also, methods in a class that are statically bound need not be involved in the CIR for the class. However, methods that will be dynamically bound must have entries in this structure. Such entries could simply have a pointer to the code of the method, which must be set at object creation time. Calls to a method could then be connected to the corresponding code through this pointer in the CIR.


PLC Session 8

In this chapter we will talk about Abstract Data Types. This time it was presented by our friend's group. This session is divided into the topics shown below.

  • The Concept of Abstraction
  • Introduction to Data Abstraction
  • Language Examples
  • Parameterized Abstract Data Types
  • Encapsulation Constructs
  • Naming Encapsulations

The first topic is about The Concept of Abstraction. In this topic we learned that an abstraction is a view or representation of an entity that includes only the most significant attributes. In a general sense, abstraction allows one to collect instances of entities into groups in which their common attributes need not be considered. In the world of programming languages, abstraction is a weapon against the complexity of programming; its purpose is to simplify the programming process. It is an effective weapon because it allows programmers to focus on essential attributes while ignoring subordinate attributes.

Let's move on to the next topic, Introduction to Data Abstraction. Syntactically, an abstract data type is an enclosure that includes only the data representation of one specific data type and the subprograms that provide the operations for that type. Through access controls, unnecessary details of the type can be hidden from units outside the enclosure that use the type. Program units that use an abstract data type can declare variables of that type, even though the actual representation is hidden from them. An instance of an abstract data type is called an object. We learned that object-oriented programming is an outgrowth of the use of data abstraction in software development, and data abstraction is one of its fundamental components.

Next, we will discuss the language examples. C++, which was first released in 1985, was created by adding features to C. The first important additions were those to support object-oriented programming. Because one of the primary components of object-oriented programming is abstract data types, C++ obviously is required to support them. While Ada provides an encapsulation that can be used to simulate abstract data types, C++ provides two constructs that are very similar to each other, the class and the struct, which more directly support abstract data types. Because structs are most commonly used when only data is included, we do not discuss them further here.

Next topic is about Parameterized Abstract Data Types. We learned that it is often convenient to be able to parameterize abstract data types. For example, we should be able to design a stack abstract data type that can store elements of any scalar type, rather than being required to write a separate stack abstraction for every different scalar type. Note that this is only an issue for statically typed languages; in a dynamically typed language like Ruby, any stack can implicitly store elements of any type.

Next topic is about Encapsulation Constructs. C does not provide complete support for abstract data types, although both abstract data types and multiple-type encapsulations can be simulated. In C, a collection of related functions and data definitions can be placed in a file, which can be independently compiled. Such a file, which acts as a library, has an implementation of its entities. The interface to such a file, including data, type, and function declarations, is placed in a separate file called a header file. Type representations can be hidden by declaring them in the header file as pointers to struct types.

Moving to the next topic, there is Naming Encapsulations. Naming encapsulations define name scopes that assist in avoiding name conflicts. Each library can create its own naming encapsulation to prevent its names from conflicting with the names defined in other libraries or in client code. Each logical part of a software system can create a naming encapsulation with the same purpose. Naming encapsulations are logical encapsulations, in the sense that they need not be contiguous. Several different collections of code can be placed in the same namespace, even though they are stored in different places.


PLC Session 7

In this seventh session we discussed subprograms. It was actually supposed to be our group's turn to present the material, but because the session fell right before the midterm test, our presentation was postponed until after the exam. Okay, let's talk about subprograms. This session is divided into the topics shown below

  • Introduction
  • Fundamentals of Subprograms
  • Local Referencing Environments
  • Parameter-Passing Methods
  • Parameters That Are Subprograms
  • Calling Subprograms Indirectly
  • Overloaded Subprograms
  • Generic Subprograms
  • User-Defined Overloaded Operators
  • Closures
  • Coroutines

Without further ado I will discuss the first topic, which is the introduction. Two fundamental abstraction facilities can be included in a programming language: process abstraction and data abstraction. In the early history of high-level programming languages, only process abstraction was included. Process abstraction, in the form of subprograms, has been a central concept in all programming languages. In the 1980s, however, many people began to believe that data abstraction was equally important.

Next topic is about Fundamentals of Subprograms. Subprograms have the following characteristics. First, each subprogram has a single entry point. Second, the calling program unit is suspended during the execution of the called subprogram, which implies that there is only one subprogram in execution at any given time. Third, control always returns to the caller when the subprogram execution terminates.

A subprogram definition describes the interface to and the actions of the subprogram abstraction. A subprogram call is the explicit request that a specific subprogram be executed. A subprogram is said to be active if, after having been called, it has begun execution but has not yet completed that execution. A subprogram header, which is the first part of the definition, serves several purposes. First, it specifies that the following syntactic unit is a subprogram definition of some particular kind. Second, if the subprogram is not anonymous, the header provides a name for the subprogram. Third, it may optionally specify a list of parameters. In C, the header of a function named adder might be as shown below:

void adder (parameters)

The reserved word void in this header indicates that the subprogram does not return a value. We also learned that subprograms can have declarations as well as definitions. This form parallels the variable declarations and definitions in C, in which the declarations can be used to provide type information but not to define variables. Subprogram declarations provide the subprogram's protocol but do not include their bodies. Function declarations are common in C and C++ programs, where they are called prototypes. Such declarations are often placed in header files. In most other languages (other than C and C++), subprograms do not need declarations, because there is no requirement that subprograms be defined before they are called.

There are two ways that a nonmethod subprogram can gain access to the data that it is to process: through direct access to nonlocal variables or through parameter passing. Parameter passing is more flexible than direct access to nonlocal variables. In essence, a subprogram with parameter access to the data that it is to process is a parameterized computation. It can perform its computation on whatever data it receives through its parameters. Let's move on to the next topic about Local Referencing Environments.

In this topic we learned that subprograms can define their own variables, thereby defining local referencing environments. Variables that are defined inside subprograms are called local variables, because their scope is usually the body of the subprogram in which they are defined. In most contemporary languages, local variables in a subprogram are by default stack dynamic. In C and C++ functions, locals are stack dynamic unless specifically declared to be static.

The next topic is about Parameter-Passing Methods. Parameter-passing methods are the ways in which parameters are transmitted to and/or from called subprograms. Formal parameters are characterized by one of three distinct semantics models: (1) they can receive data from the corresponding actual parameter; (2) they can transmit data to the actual parameter; or (3) they can do both. These models are called in mode, out mode, and inout mode, respectively. There are several ways to do parameter passing; we will discuss some of them here.

When a parameter is passed by value, the value of the actual parameter is used to initialize the corresponding formal parameter, which then acts as a local variable in the subprogram, thus implementing in-mode semantics. Pass-by-value is normally implemented by copy, because accesses often are more efficient with this approach. It could be implemented by transmitting an access path to the value of the actual parameter in the caller, but that would require that the value be in a write-protected cell. We learned that the advantage of pass-by-value is that for scalars it is fast, in both linkage cost and access time. The main disadvantage of the pass-by-value method, if copies are used, is that additional storage is required for the formal parameter, either in the called subprogram or in some area outside both the caller and the called subprogram.

Pass-by-result is an implementation model for out-mode parameters. When a parameter is passed by result, no value is transmitted to the subprogram. The corresponding formal parameter acts as a local variable, but just before control is transferred back to the caller, its value is transmitted back to the caller's actual parameter, which obviously must be a variable.

Pass-by-reference is an implementation model for inout-mode parameters. Rather than copying data values back and forth, however, as in pass-by-value-result, the pass-by-reference method transmits an access path, usually just an address, to the called subprogram. This provides the access path to the cell storing the actual parameter. Thus, the called subprogram is allowed to access the actual parameter in the calling program unit. In effect, the actual parameter is shared with the called subprogram. There are actually some other ways to pass parameters, but we only cover the three above for the sake of simplicity. Next topic is about Parameters That Are Subprograms.

When a subprogram is passed as a parameter and later called, the question arises of which referencing environment should be used to execute it. There are three choices:

• The environment of the call statement that enacts the passed subprogram (shallow binding)
• The environment of the definition of the passed subprogram (deep binding)
• The environment of the call statement that passed the subprogram as an actual parameter (ad hoc binding)

Move on to the next topic, Calling Subprograms Indirectly. There are situations in which subprograms must be called indirectly. These most often occur when the specific subprogram to be called is not known until run time. The call to the subprogram is made through a pointer or reference to the subprogram, which has been set during execution before the call is made. The two most common applications of indirect subprogram calls are event handling in graphical user interfaces, which are now part of nearly all Web applications as well as many non-Web applications, and callbacks, in which a subprogram is called and instructed to notify the caller when the called subprogram has completed its work.

The next topic is Overloaded Subprograms. An overloaded subprogram is a subprogram that has the same name as another subprogram in the same referencing environment. Every version of an overloaded subprogram must have a unique protocol.

Next topic is about Generic Subprograms. We learned that a polymorphic subprogram takes parameters of different types on different activations. Overloaded subprograms provide a particular kind of polymorphism called ad hoc polymorphism. Overloaded subprograms need not behave similarly.

Moving on, there is User-Defined Overloaded Operators. Operators can be overloaded by the user in Ada, C++, Python, and Ruby. Suppose, for example, that a Python class is developed to support complex numbers and arithmetic operations on them; its + operator could then be overloaded to add two complex objects.

Next topic is about Closures. A closure is a subprogram together with the referencing environment where it was defined. The referencing environment is needed if the subprogram can be called from any arbitrary place in the program.

Last but not least, there are Coroutines. A coroutine is a special kind of subprogram. Rather than the master-slave relationship between a caller and a called subprogram that exists with conventional subprograms, caller and called coroutines are more equitable. In fact, the coroutine control mechanism is often called the symmetric unit control model. Coroutines can have multiple entry points, which are controlled by the coroutines themselves.


PLC Session 6

In this session we talked about Statement-Level Control Structures. As usual it was presented by our friend's group. This session is divided into two topics

  • Selection Statements
  • Iterative Statements

At least two additional linguistic mechanisms are necessary to make the computations in programs flexible and powerful: some means of selecting among alternative control flow paths (of statement execution) and some means of causing the repeated execution of statements or sequences of statements. Statements that provide these kinds of capabilities are called control statements, whereas a control structure is a control statement together with the collection of statements whose execution it controls. Now we move to our main topic, Selection Statements. A selection statement provides the means of choosing between two or more execution paths in a program. Selection statements are divided into two general categories: two-way and n-way, or multiple, selection.

Although the two-way selection statements of contemporary imperative languages are quite similar, there are some variations in their designs. The general form of a two-way selector is as shown below :

if control_expression
then clause
else clause

Control expressions are specified in parentheses if the then reserved word is not used to introduce the then clause. We also learned that in many contemporary languages, the then and else clauses appear as either single statements or compound statements. Python uses indentation to specify compound statements. For example,

if x > y :
    x = y
    print "case 1"

All statements equally indented are included in the compound statement. Notice that rather than then, a colon is used to introduce the then clause in Python. Let's move on to the next topic, Multiple-Selection Statements. The C multiple-selector statement, switch, which is also part of C++, Java, and JavaScript, is a relatively primitive design. Its general form is

switch (expression) {
  case constant_expression_1: statement_1;
  . . .
  case constant_expression_n: statement_n;
  [default: statement_n+1]
}

where the control expression and the constant expressions are of some discrete type. This includes integer types, as well as characters and enumeration types. The selectable statements can be statement sequences, compound statements, or blocks.

To alleviate the poor readability of deeply nested two-way selectors, some languages, such as Perl and Python, have been extended specifically for this use. The extension allows some of the special words to be left out. In particular, else-if sequences are replaced with a single special word, and the closing special word on the nested if is dropped. The nested selector is then called an else-if clause. That's the end of the session, thank you.


PLC Session 5

The fifth session of semester 1 is about Expressions and Assignment Statements, and it was presented by our friend's group. As usual, this session is divided into the topics shown below

  • Introduction
  • Arithmetic Expressions
  • Overloaded Operators
  • Type Conversions
  • Relational and Boolean Expressions
  • Short-Circuit Evaluation
  • Assignment Statements
  • Mixed-Mode Assignment

Without further ado I will sum up what we discussed on that day. The first topic is the introduction, as usual. It explains what expressions are. Expressions are the fundamental means of specifying computations in a programming language. It is crucial for a programmer to understand both the syntax and semantics of expressions of the language being used. In this fifth session, the semantics of expressions are discussed. The essence of the imperative programming languages is the dominant role of assignment statements. The purpose of these statements is to cause the side effect of changing the values of variables, or the state, of the program.

Moving on to the next topic, we discussed Arithmetic Expressions. In programming languages, arithmetic expressions consist of operators, operands, parentheses, and function calls. An operator can be unary, meaning it has a single operand; binary, meaning it has two operands; or ternary, meaning it has three operands.

We also discussed overloaded operators. Arithmetic operators are often used for more than one purpose. For example, + usually is used to specify both integer addition and floating-point addition. Some languages, Java for example, also use it for string catenation. This multiple use of an operator is called operator overloading and is generally thought to be acceptable, as long as neither readability nor reliability suffers. Next topic is about type conversions.

Type conversions are either narrowing or widening. A narrowing conversion converts a value to a type that cannot store even approximations of all of the values of the original type. For example, converting a double to a float in Java is a narrowing conversion, because the range of double is much larger than that of float. A widening conversion converts a value to a type that can include at least approximations of all of the values of the original type. For example, converting an int to a float in Java is a widening conversion. We learned that a number of errors can occur during expression evaluation. If the language requires type checking, either static or dynamic, then operand type errors cannot occur.

Next topic is about Relational and Boolean Expressions. Let's discuss what relational expressions are. A relational operator is an operator that compares the values of its two operands. A relational expression has two operands and one relational operator. The value of a relational expression is Boolean, except when Boolean is not a type included in the language. The relational operators are often overloaded for a variety of types. The syntax of the relational operators for equality and inequality differs among some programming languages. For example, for inequality, the C-based languages use !=, Ada uses /=, Lua uses ~=, Fortran 95+ uses .NE. or /=, and ML and F# use <>. JavaScript and PHP have two additional relational operators, === and !==. These are similar to their relatives, == and !=, but prevent their operands from being coerced.

Next we talked about Boolean Expressions. Boolean expressions consist of Boolean variables, Boolean constants, relational expressions, and Boolean operators. The operators usually include those for the AND, OR, and NOT operations, and sometimes for exclusive OR and equivalence. Another topic is about Short-Circuit Evaluation.
A short-circuit evaluation of an expression is one in which the result is determined without evaluating all of the operands and/or operators. For example, the value of the arithmetic expression

(13 * a) * (b / 13 - 1)

is independent of the value of (b / 13 - 1) if a is 0, because 0 * x = 0 for any x. So, when a is 0, there is no need to evaluate (b / 13 - 1) or perform the second multiplication.

Moving on, the next topic is Assignment Statements. An assignment statement provides the mechanism by which the user can dynamically change the bindings of values to variables. ALGOL 60 pioneered the use of := as the assignment operator, which avoids the confusion of assignment with equality; Ada also uses this assignment operator. The design choices of how assignments are used in a language have varied widely. We also learned that a compound assignment operator is a shorthand method of specifying a commonly needed form of assignment. Compound assignment operators were introduced by ALGOL 68. An example of a compound assignment is shown below:

sum += value;

is equivalent to

sum = sum + value;

The last topic is about Mixed-Mode Assignment. One of the design decisions concerning arithmetic expressions is whether an operator can have operands of different types. Expressions that contain operands of different types are called mixed-mode expressions. All in all, expressions consist of constants, variables, parentheses, function calls, and operators, while assignment statements include target variables, assignment operators, and expressions.


PLC Session 4

In this fourth session we talked about data types. As usual, two groups gave the presentation to the whole class. In this session we broke the material down into a bunch of topics

  • Introduction
  • Primitive Data Types
  • Character String Types
  • User-Defined Ordinal Types
  • Array Types
  • Associative Arrays
  • Record Types
  • Tuple Types
  • List Types
  • Union Types
  • Pointer and Reference Types
  • Type Checking
  • Strong Typing

Our friend's group opened the session with the introduction. They introduced us to what a data type is. A data type defines a collection of data values and a set of predefined operations on those values. Computer programs produce results by manipulating data. A descriptor is the collection of the attributes of a variable.

The next topic is about Primitive Data Types. Data types that are not defined in terms of other types are called primitive data types. Nearly all programming languages provide a set of primitive data types. The primitive data types consist of numeric types, Boolean types, and character types, where the numeric types consist of integer, floating-point, complex, and decimal.

Next topic is about Character String Types. A character string type is one in which the values consist of sequences of characters. Character string constants are used to label output, and the input and output of all kinds of data are often done in terms of strings. Strings have their own operations; the most common string operations are assignment, catenation, substring reference, comparison, and pattern matching. All in all, string types have an important impact on the writability of a language. Next topic is about User-Defined Ordinal Types.

An ordinal type is one in which the range of possible values can be easily associated with the set of positive integers. In Java, for example, the primitive ordinal types are integer, char, and boolean. There are two user-defined ordinal types that have been supported by programming languages: enumeration and subrange. An enumeration type is one in which all of the possible values, which are named constants, are provided, or enumerated, in the definition. Enumeration types provide a way of defining and grouping collections of named constants, which are called enumeration constants. The definition of a typical enumeration type is shown in the following C# example:

enum days {Mon, Tue, Wed, Thu, Fri, Sat, Sun};

A subrange type, in contrast, is a contiguous subsequence of an ordinal type. For example, 12..14 is a subrange of the integer type. Subrange types were introduced by Pascal and are included in Ada. Enumeration types are usually implemented as integers.

Next topic is about Array Types. An array is a homogeneous aggregate of data elements in which an individual element is identified by its position in the aggregate, relative to the first element. The individual data elements of an array are of the same type; for example, if an array is an integer array, the whole array contains elements of integer type. This is called a homogeneous array, whereas a heterogeneous array is one in which an element can be of any type, as in Python. The most common array operations are assignment, catenation, comparison for equality and inequality, and slices. The C-based languages do not provide any array operations, except through the methods of Java, C++, and C#. Perl supports array assignments but does not support comparisons.

The next topic is about associative arrays. An associative array is an unordered collection of data elements that are indexed by an equal number of values called keys. In the case of non-associative arrays, the indices never need to be stored (because of their regularity). In an associative array, however, the user-defined keys must be stored in the structure. So each element of an associative array is in fact a pair of entities, a key and a value.

Moving on to another topic, we discussed Record Types. A record is an aggregate of data elements in which the individual elements are identified by names and accessed through offsets from the beginning of the structure. The fields of records are stored in adjacent memory locations. Next topic is about Tuple Types: a tuple is a data type that is similar to a record, except that the elements are not named.

Another topic is about List Types. Lists in Scheme and Common Lisp are delimited by parentheses, and the elements are not separated by any punctuation. For example, (A B C D) is a list. Nested lists have the same form, so we could have (A (B C) D); in this list, (B C) is a list nested inside the outer list.
There are also Union Types. A union is a type whose variables may store values of different types at different times during program execution. C and C++ provide union constructs in which there is no language support for type checking. Unions are implemented by simply using the same address for every possible variant, and sufficient storage for the largest variant is allocated. The tag of a discriminated union is stored with the variant in a recordlike structure. At compile time, the complete description of each variant must be stored. This can be done by associating a case table with the tag entry in the descriptor. The case table has an entry for each variant, which points to a descriptor for that particular variant.

Not to forget, there are Pointer and Reference Types. A pointer type is one in which the variables have a range of values that consists of memory addresses and a special value, nil. The value nil is not a valid address and is used to indicate that a pointer cannot currently be used to reference a memory cell. Pointers are designed for two distinct kinds of uses. First, pointers provide some of the power of indirect addressing, which is frequently used in assembly language programming. Second, pointers provide a way to manage dynamic storage. A pointer can be used to access a location in an area where storage is dynamically allocated, called a heap; variables that are dynamically allocated from the heap are called heap-dynamic variables. Languages that provide a pointer type usually include two fundamental pointer operations: assignment and dereferencing. The first operation sets a pointer variable's value to some useful address. Dereferencing, the second fundamental pointer operation, takes a reference through one level of indirection.

Pointers come with some problems. One of them is the dangling pointer, or dangling reference, which is a pointer that contains the address of a heap-dynamic variable that has been deallocated. Dangling pointers are dangerous for several reasons. First, the location being pointed to may have been reallocated to some new heap-dynamic variable. If the new variable is not the same type as the old one, type checks of uses of the dangling pointer are invalid. Even if the new dynamic variable is the same type, its new value will have no relationship to the old pointer's dereferenced value.

There is also the reference type in this topic. A reference type variable is similar to a pointer, with one important and fundamental difference: a pointer refers to an address in memory, while a reference refers to an object or a value in memory.

Moving on to the next topic, there is type checking. Type checking is the activity of ensuring that the operands of an operator are of compatible types. A compatible type is one that either is legal for the operator or is allowed under language rules to be implicitly converted by compiler-generated code (or the interpreter) to a legal type. This automatic conversion is called a coercion. For example, if an int variable and a float variable are added in Java, the value of the int variable is coerced to float and a floating-point add is done. A type error is the application of an operator to an operand of an inappropriate type. If all bindings of variables to types are static in a language, then type checking can nearly always be done statically. Dynamic type binding requires type checking at run time, which is called dynamic type checking.

Last but not least, there is Strong Typing. A programming language is strongly typed if type errors are always detected. This requires that the types of all operands can be determined, either at compile time or at run time. C and C++ are not strongly typed languages because both include union types, which are not type checked.
