
Sunday, February 15, 2009

PROGRAMMING PARADIGM

A programming paradigm is a fundamental style of computer programming. (Compare with a methodology, which is a style of solving specific software engineering problems.) Paradigms differ in the concepts and abstractions used to represent the elements of a program (such as objects, functions, variables, constraints, etc.) and the steps that compose a computation (assignment, evaluation, continuations, data flows, etc.).
Types of Programming Paradigm
AGENT ORIENTED
In computer science, a software agent is a piece of software that acts for a user or other program in a relationship of agency. Such "action on behalf of" implies the authority to decide which action, if any, is appropriate. The idea is that agents are not strictly invoked for a task, but activate themselves.
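As a rough illustration (a minimal sketch in Python, with hypothetical names, not any real agent framework), the agent below observes its environment and decides for itself whether and how to act:

# A minimal sketch of a self-activating agent (hypothetical names, illustrative only).
import time

class MailAgent:
    """Acts on behalf of a user: decides for itself when and whether to act."""
    def __init__(self, inbox):
        self.inbox = inbox          # shared environment the agent observes

    def perceive(self):
        return [m for m in self.inbox if m.get("unread")]

    def act(self):
        for msg in self.perceive():
            # The agent, not the caller, decides the appropriate action.
            if "urgent" in msg["subject"].lower():
                print("Forwarding urgent mail:", msg["subject"])
            msg["unread"] = False

    def run(self, cycles=3, delay=0.1):
        # Agents run continuously rather than being invoked once per task.
        for _ in range(cycles):
            self.act()
            time.sleep(delay)

inbox = [{"subject": "URGENT: server down", "unread": True},
         {"subject": "newsletter", "unread": True}]
MailAgent(inbox).run()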
COMPONENT-BASED
Component-based software engineering (CBSE) (also known as Component-Based Development (CBD) or Software Componentry) is a branch of the software engineering discipline, with emphasis on decomposition of the engineered systems into functional or logical components with well-defined interfaces used for communication across the components.
Components are considered to be a higher level of abstraction than objects and as such they do not share state and communicate by exchanging messages carrying data.
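The sketch below illustrates that idea in Python (illustrative names only): two components that share no state and communicate solely by exchanging messages over a well-defined channel interface.

# Two components that share no state and communicate only through messages
# (plain dicts) over a queue; names are illustrative, not a real component model.
from queue import Queue

class Producer:
    def __init__(self, outbox: Queue):
        self.outbox = outbox                      # interface: an outgoing message channel

    def emit(self, value):
        self.outbox.put({"type": "reading", "value": value})   # message carries data, not references

class Consumer:
    def __init__(self, inbox: Queue):
        self.inbox = inbox                        # interface: an incoming message channel

    def process(self):
        while not self.inbox.empty():
            msg = self.inbox.get()
            print("received", msg["type"], msg["value"])

channel = Queue()
Producer(channel).emit(42)
Consumer(channel).process()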
CONCATENATIVE
The concatenative or stack-based programming languages are ones in which the concatenation of two pieces of code expresses the composition of the functions they express. These languages use a stack to store the arguments and return values of operations.
The most widespread concatenative language is the page description language PostScript, a limited subset of which is used in PDF. However, PostScript code is usually generated by programs written in other languages. Other well-known concatenative languages include Forth[1][2], and the RPL used on Hewlett-Packard HP-28 and HP-48 scientific calculators.
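The following toy evaluator, sketched in Python for illustration (not a real concatenative language), shows how concatenating two programs composes the functions they denote:

# A toy stack-based (concatenative) evaluator.
# Concatenating two word lists composes the functions they express.
def run(program, stack=None):
    stack = [] if stack is None else stack
    for word in program:
        if word == "dup":
            stack.append(stack[-1])
        elif word == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(word)        # literals are pushed onto the stack
    return stack

square = ["dup", "*"]                 # n -> n*n
increment = [1, "+"]                  # n -> n+1
# Concatenation of the two programs is their composition: (n*n)+1
print(run([3] + square + increment))  # [10]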
CONCURRENT COMPUTING
Concurrent computing is a form of computing in which programs are designed as collections of interacting computational processes that may be executed in parallel.[1] Concurrent programs can be executed sequentially on a single processor by interleaving the execution steps of each computational process, or executed in parallel by assigning each computational process to one of a set of processors that may be in close proximity or distributed across a network. The main challenges in designing concurrent programs are ensuring the correct sequencing of the interactions or communications between different computational processes, and coordinating access to resources that are shared between processes.[1] A number of different methods can be used to implement concurrent programs, such as implementing each computational process as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
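As a minimal sketch of the thread-based approach mentioned above, the Python fragment below runs several computational processes as threads within one operating system process and uses a lock to coordinate access to a shared resource:

# Concurrency as threads within a single process, with a lock coordinating
# access to shared state (a sketch, not a complete treatment).
import threading

counter = 0
lock = threading.Lock()

def worker(times):
    global counter
    for _ in range(times):
        with lock:            # coordinate access to the shared resource
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # sequencing: wait for every computational process to finish
print(counter)                # 40000 with the lock; may be less without it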
DECLARATIVE
In computer science, declarative programming is a programming paradigm that expresses the logic of a computation without describing its control flow. It attempts to minimize or eliminate side effects by describing what the program should accomplish, rather than describing how to go about accomplishing it. This is in contrast to imperative programming, which requires a detailed description of the algorithm to be run.
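A small Python contrast may make the distinction concrete: the declarative version states what is wanted, while the imperative version spells out how to compute it.

# Illustrative contrast between declarative and imperative style.
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Declarative: "the squares of the even numbers, sorted"
declarative = sorted(x * x for x in data if x % 2 == 0)

# Imperative: explicit control flow and mutable state
imperative = []
for x in data:
    if x % 2 == 0:
        imperative.append(x * x)
imperative.sort()

print(declarative == imperative)  # True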
EVENT-DRIVEN
In computer programming, event-driven programming or event-based programming is a programming paradigm in which the flow of the program is determined by events — i.e., sensor outputs or user actions (mouse clicks, key presses) or messages from other programs or threads.
Event-driven programming can also be defined as an application architecture technique in which the application has a main loop which is clearly divided into two sections: the first is event selection (or event detection), and the second is event handling. In embedded systems the same may be achieved using interrupts instead of a constantly running main loop; in that case the former portion of the architecture resides completely in hardware.
Event-driven programs can be written in any language, although the task is easier in languages that provide high-level abstractions, such as closures. Some integrated development environments provide code generation assistants that automate the most repetitive tasks required for event handling.
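A minimal Python sketch of the two-part structure described above (illustrative names, not any particular framework): an event-selection loop and a table of event handlers.

# Event selection (the main loop) plus event handling (the handler table).
from queue import Queue, Empty

events = Queue()

def on_click(payload):
    print("clicked at", payload)

def on_key(payload):
    print("key pressed:", payload)

handlers = {"click": on_click, "key": on_key}   # event handling section

events.put(("click", (10, 20)))
events.put(("key", "q"))

while True:                                     # event selection section (main loop)
    try:
        kind, payload = events.get_nowait()
    except Empty:
        break
    handlers[kind](payload)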
FEATURE ORIENTED PROGRAMMING
Feature Oriented Programming (FOP) or Feature Oriented Software Development (FOSD) is a general paradigm for program synthesis in software product lines.
FOSD arose out of layer-based designs of network protocols and extensible database systems in the late 1980s [1]. A program was defined as a stack of layers. Each layer added functionality to previously composed layers, and different compositions of layers produced different programs. Not surprisingly, there was a need for a compact language to express such designs. Elementary algebra fit the bill: each layer was a function that added new code to an existing program to produce a new program, and a program's design was modeled by an expression, i.e., a composition of functions (layers). As an example, consider the stacking of layers h, j, and i (where h is on the bottom and i is on the top). The algebraic notations i(j(h)) and i•j•h express this design.
Over time, the idea of layers was generalized to features, where a feature is an increment in program development or functionality. The paradigm for program design and synthesis was recognized to be a generalization of relational query optimization, where query evaluation programs were defined as relational algebra expressions, and query optimization was expression evaluation [2]. A software product line (SPL) is a family of programs where each program is defined by a unique composition of features, and no two programs have the same combination of features. FOSD has since evolved into the study of feature modularity, tools, analyses, and design techniques to support feature-based program synthesis.
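A minimal Python sketch of the layer/feature idea (illustrative names): each feature is a function that adds code to an existing program, and different compositions yield different products.

# Layers/features as functions over a program; compositions are products.
def h():
    return {"ops": ["store"]}                 # base layer

def j(program):
    program["ops"].append("log")              # feature: add logging
    return program

def i(program):
    program["ops"].append("encrypt")          # feature: add encryption
    return program

# i(j(h)) -- equivalently i . j . h -- stacks i on j on the base h
product_a = i(j(h()))
product_b = j(h())                            # a different composition, a different program
print(product_a["ops"])                       # ['store', 'log', 'encrypt']
print(product_b["ops"])                       # ['store', 'log']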

FUNCTION-LEVEL PROGRAMMING
In computer science, function-level programming refers to one of the two contrasting programming paradigms identified by John Backus in his work on programs as mathematical objects, the other being value-level programming.
In his 1977 Turing award lecture, Backus set forth what he considered to be the need to switch to a different philosophy in programming language design:
"Programming languages appear to be in trouble. Each successive language incorporates, with a little cleaning up, all the features of its predecessors plus a few more. [...] Each new language claims new and fashionable features... but the plain fact is that few languages make programming sufficiently cheaper or more reliable to justify the cost of producing and learning to use them."
A function-level program is variable-free, since program variables, which are essential in value-level definitions, are not needed in function-level ones.
In the function-level style of programming, a program is built directly from programs that are given at the outset, by combining them with program-forming operations or functionals. Thus, in contrast with the value-level approach that applies the given programs to values to form a succession of values culminating in the desired result value, the function-level approach applies program-forming operations to the given programs to form a succession of programs culminating in the desired result program.
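The following Python sketch (only an approximation, since Python is not a function-level language) builds a program purely by applying program-forming operations such as compose and a map functional to existing programs, applying the result to data only at the very end:

# Function-level style approximated: programs are built from programs by
# functionals, without mentioning the values they will later be applied to.
from functools import reduce

def compose(*fs):
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

def fmap(f):                       # a functional: lifts f to work on a whole list
    return lambda xs: [f(x) for x in xs]

double = lambda n: 2 * n
increment = lambda n: n + 1

# The program is defined entirely at the level of functions...
program = compose(sum, fmap(compose(increment, double)))

# ...and only applied to data at the very end.
print(program([1, 2, 3]))          # 15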
IMPERATIVE PROGRAMMING
In computer science, imperative programming is a programming paradigm that describes computation in terms of statements that change a program state. In much the same way as the imperative mood in natural languages expresses commands to take action, imperative programs define sequences of commands for the computer to perform.
The term is used in opposition to declarative programming, which expresses what needs to be done, without prescribing how to do it in terms of sequences of actions to be taken. Functional and logical programming are examples of a more declarative approach.
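A minimal illustration in Python: a sequence of statements, each of which changes the program state held in the variables total and i.

# Imperative style: explicit control flow and statements that mutate state.
numbers = [4, 8, 15, 16, 23, 42]
total = 0
i = 0
while i < len(numbers):   # explicit control flow
    total += numbers[i]   # statement that mutates state
    i += 1                # statement that mutates state
print(total)              # 108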
ITERATIVE
Iteration in computing is the repetition of a process within a computer program. It can be used both as a general term, synonymous with repetition, and to describe a specific form of repetition with a mutable state.
When used in the first sense, recursion is an example of iteration, though it is typically expressed in a recursive notation rather than as an explicit loop.
However, when used in the second (more restricted) sense, iteration describes the style of programming used in imperative programming languages. This contrasts with recursion, which has a more declarative approach.
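For contrast, here is the same computation written iteratively (a loop over mutable state) and recursively (self-reference with no mutation), sketched in Python:

# Iterative versus recursive formulations of the same computation.
def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):   # iteration: repeat a step, updating state
        result *= k
    return result

def factorial_recursive(n):
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

print(factorial_iterative(5), factorial_recursive(5))   # 120 120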
METAPROGRAMMING
Metaprogramming is the writing of computer programs that write or manipulate other programs (or themselves) as their data, or that do part of the work at runtime that would otherwise be done at compile time. In many cases, this allows programmers to get more done in the same amount of time as they would take to write all the code manually, or it gives programs greater flexibility to efficiently handle new situations without recompilation.
The language in which the metaprogram is written is called the metalanguage. The language of the programs that are manipulated is called the object language. The ability of a programming language to be its own metalanguage is called reflection or reflexivity.
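A small Python sketch of the idea (illustrative only): a metaprogram that manufactures other functions at run time by treating their source code as data, plus a touch of reflection.

# Metaprogramming: a program that writes and runs other programs as data.
def make_power_function(exponent):
    source = f"def power(x):\n    return x ** {exponent}\n"
    namespace = {}
    exec(source, namespace)        # the metaprogram manufactures a new program
    return namespace["power"]

square = make_power_function(2)
cube = make_power_function(3)
print(square(5), cube(5))          # 25 125

# Reflection: the language serving as its own metalanguage
print(type(square).__name__, square.__name__)   # function power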
MODULAR PROGRAMMING
Modular programming is a software design technique that increases the extent to which software is composed from separate parts, called modules. Conceptually, modules represent a separation of concerns, and improve maintainability by enforcing logical boundaries between components. Modules are typically incorporated into the program through interfaces. A module interface expresses the elements that are provided and required by the module. The elements defined in the interface are visible to other modules. The implementation contains the working code that corresponds to the elements declared in the interface.
Languages that formally support the module concept include Ada, D, F, Fortran, Haskell, Pascal (some derivatives), ML, Modula-2, Erlang, Perl, Python and Ruby. The IBM System i (aka AS/400 and iSeries) also uses Modules in RPG, COBOL and CL, when programming in the ILE environment.
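A minimal sketch in Python terms (file names are hypothetical): the interface lists what the module provides, while the implementation detail stays hidden from other modules.

# --- file: temperature.py (the module) ---
__all__ = ["to_fahrenheit"]            # interface: what the module provides

def to_fahrenheit(celsius):            # visible to other modules
    return _scale(celsius) + 32

def _scale(celsius):                   # implementation detail, not part of the interface
    return celsius * 9 / 5

# --- file: client.py (another module) ---
# from temperature import to_fahrenheit
# print(to_fahrenheit(100))            # 212.0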
NONDETERMINISTIC
A nondeterministic programming language is a language which can specify, at certain points in the program (called "choice points"), various alternatives for program flow. Unlike an if-then statement, the method of choice between these alternatives is not directly specified by the programmer; the program must decide at runtime between the alternatives, via some general method applied to all choice points. A programmer specifies a limited number of alternatives, but the program must later choose between them. ("Choose" is, in fact, a typical name for the nondeterministic operator.) A hierarchy of choice points may be formed, with higher-level choices leading to branches that contain lower-level choices within them.
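The Python sketch below only approximates the idea (Python has no built-in choose operator): generators play the role of choice points, and the search backtracks through the alternatives until the constraints are satisfied.

# Nondeterministic choice simulated with backtracking over choice points.
def choose(options):
    for option in options:          # each option is an alternative branch
        yield option

def solve():
    # Find digits x, y with x + y == 10 and x * y == 21 by exploring choice points.
    for x in choose(range(10)):     # higher-level choice point
        for y in choose(range(10)): # lower-level choice point inside the branch
            if x + y == 10 and x * y == 21:
                return x, y         # success cuts off further search
    return None

print(solve())                      # (3, 7)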
PARALLEL COMPUTING
Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors.
Parallel computer programs are more difficult to write than sequential ones,[5] because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks is typically one of the greatest barriers to getting good parallel program performance. The speed-up of a program as a result of parallelization is given by Amdahl's law.
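As a small worked example of the Amdahl's law result mentioned above, the Python fragment below computes the bound on speed-up when a fraction p of a program is parallelised across n processors:

# Amdahl's law: speed-up is limited by the fraction of the program that stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelised, 1024 processors give under 20x speed-up.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))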
PROGRAMMING IN THE LARGE AND PROGRAMMING IN THE SMALL
In software development, programming in the large can involve programming by larger groups of people or by smaller groups over longer time periods. Either of these conditions will result in large, and hence complicated, programs that can be challenging for maintainers to understand.
With programming in the large, coding managers place emphasis on partitioning work into modules with precisely-specified interactions. This requires careful planning and careful documentation.
With programming in the large, program changes can become difficult. If a change operates across module boundaries, the work of many people may need re-doing. Because of this, one goal of programming in the large involves setting up modules that will not need altering in the event of probable changes.
Programming in the large requires abstraction-creating skills. Until a module becomes implemented it remains an abstraction. Taken together, the abstractions should create an architecture unlikely to need change. They should define interactions that have precision and demonstrable correctness.
Programming in the large requires management skills. The process of building abstractions aims not just to describe something that can work but also to direct the efforts of people who will make it work.
The concept was introduced by Frank DeRemer and Hans Kron in their 1976 paper "Programming-in-the-Large Versus Programming-in-the-Small", IEEE Trans. on Soft. Eng. 2(2).
In computer science terms, programming in the large can refer to programming code that represents the high-level state transition logic of a system. This logic encodes information such as when to wait for messages, when to send messages, when to compensate for failed non-ACID transactions, etc. Programming in the small, in contrast, deals with short-lived programmatic behavior, often executed as a single ACID transaction, which allows access to local logic and resources such as files, databases, etc.
RECURSIVE
Recursion, in computer science, is a way of thinking about and solving problems. In fact, recursion is one of the central ideas of computer science. [1] Solving a problem using recursion means the solution depends on solutions to smaller instances of the same problem. [2]
"The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions." [3]
Most high-level computer programming languages support recursion by allowing a function to call itself within the program text. Imperative languages define looping constructs like “while” and “for” loops that are used to perform repetitive actions. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory has proven that these recursion-only languages are mathematically equivalent to imperative languages, meaning they can solve the same kinds of problems even without the typical control structures like “while” and “for”.
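A minimal Python example of repetition expressed through recursion alone, with no “while” or “for” loop:

# The solution is defined in terms of a smaller instance of the same problem.
def total(xs):
    if not xs:                       # base case: the empty list sums to 0
        return 0
    return xs[0] + total(xs[1:])     # recursive case: head plus sum of the rest

print(total([1, 2, 3, 4, 5]))        # 15, with no loop construct in sight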
TREE
Tree programming refers to the use of a programming language to analyze data trees, in a way unique from conventional programming languages. This should not be confused with list-based programming languages like Lisp and Scheme.
VALUE-LEVEL
Value-level programming refers to one of the two contrasting programming paradigms identified by John Backus in his work on Programs as mathematical objects, the other being Function-level programming. Backus originally used the term Object-level programming but that term is now prone to confusion with Object-oriented programming.
Value-level programs are those that describe how to combine various values (i.e., numbers, symbols, strings, etc.) to form other values until the final result values are obtained. New values are constructed from existing ones by the application of various value-to-value functions, such as addition, concatenation, matrix inversion, and so on.
Conventional, von Neumann programs are value-level: expressions on the right side of assignment statements are exclusively concerned with building a value that is then to be stored.
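A small Python illustration of the value-level style, for contrast with the function-level sketch earlier: each expression combines existing values into new ones until the final result value is obtained.

# Value-level style: values are combined into new values, step by step.
prices = [19.99, 4.50, 7.25]
subtotal = sum(prices)                 # value built from values
tax = subtotal * 0.08                  # another value
total = round(subtotal + tax, 2)       # the final result value
print(total)                           # 34.28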


Every effort has been made to present the contents of this blog accurately; nevertheless, the publisher will not be responsible for any errors.