"O GOD THEE I PRAY INCREASE MY KNOWLEDGE DAY BY DAY"


For Success

Know more than others, work more than others, but expect less than others.

It's a necessary and sufficient condition:

"If you win, you need not explain... but if you lose, you should not be there to explain!"

11 April 2011

Neural network AI (courtesy google)

What is a Neural Network?
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones. This is true of ANNs as well.
The term neural network was traditionally used to refer to a network or circuit of biological neurons.[1] The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes. Thus the term has two distinct usages:
1. Biological neural networks are made up of real biological neurons that are connected or functionally related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.
2. Artificial neural networks are composed of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex: artificial neural network algorithms attempt to abstract this complexity and focus on what may hypothetically matter most from an information processing point of view. Good performance (e.g. as measured by good predictive ability, low generalization error), or performance mimicking animal or human error patterns, can then be used as one source of evidence towards supporting the hypothesis that the abstraction really captured something important from the point of view of information processing in the brain. Another incentive for these abstractions is to reduce the amount of computation required to simulate artificial neural networks, so as to allow one to experiment with larger networks and train them on larger data sets.
Why use neural networks?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse. This expert can then be used to provide projections given new situations of interest and answer "what if" questions.
Other advantages include:
1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning time.
3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

The brain, neural networks and computers
Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated, as little is known about how the brain actually works.
A subject of current research in theoretical neuroscience is the question surrounding the degree of complexity and the properties that individual neural elements should have to reproduce something resembling animal intelligence.
Historically, computers evolved from the von Neumann architecture, which is based on sequential processing and execution of explicit instructions. On the other hand, the origins of neural networks are based on efforts to model information processing in biological systems, which may rely largely on parallel processing as well as implicit instructions based on recognition of patterns of 'sensory' input from external sources. In other words, at its very heart a neural network is a complex statistical processor (as opposed to being tasked to sequentially process and execute).
Neural coding is concerned with how sensory and other information is represented in the brain by neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among electrical activity of the neurons in the ensemble. It is thought that neurons can encode both digital and analog information.
Human and Artificial Neurones - investigating the similarities
2.1 How the Human Brain Learns
Much is still unknown about how the brain trains itself to process information, so theories abound. In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurones. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.
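To make that last idea concrete, here is a toy Python sketch of "learning as a change in synapse effectiveness". It uses a plain Hebbian update, one classic abstraction of this mechanism; the numbers are invented for illustration, not a claim about real biology.

import numpy as np

rate = 0.1                        # learning rate
w = np.zeros(3)                   # synaptic weights from 3 input neurons
pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
post = 1.0                        # postsynaptic activity
w += rate * pre * post            # "neurons that fire together wire together"
print(w)                          # [0.1 0.  0.1]: the co-active synapses strengthen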


(Figures: components of a neuron; the synapse)


2.2 From Human Neurones to Artificial Neurones
We construct these neural networks by first trying to deduce the essential features of neurones and their interconnections. We then typically program a computer to simulate these features. However, because our knowledge of neurones is incomplete and our computing power is limited, our models are necessarily gross idealisations of real networks of neurones.
(Figure: the neuron model)
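As a rough illustration of this neuron model, the sketch below (with hypothetical inputs and weights) scales each input by a synaptic weight, sums the results, and fires when the net excitation reaches a threshold:

import numpy as np

def neuron(inputs, weights, threshold):
    activation = np.dot(inputs, weights)          # weighted sum of incoming signals
    return 1 if activation >= threshold else 0    # spike, or stay silent

x = np.array([1.0, 0.5, -1.0])      # excitatory and inhibitory inputs
w = np.array([0.8, 0.4, 0.2])       # synaptic effectiveness per input
print(neuron(x, w, threshold=0.5))  # 1: net excitation (0.8) exceeds the threshold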
Neural networks and artificial intelligence
A neural network (NN), in the case of artificial neurons called artificial neural network (ANN) or simulated neural network (SNN), is an interconnected group of natural or artificial neurons that uses a mathematical or computational model for information processing based on a connectionistic approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.
In more practical terms neural networks are non-linear statistical data modeling or decision making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
However, the paradigm of neural networks - i.e., one in which implicit rather than explicit learning is stressed - seems to correspond more closely to some kind of natural intelligence than to the traditional symbol-based Artificial Intelligence, which would stress, instead, rule-based learning.
Applications of natural and of artificial neural networks
The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations and also to use it. Unsupervised neural networks can also be used to learn representations of the input that capture the salient characteristics of the input distribution, e.g., see the Boltzmann machine (1983), and more recently, deep learning algorithms, which can implicitly learn the distribution function of the observed data. Learning in neural networks is particularly useful in applications where the complexity of the data or task makes the design of such functions by hand impractical.
The tasks to which artificial neural networks are applied tend to fall within the following broad categories:
• Function approximation, or regression analysis, including time series prediction and modeling.
• Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
• Data processing, including filtering, clustering, blind signal separation and compression.
Application areas of ANNs include system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition, etc.), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering
Applications
There are abundant materials, tutorials, references and disparate lists of demos on the net. This work attempts to compile a list of applications and demos - those that come with video clips.
The applications featured here are:
• CoEvolution of Neural Networks for Control of Pursuit & Evasion
• Learning the Distribution of Object Trajectories for Event Recognition
• Radiosity for Virtual Reality Systems
• Autonomous Walker & Swimming Eel
• Robocup: Robot World Cup
• Using HMM's for Audio-to-Visual Conversion
• Artificial Life: Galapagos
• Speechreading (Lipreading)
• Detection and Tracking of Moving Targets
• Real-time Target Identification for Security Applications
• Facial Animation
• Behavioral Animation and Evolution of Behavior
• A Three Layer Feedforward Neural Network
• Artificial Life for Graphics, Animation, Multimedia, and Virtual Reality: Siggraph '95 Showcase
• Creatures: The World's Most Advanced Artificial Life!
• Framsticks Artificial Life
Not all types of artificial intelligence are able to learn new ways of performing tasks. Expert systems are a variety of AI that is very different from those that use machine learning techniques. Instead of learning through training or even unsupervised learning, an expert system applies logical arguments to information provided as part of a knowledge base. Examples of expert systems include loan application evaluation systems and technical support systems.
Expert System Definition
An expert system is software that uses a knowledge base of human expertise for problem solving, or to clarify uncertainties where normally one or more human experts would need to be consulted. Expert systems are most common in a specific problem domain, and are a traditional application and/or subfield of artificial intelligence (AI). A wide variety of methods can be used to simulate the performance of the expert; however, common to most or all are: 1) the creation of a knowledge base which uses some knowledge representation structure to capture the knowledge of the Subject Matter Expert (SME); 2) a process of gathering that knowledge from the SME and codifying it according to the structure, which is called knowledge engineering; and 3) once the system is developed, it is placed in the same real world problem solving situation as the human SME, typically as an aid to human workers or as a supplement to some information system. Expert systems may or may not have learning components.
Expert systems are a type of AI that uses shells. Shells are small programs that maintain information for a specific function within a larger program framework. Shells also interact with users to gather data, which is applied to the knowledge base using the logical set of rules set up by the system. By combining input with existing data and the logical rules, the expert system will arrive at a conclusion.
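A minimal sketch of that combine-input-with-rules loop, assuming an invented medical-style knowledge base (the rule conditions, conclusions and symptom names here are illustrative, not from any real shell):

knowledge_base = [
    # (conditions that must all hold, conclusion to report)
    ({"fever", "cough"}, "suspect flu; consult a doctor"),
    ({"fever"}, "monitor temperature"),
]

def consult(findings):
    # apply the first rule whose conditions are all present in the user input
    for conditions, conclusion in knowledge_base:
        if conditions <= findings:
            return conclusion
    return "no conclusion; refer to a human expert"

print(consult({"fever", "cough"}))   # suspect flu; consult a doctor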
Examples of applications
Expert systems are designed to facilitate tasks in the fields of accounting, medicine, process control, financial service, production, human resources, among others. Typically, the problem area is complex enough that a simpler, traditional algorithm cannot provide a proper solution. The foundation of a successful expert system depends on a series of technical procedures and development that may be designed by technicians and related experts. As such, expert systems do not typically provide a definitive answer, but provide probabilistic recommendations.
An example of the application of expert systems in the financial field is expert systems for mortgages. Loan departments are interested in expert systems for mortgages because of the growing cost of labour, which makes the handling and acceptance of relatively small loans less profitable. They also see a possibility for standardised, efficient handling of mortgage loans by applying expert systems, appreciating that for the acceptance of mortgages there are hard and fast rules which do not always exist with other types of loans. Another common application in the financial area for expert systems is in trading recommendations in various marketplaces. These markets involve numerous variables and human emotions which may be impossible to deterministically characterize, thus expert systems based on the rules of thumb from experts and simulation data are used. Expert systems of this type can range from ones providing regional retail recommendations, like Wishabi, to ones used to assist monetary decisions by financial institutions and governments.
Another 1970s and 1980s application of expert systems, which we today would simply call AI, was in computer games. For example, the computer baseball games Earl Weaver Baseball and Tony La Russa Baseball each had highly detailed simulations of the game strategies of those two baseball managers. When a human played the game against the computer, the computer queried the Earl Weaver or Tony La Russa Expert System for a decision on what strategy to follow. Even those choices where some randomness was part of the natural system (such as when to throw a surprise pitch-out to try to trick a runner trying to steal a base) were decided based on probabilities supplied by Weaver or La Russa. Today we would simply say that "the game's AI provided the opposing manager's strategy."

Financial Expert Systems: Loans
In the finance realm, these systems are used to evaluate criteria based on pre-set logical guidelines and information. When creating the expert system, the programmer must rely on true experts to provide the rules for evaluation. For example, a consumer loan application made by an unemployed person with a poor credit score and no income would quickly be rejected. This allows banks and other lenders to quickly pre-screen credit offers without devoting man-hours to the evaluation process.
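The rejection rule described above could be sketched like this (the credit-score threshold and the fallback action are illustrative assumptions, not real lending criteria):

def prescreen_loan(employed, credit_score, income):
    # an unemployed applicant with a poor score and no income is rejected outright
    if not employed and credit_score < 580 and income == 0:
        return "reject"
    return "refer to a human underwriter"

print(prescreen_loan(employed=False, credit_score=540, income=0.0))   # reject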
Technical Support Systems
A technical support system powered by shell programming can also be extremely effective for screening out simple solutions. Once the system is populated with common problems and solutions, users can benefit from the individual applications to each situation, while the tech department benefits from a reduced number of routine calls.
Expert System Applications in Real Life
Examples of expert system applications, such as loan evaluators and technical support systems, show how shell programming is very useful to businesses in real life. Although there are many benefits, it's worthwhile to keep in mind that this type of system is limited both by the quality of the data in the knowledge base, and the thoroughness of the rules by which the system operates. A low quality of information or poorly expressed logical arguments can result in the failure of the system to reach appropriate conclusions. In other words, expert systems are the embodiment of the phrase, "Garbage in, garbage out."
Artificial Intelligence Applications
Looking for more information about artificial intelligence applications in real life? Check out ways to use smart home appliances for a better quality of life: Combining artificial intelligence with home automation is great for anyone with mobility or dexterity issues, including the elderly and disabled. Also, disabled students are often at a disadvantage in the classroom. Voice recognition software improves communication, enables note-taking, and increases participation in classroom activities. Speaking of kids, robots are acting as therapy assistants to help parents and therapists in teaching special needs children with autism. Imagine - fuzzy logic, helping autistic kids!


(courtesy & source google)

Software engineering (courtesy google)

Requirements Analysis Process: Requirements Elicitation, Analysis And Specification
Requirements Analysis is the process of understanding the customer needs and expectations from a proposed system or application and is a well-defined stage in the Software Development Life Cycle model.
Requirements are a description of how a system should behave or a description of system properties or attributes. It can alternatively be a statement of ‘what’ an application is expected to do.
Requirements analysis in systems engineering and software engineering encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.
Requirements analysis is critical to the success of a development project.[2] Requirements must be documented, actionable, measurable, testable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Requirements can be architectural, structural, behavioral, functional, and non-functional.
Conceptually, requirements analysis includes three types of activity:
• Eliciting requirements: the task of communicating with customers and users to determine what their requirements are. This is sometimes also called requirements gathering.
• Analyzing requirements: determining whether the stated requirements are unclear, incomplete, ambiguous, or contradictory, and then resolving these issues.
• Recording requirements: Requirements might be documented in various forms, such as natural-language documents, use cases, user stories, or process specifications.
Requirements analysis can be a long and arduous process during which many delicate psychological skills are involved. New systems change the environment and relationships between people, so it is important to identify all the stakeholders, take into account all their needs and ensure they understand the implications of the new systems. Analysts can employ several techniques to elicit the requirements from the customer. Historically, this has included such things as holding interviews, or holding focus groups (more aptly named in this context as requirements workshops) and creating requirements lists. More modern techniques include prototyping, and use cases. Where necessary, the analyst will employ a combination of these methods to establish the exact requirements of the stakeholders, so that a system that meets the business needs is produced.
Requirements engineering
Systematic requirements analysis is also known as requirements engineering.[3] It is sometimes referred to loosely by names such as requirements gathering, requirements capture, or requirements specification. The term requirements analysis can also be applied specifically to the analysis proper, as opposed to elicitation or documentation of the requirements, for instance. Requirements Engineering can be divided into discrete chronological steps:
• Requirements elicitation,
• Requirements analysis and negotiation,
• Requirements specification,
• System modeling,
• Requirements validation,
• Requirements management.
Requirement engineering according to Laplante (2007) is "a subdiscipline of systems engineering and software engineering that is concerned with determining the goals, functions, and constraints of hardware and software systems."[4] In some life cycle models, the requirement engineering process begins with a feasibility study activity, which leads to a feasibility report. If the feasibility study suggests that the product should be developed, then requirement analysis can begin. Even when requirement analysis precedes the feasibility study, which may foster outside-the-box thinking, feasibility should still be determined before the requirements are finalized.
Stakeholder identification
See Stakeholder analysis for a discussion of business uses. Stakeholders (SH) are people or organizations (legal entities such as companies, standards bodies) which have a valid interest in the system. They may be affected by it either directly or indirectly. A major new emphasis in the 1990s was a focus on the identification of stakeholders. It is increasingly recognized that stakeholders are not limited to the organization employing the analyst. Other stakeholders will include:
• anyone who operates the system (normal and maintenance operators)
• anyone who benefits from the system (functional, political, financial and social beneficiaries)
• anyone involved in purchasing or procuring the system. In a mass-market product organization, product management, marketing and sometimes sales act as surrogate consumers (mass-market customers) to guide development of the product
• organizations which regulate aspects of the system (financial, safety, and other regulators)
• people or organizations opposed to the system (negative stakeholders; see also Misuse case)
• organizations responsible for systems which interface with the system under design
• those organizations who integrate horizontally with the organization for whom the analyst is designing the system
Stakeholder interviews
Stakeholder interviews are a common technique used in requirement analysis. Interviews are generally idiosyncratic in nature, focused upon the perspectives and perceived needs of the individual stakeholder, and very often conducted without larger enterprise or system context; despite this deficiency, they have the advantage of yielding a much richer understanding of the stakeholder's unique business processes, decision-relevant business rules, and perceived needs. Consequently, this technique can serve as a means of obtaining the highly focused knowledge that is often not elicited in Joint Requirements Development sessions, where the stakeholder's attention is compelled to assume a more cross-functional context. Moreover, the in-person nature of the interviews provides a more relaxed environment where lines of thought may be explored at length.
Joint Requirements Development (JRD) Sessions
Requirements often have cross-functional implications that are unknown to individual stakeholders and often missed or incompletely defined during stakeholder interviews. These cross-functional implications can be elicited by conducting JRD sessions in a controlled environment, facilitated by a trained facilitator, wherein stakeholders participate in discussions to elicit requirements, analyze their details and uncover cross-functional implications. A dedicated scribe and Business Analyst should be present to document the discussion. Utilizing the skills of a trained facilitator to guide the discussion frees the Business Analyst to focus on the requirements definition process.
JRD Sessions are analogous to Joint Application Design Sessions. In the former, the sessions elicit requirements that guide design, whereas the latter elicit the specific design features to be implemented in satisfaction of elicited requirements.
Contract-style requirement lists
One traditional way of documenting requirements has been contract style requirement lists. In a complex system such requirements lists can run to hundreds of pages.
An appropriate metaphor would be an extremely long shopping list. Such lists are very much out of favour in modern analysis, as they have proved spectacularly unsuccessful at achieving their aims, but they are still seen to this day.
Strengths
• Provides a checklist of requirements.
• Provides a contract between the project sponsor(s) and developers.
• For a large system, can provide a high-level description.
Weaknesses
• Such lists can run to hundreds of pages. It is virtually impossible to read such documents as a whole and have a coherent understanding of the system.
• Such requirements lists abstract all the requirements, so there is little context.
• This abstraction makes it impossible to see how the requirements fit or work together.
• This abstraction makes it difficult to prioritize requirements properly; while a list does make it easy to prioritize each individual item, removing one item out of context can render an entire use case or business requirement useless.
• This abstraction increases the likelihood of misinterpreting the requirements; as more people read them, the number of (different) interpretations of the envisioned system increases.
• This abstraction means that it's extremely difficult to be sure that you have the majority of the requirements. Necessarily, these documents speak in generality; but the devil, as they say, is in the details.
• These lists create a false sense of mutual understanding between the stakeholders and developers.
• These contract style lists give the stakeholders a false sense of security that the developers must achieve certain things. However, due to the nature of these lists, they inevitably miss out crucial requirements which are identified later in the process. Developers can use these discovered requirements to renegotiate the terms and conditions in their favour.
• These requirements lists are no help in system design, since they do not lend themselves to application.
Alternative to requirement lists
As an alternative to the large, pre-defined requirement lists, Agile Software Development uses User stories to define a requirement in everyday language.
Measurable goals
Main article: Goal modeling
Best practices take the composed list of requirements merely as clues and repeatedly ask "why?" until the actual business purposes are discovered. Stakeholders and developers can then devise tests to measure what level of each goal has been achieved thus far. Such goals change more slowly than the long list of specific but unmeasured requirements. Once a small set of critical, measured goals has been established, rapid prototyping and short iterative development phases may proceed to deliver actual stakeholder value long before the project is half over.
Prototypes
In the mid-1980s, prototyping was seen as the best solution to the requirements analysis problem. Prototypes are Mockups of an application. Mockups allow users to visualize an application that hasn't yet been constructed. Prototypes help users get an idea of what the system will look like, and make it easier for users to make design decisions without waiting for the system to be built. Major improvements in communication between users and developers were often seen with the introduction of prototypes. Early views of applications led to fewer changes later and hence reduced overall costs considerably.
However, over the next decade, while proving a useful technique, prototyping did not solve the requirements problem:
• Managers, once they see a prototype, may have a hard time understanding that the finished design will not be produced for some time.
• Designers often feel compelled to use patched together prototype code in the real system, because they are afraid to 'waste time' starting again.
• Prototypes principally help with design decisions and user interface design. However, they cannot tell you what the requirements originally were.
• Designers and end-users can focus too much on user interface design and too little on producing a system that serves the business process.
• Prototypes work well for user interfaces, screen layout and screen flow but are not so useful for batch or asynchronous processes which may involve complex database updates and/or calculations.
Prototypes can be flat diagrams (often referred to as wireframes) or working applications using synthesized functionality. Wireframes are made in a variety of graphic design documents, and often remove all color from the design (i.e. use a greyscale color palette) in instances where the final software is expected to have graphic design applied to it. This helps to prevent confusion over the final visual look and feel of the application.
Use cases
Main article: Use case
A use case is a technique for documenting the potential requirements of a new system or software change. Each use case provides one or more scenarios that convey how the system should interact with the end-user or another system to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end-user or domain expert. Use cases are often co-authored by requirements engineers and stakeholders.
Use cases are deceptively simple tools for describing the behavior of software or systems. A use case contains a textual description of all of the ways which the intended users could work with the software or system. Use cases do not describe any internal workings of the system, nor do they explain how that system will be implemented. They simply show the steps that a user follows to perform a task. All the ways that users interact with a system can be described in this manner.
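For illustration, a minimal use case in this style (a hypothetical cash-withdrawal task, invented here as an example) might read:

Use case: Withdraw cash
Actor: Bank customer
Scenario:
1. The customer inserts a card and enters a PIN.
2. The system verifies the PIN and asks for an amount.
3. The customer enters an amount within the daily limit.
4. The system dispenses the cash and returns the card.

Note that nothing in it says how the PIN is verified or how the dispenser works - only the steps the user sees.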
Software requirements specification
A software requirements specification (SRS) is a complete description of the behavior of the system to be developed. It includes a set of use cases that describe all of the interactions that the users will have with the software. Use cases are also known as functional requirements. In addition to use cases, the SRS also contains nonfunctional (or supplementary) requirements. Non-functional requirements are requirements which impose constraints on the design or implementation (such as performance requirements, quality standards, or design constraints).
Recommended approaches for the specification of software requirements are described by IEEE 830-1998. This standard describes possible structures, desirable contents, and qualities of a software requirements specification.
Types of Requirements
Customer Requirements
Statements of fact and assumptions that define the expectations of the system in terms of mission objectives, environment, constraints, and measures of effectiveness and suitability (MOE/MOS). The customers are those that perform the eight primary functions of systems engineering, with special emphasis on the operator as the key customer. Operational requirements will define the basic need and, at a minimum, answer the questions posed in the following listing:[1]
• Operational distribution or deployment: Where will the system be used?
• Mission profile or scenario: How will the system accomplish its mission objective?
• Performance and related parameters: What are the critical system parameters to accomplish the mission?
• Utilization environments: How are the various system components to be used?
• Effectiveness requirements: How effective or efficient must the system be in performing its mission?
• Operational life cycle: How long will the system be in use by the user?
• Environment: In what environments will the system be expected to operate effectively?
Architectural Requirements
Architectural requirements explain what has to be done by identifying the necessary system architecture of a system.
Structural Requirements
Structural requirements explain what has to be done by identifying the necessary structure of a system.
Behavioral Requirements
Behavioral requirements explain what has to be done by identifying the necessary behavior of a system.
Functional Requirements
Functional requirements explain what has to be done by identifying the necessary task, action or activity that must be accomplished. The functional requirements will be used as the top-level functions for functional analysis.[1]
Non-functional Requirements
Non-functional requirements are requirements that specify criteria that can be used to judge the operation of a system, rather than specific behaviors.
Performance Requirements
The extent to which a mission or function must be executed; generally measured in terms of quantity, quality, coverage, timeliness or readiness. During requirements analysis, performance (how well does it have to be done) requirements will be interactively developed across all identified functions based on system life cycle factors; and characterized in terms of the degree of certainty in their estimate, the degree of criticality to system success, and their relationship to other requirements.[1]
Design Requirements
The “build to,” “code to,” and “buy to” requirements for products and “how to execute” requirements for processes expressed in technical data packages and technical manuals.[1]
Derived Requirements
Requirements that are implied or transformed from a higher-level requirement. For example, a requirement for long range or high speed may result in a design requirement for low weight.[1]
Allocated Requirements
A requirement that is established by dividing or otherwise allocating a high-level requirement into multiple lower-level requirements. Example: A 100-pound item that consists of two subsystems might result in weight requirements of 70 pounds and 30 pounds for the two lower-level items.[1]

Given the multiple levels of interaction between users, business processes and devices in global corporations today, there are simultaneous and complex requirements from a single application, from various levels within an organization and outside it as well.
The Software Requirements Analysis Process covers the complex task of eliciting and documenting the requirements of all these users, modeling and analyzing these requirements and documenting them as a basis for system design.
A dedicated and specialized Requirements Analyst is best equipped to handle the job. The Requirements Analysis function may also fall under the scope of Project Manager, Program Manager or Business Analyst, depending on the organizational hierarchy.
Software Requirements Analysis and Documentation Processes are critical to software project success. Requirements Engineering is an emerging field which deals with the systematic handling of requirements.
Why is Requirements Analysis necessary?
Studies reveal that inadequate attention to Software Requirements Analysis at the beginning of a project is the most common cause for critically vulnerable projects that often do not deliver even on the basic tasks for which they were designed. There are instances of corporations that have spent huge amounts on software projects where the end application eventually does not perform the tasks it was intended for.
Software companies are now investing time and resources into effective and streamlined Software Requirements Analysis Processes as a prerequisite to successful projects that align with the client’s business goals and meet the project’s requirement specifications.
Steps in the Requirements Analysis Process

(figure courtesy & source google)

I. Fix system boundaries
This initial step helps in identifying how the new application integrates with the business processes, how it fits into the larger picture and what its scope and limitations will be.
II. Identify the customer
In more recent times there has been a focus on identifying who the ‘users’ or ‘customers’ of an application are. Referred to broadly as the ‘stakeholders’, these indicate the group or groups of people who will be directly or indirectly impacted by the new application.
By defining in concrete terms who the intended user is, the Requirements Analyst knows in advance where he has to look for answers. The Requirements Elicitation Process should focus on the wish-list of this defined group to arrive at a valid requirements list.
III. Requirements elicitation
Information is gathered from the multiple stakeholders identified. The Requirements Analyst draws out from each of these groups what their requirements from the application are and what they expect the application to accomplish.
Considering the multiple stakeholders involved, the list of requirements gathered in this manner could run into pages. The level of detail of the requirements list is based on the number and size of user groups, the degree of complexity of business processes and the size of the application.
a) Problems faced in Requirements Elicitation
• Ambiguous understanding of processes
• Inconsistency within a single process by multiple users
• Insufficient input from stakeholders
• Conflicting stakeholder interests
• Changes in requirements after project has begun
A Requirements Analyst has to interact closely with multiple work-groups, often with conflicting goals, to arrive at a bona fide requirements list. Strong communication and people skills along with sound programming knowledge are prerequisites for an expert Requirements Analyst.
b) Tools used in Requirements Elicitation
Traditional methods of Requirements Elicitation included stakeholder interviews and focus group studies. Other methods like flowcharting of business processes and the use of existing documentation like user manuals, organizational charts, process models and systems or process specifications, on-site analysis, interviews with end-users, market research and competitor analysis were also used extensively in Requirements Elicitation.
However current research in Software Requirements Analysis Process has thrown up modern tools that are better equipped to handle the complex and multilayered process of Requirements Elicitation. Some of the current Requirements Elicitation tools in use are:
• Prototypes
• Use cases
• Data flow diagrams
• Transition process diagrams
• User interfaces
IV. Requirements Analysis Process
Once all stakeholder requirements have been gathered, a structured analysis of these can be done after modeling the requirements. Some of the Software Requirements Analysis techniques used are requirements animation, automated reasoning, knowledge-based critiquing, consistency checking, analogical and case-based reasoning.
V. Requirements Specification
Requirements, once elicited, modeled and analyzed should be documented in clear, unambiguous terms. A written requirements document is critical so that its circulation is possible among all stakeholders including the client, user-groups, the development and testing teams. Current requirements engineering practices reveal that a well-designed, clearly documented Requirements Specification is vital and serves as a:
• Base for validating the stated requirements and resolving stakeholder conflicts, if any
• Contract between the client and development team
• Basis for systems design for the development team
• Bench-mark for project managers for planning project development lifecycle and goals
• Source for formulating test plans for QA and testing teams
• Resource for requirements management and requirements tracing
• Basis for evolving requirements over the project life span
Software requirements specification involves scoping the requirements so that it meets the customer’s vision. It is the result of collaboration between the end-user who is often not a technical expert, and a Technical/Systems Analyst, who is likely to approach the situation in technical terms.
The software requirements specification is a document that lists out stakeholders’ needs and communicates these to the technical community that will design and build the system. The challenge of a well-written requirements specification is to clearly communicate to both these groups and all the sub-groups within.
To overcome this, Requirements Specifications may be documented separately as
• User Requirements - written in clear, precise language with plain text and use cases, for the benefit of the customer and end-user
• System Requirements - expressed as a programming or mathematical model, addressing the Application Development Team and QA and Testing Team.
Requirements Specification serves as a starting point for software, hardware and database design. It describes the function (Functional and Non-Functional specifications) of the system, performance of the system and the operational and user-interface constraints that will govern system development.
VI. Requirements Management
Requirements Management is the comprehensive process that includes all aspects of software requirements analysis and additionally ensures verification, validation and traceability of requirements. Effective requirements management practices guarantee that all system requirements are stated unambiguously, that omissions and errors are corrected and that evolving specifications can be incorporated later in the project lifecycle.

03 March 2011

AO* algo

Rather than the two lists, OPEN and CLOSED, that were used in the A* algorithm, the AO* algorithm will use a single structure GRAPH, representing the part of the search graph that has been explicitly generated so far. Each node in the graph will point both down to its immediate successors and up to its immediate predecessors. Each node in the graph will also have associated with it an h' value, an estimate of the cost of a path from itself to a set of solution nodes. We will not store g (the cost of getting from the start node to the current node) as we did in the A* algorithm. It is not possible to compute a single such value since there may be many paths to the same state. And such a value is not necessary because of the top-down traversing of the best-known path, which guarantees that only nodes that are on the best path will ever be considered for expansion. So h' will serve as the estimate of goodness of a node.
Algorithm: AO*
1. Let GRAPH consist only of the node representing the initial state. (Call this node INIT.) Compute h'(INIT).
2. Until INIT is labeled SOLVED or until INIT's h' value becomes greater than FUTILITY, repeat the following procedure:
a. Trace the labeled arcs from INIT and select for expansion one of the as yet unexpanded nodes that occurs on this path. Call the selected node NODE.
b. Generate the successors of NODE. If there are none, then assign FUTILITY as the h' value of NODE. This is equivalent to saying that NODE is not solvable. If there are successors, then for each one (called SUCCESSOR), do the following:
i. Add SUCCESSOR to GRAPH.
ii. If SUCCESSOR is a terminal node, label it SOLVED and assign it an h' value of 0.
iii. If SUCCESSOR is not a terminal node, compute its h' value.
c. Propagate the newly discovered information up the graph by doing the following: Let S be a set of nodes that have been labeled SOLVED or whose h' values have been changed and so need to have values propagated back to their parents. Initialize S to NODE. Until S is empty, repeat the following procedure:
i. If possible, select from S a node none of whose descendants in GRAPH occurs in S. If there is no such node, select any node from S. Call this node CURRENT, and remove it from S.
ii. Compute the cost of each of the arcs emerging from CURRENT. The cost of each arc is equal to the sum of the h' values of each of the nodes at the end of the arc plus whatever the cost of the arc itself is. Assign as CURRENT's new h' value the minimum of the costs just computed for the arcs emerging from it.
iii. Mark the best path out of CURRENT by marking the arc that had the minimum cost as computed in the previous step.
iv. Mark CURRENT SOLVED if all of the nodes connected to it through the new labeled arc have been labeled SOLVED.
v. If CURRENT has been labeled SOLVED or if the cost of CURRENT was just changed, then its new status must be propagated back up the graph. So add all of the ancestors of CURRENT to S.
It is worth noticing a couple of points about the operation of this algorithm. In step 2(c)v, the ancestors of a node whose cost was altered are added to the set of nodes whose costs must also be revised. As stated, the algorithm will insert all the node's ancestors into the set, which may result in the propagation of the cost change back up through a large number of paths that are already known not to be very good. For example, in Figure 3.11, it is clear that the path through C will always be better than the path through B, so work expended on the path through B is wasted. But if the cost of E is revised and that change is not propagated up through B as well as through C, B may appear to be better. For example, if, as a result of expanding node E, we update its cost to 10, then the cost of C will be updated to 11. If this is all that is done, then, when A is examined, the path through B will have a cost of only 11 compared to 12 for the path through C, and it will be labeled erroneously as the most promising path.
In this example, the mistake might be detected at the next step, during which D will be expanded. If its cost changes and is propagated back to B, B's cost will be recomputed and the new cost of E will be used. Then the new cost of B will propagate back to A. At that point, the path through C will again be better. All that happened was that some time was wasted in expanding D. But if the node whose cost has changed is farther down in the search graph, the error may never be detected. An example of this is shown in Figure 3.12(a). If the cost of G is revised as shown in Figure 3.12(b) and if it is not immediately propagated back to E, then the change will never be recorded and a nonoptimal solution through B may be discovered.
A second point concerns the termination of the backward cost propagation of step 2(c). Because GRAPH may contain cycles, there is no guarantee that this process will terminate simply because it reaches the "top" of the graph. It turns out that the process can be guaranteed to terminate for a different reason, though. The proof of this is left as an exercise.
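The steps above map fairly directly onto code. Below is a minimal Python sketch over a small, hypothetical AND-OR graph with unit arc costs; it compresses steps 2(a)-(c) into one recursive expand-and-revise pass, assumes a finite acyclic graph, and skips the FUTILITY-pruning and cycle-handling subtleties, so treat it as an illustration of the bookkeeping rather than a full implementation.

FUTILITY = float("inf")

# Hypothetical AND-OR graph: each entry lists the connectors out of a node;
# a connector is the list of children reached together (an AND arc).
graph = {
    "A": [["B"], ["C"]],   # A is solved via B, OR via C
    "B": [["D", "E"]],     # B needs D AND E together
    "C": [["F"]],
    "D": [], "E": [], "F": [],   # terminal nodes
}
h = {"A": 2, "B": 1, "C": 1, "D": 0, "E": 0, "F": 0}   # h' estimates; terminals 0
solved = {n: not graph[n] for n in graph}              # terminals start SOLVED
best = {}   # the marked (minimum-cost) connector out of each expanded node

def connector_costs(node):
    # cost of a connector = arc cost (1 per child here) plus the children's h'
    return [(sum(1 + h[c] for c in conn), conn) for conn in graph[node]]

def ao_star(node):
    if solved[node]:
        return
    if not graph[node]:            # no successors: unsolvable, as in step 2(b)
        h[node] = FUTILITY
        return
    h[node], best[node] = min(connector_costs(node))   # mark the best arc
    for child in best[node]:       # descend along the marked best connector
        ao_star(child)
    # revise h' bottom-up; label SOLVED when the best arc's children are SOLVED
    h[node], best[node] = min(connector_costs(node))
    solved[node] = all(solved[c] for c in best[node])

while not solved["A"] and h["A"] < FUTILITY:
    ao_star("A")
print(h["A"], best)   # solution cost and the marked best arcs

On this toy graph, the first pass expands B, whose revised cost makes the path through C look better; the second pass then expands C and labels A SOLVED with the C arc marked as best - the same bottom-up revision that step 2(c) describes.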

14 February 2011

some notes on AI, courtesy of google search

(AI) The subfield of computer science concerned with the concepts and methods of symbolic inference by computer and symbolic knowledge representation for use in making inferences. AI can be seen as an attempt to model aspects of human thought on computers. It is also sometimes defined as trying to solve by computer any problem that a human can solve faster.
Examples of AI problems are computer vision (building a system that can understand images as well as a human) and natural language processing (building a system that can understand and speak a human language as well as a human). These may appear to be modular, but all attempts so far (1993) to solve them have foundered on the amount of context information and "intelligence" they seem to require.
The term is often used as a selling point, e.g. to describe programming that drives the behaviour of computer characters in a game. This is often no more intelligent than "Kill any humans you see; keep walking; avoid solid objects; duck if a human with a gun can see you"
1. The ability of a computer or other machine to perform those activities that are normally thought to require intelligence.
2. The branch of computer science concerned with the development of machines having this ability.
3. artificial intelligence (AI): The capability of a device to perform functions that are normally associated with human intelligence, such as reasoning and optimization through experience. Note: AI is the branch of computer science that attempts to approximate the results of human reasoning by organizing and manipulating factual and heuristic knowledge. Areas of AI activity include expert systems, natural language understanding, speech recognition, vision, and robotics.
The branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at Dartmouth College. Artificial intelligence includes
• games playing: programming computers to play games such as chess and checkers
• expert systems : programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms)
• natural language : programming computers to understand natural human languages
• neural networks : Systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains
• robotics : programming computers to see and hear and react to other sensory stimuli
Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM super-computer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.
In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.
Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators. There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited -- you must speak slowly and distinctly.
In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations.
Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.
There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it.

A production system consists of a set of rules in if-then form: that is, given a particular situation, what actions are to be performed. For example: if it is raining, then take an umbrella.
A production system also contains a knowledge base, a control strategy and a rule applier. To solve a problem, the system compares the present situation with the left-hand side of the rules. If there is a match, the system performs the actions described in the right-hand side of the corresponding rule.
A production system (or production rule system) is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior. These rules, termed productions, are a basic representation found useful in automated planning, expert systems and action selection. A production system provides the mechanism necessary to execute productions in order to achieve some goal for the system.
Productions consist of two parts: a sensory precondition (or "IF" statement) and an action (or "THEN"). If a production's precondition matches the current state of the world, then the production is said to be triggered. If a production's action is executed, it is said to have fired. A production system also contains a database, sometimes called working memory, which maintains data about current state or knowledge, and a rule interpreter. The rule interpreter must provide a mechanism for prioritizing productions when more than one is triggered.
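A minimal sketch of such a rule interpreter, reusing the raining/umbrella rule from above (the rule names and the simple "fire each rule at most once" conflict-resolution scheme are illustrative choices, not a real engine):

working_memory = {"raining"}          # current state of the world

productions = [
    # (name, precondition facts, facts the action adds)
    ("take-umbrella", {"raining"}, {"umbrella-taken"}),
    ("stay-dry", {"umbrella-taken"}, {"dry"}),
]

fired = set()
while True:
    # match: every production whose precondition holds is "triggered"
    triggered = [p for p in productions
                 if p[1] <= working_memory and p[0] not in fired]
    if not triggered:
        break
    name, condition, additions = triggered[0]   # conflict resolution: first match
    working_memory |= additions                 # fire: apply the action
    fired.add(name)

print(working_memory)   # {'raining', 'umbrella-taken', 'dry'}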

Production Systems
A production system is a tool used in artificial intelligence and especially within the applied AI domain known as expert systems. Production systems consist of a database of rules, a working memory, a matcher, and a procedure that resolves conflicts between rules. These components are outlined below. Several different versions of production systems have been developed, including the OPS series, which culminated in OPS5 (see Forgy). OPS5 was modified to implement the Soar production system described elsewhere in this document.
Matching
The rules of a production system consist of a condition and an action in the form (if x then y). The left-hand-side conditions (x and y may be arbitrarily complex conjunctions of expressions) are compared against the elements of working memory to determine if the conditions are satisfied. Matching is a computationally intense procedure, although the RETE algorithm of OPS5 is significantly more efficient than a simple condition-by-condition matcher.
Conflict Resolution
At any point in processing, several productions may match to the elements of working memory simultaneously. Since production systems are normally implemented on serial computers, this results in a conflict: there is a non-unique choice about which action to take next. Most conflict resolution schemes are very simple, dependent on the number of conditions in the production, the time stamps (ages) of the elements to which the conditions matched, or completely random. One of the advantages of production systems is that the computational complexity of the matcher, while large, is deterministically finite and the conflict resolution scheme is trivial. This is in contrast to logicist systems in which declarative knowledge may be accessed instantly but the time required to use the knowledge (in a theorem prover, for instance) cannot be pre-determined.
Actions
The actions of productions are manipulations to working memory. Elements may be added, deleted and modified. Since elements may be added and deleted, the production system is non-monotonic: the addition of new knowledge may obviate previous knowledge. Non-monotonicity increases the significance of the conflict resolution scheme since productions which match in one cycle may not match in the following because of the action of the intervening production. Some production systems are monotonic, however, and only add elements to working memory, never deleting or modifying knowledge through the action of production rules. Such systems may be regarded as implicitly parallel since all rules that match will be fired regardless of which is fired first.
Architectures using explicit productions include:
• A Meta-reasoning Architecture for 'X' (MAX)
• Soar
• Teton
Production System. Types of Production Systems.
A system that uses this form of knowledge representation is called a production system.

Example:-

IF the initial state is a goal state THEN quit.

The major components of an AI production system are

i. A global database
ii. A set of production rules, and
iii. A control system

The global database is the central data structure used by an AI production system. The production rules operate on the global database. Each rule has a precondition that is either satisfied or not by the database. If the precondition is satisfied, the rule can be applied. Application of the rule changes the database. The control system chooses which applicable rule should be applied and ceases computation when a termination condition on the database is satisfied. If several rules are to fire at the same time, the control system resolves the conflicts.
Four classes of production systems:-

1. A monotonic production system
2. A non-monotonic production system
3. A partially commutative production system
4. A commutative production system

Advantages of production systems:-

1. Production systems provide an excellent tool for structuring AI programs.
2. Production systems are highly modular because the individual rules can be added, removed or modified independently.
3. The production rules are expressed in a natural form, so the statements contained in the knowledge base should be a recording of an expert thinking out loud.

Disadvantages of Production Systems:-



One important disadvantage is the fact that it may be very difficult analyse the flow of control within a production system because the individual rules don’t call each other.
Production systems describe the operations that can be performed in a search for a solution to the problem. They can be classified as follows.

Monotonic production system:- A system in which the application of a rule never prevents the later application of another rule that could also have been applied at the time the first rule was selected.

Partially commutative production system:-

A production system with the property that if the application of a particular sequence of rules transforms state X into state Y, then any allowable permutation of those rules also transforms state X into state Y.

Theorem proving falls under the monotonic, partially commutative category. Problems such as robot navigation, the blocks world and the 8-puzzle are non-monotonic, partially commutative. Chemical analysis and synthesis come under monotonic, not partially commutative systems. Playing the game of bridge comes under the non-monotonic, not partially commutative category.

For any problem, several production systems exist. Some will be more efficient than others. Though it may seem that there is no relationship between kinds of problems and kinds of production systems, in practice there is a definite relationship.

Partially commutative, monotonic production systems are useful for solving ignorable problems. These systems are important from an implementation standpoint because they can be implemented without the ability to backtrack to previous states when it is discovered that an incorrect path was followed. Such systems increase efficiency, since it is not necessary to keep track of the changes made in the search process.

Non-monotonic, partially commutative systems are useful for problems in which changes occur but can be reversed and in which the order of operations is not critical (e.g., the 8-puzzle).

Production systems that are not partially commutative are useful for many problems in which irreversible changes occur, such as chemical analysis. When dealing with such systems, the order in which operations are performed is very important, and hence correct decisions have to be made the first time.

Production System Characteristics
Production systems are important in building intelligent machines, as they provide a good set of production rules for solving problems.
There are four types of production system characteristics, namely
1. Monotonic production system
2. Non-monotonic production system
3. Commutative law based production system, and lastly
4. Partially commutative law based production system
1. Monotonic Production System (MPS): A system in which the application of a rule never prevents the later application of another rule that could also have been applied at the time the first rule was selected.
2. Non-monotonic Production System (NMPS): A system in which the application of a rule can prevent the later application of another rule that could have been applied at the time the first rule was selected, i.e. a system in which the monotonic property above does not hold.
3. Commutative Law Based Production System (CBPS): A system that is both monotonic and partially commutative.
4. Partially Commutative Production System (PCPS): A system with the property that if the application of a sequence of rules transforms state X into state Y, then any allowable permutation of those rules also transforms state X into state Y.
Laws                        Monotonic            Non-monotonic
Partially commutative       Theorem proving      Robot navigation
Not partially commutative   Chemical synthesis   Bridge game
Questions may arise here, such as:
- Can production systems be described by a set of characteristics?
- Can we draw a relationship between problem types and the types of production systems suited to solve them? Yes, we can, by using the table above.
What is Transposition Table?
A transposition table is a table of previously encountered game states, together with their backed-up minimax values. Whenever a new state is generated, if it is already stored in the transposition table, its stored value is used instead of searching the tree below the node. Transposition tables can be used very effectively, so that the reachable search depth in chess, for example, can be doubled.
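As a sketch of the idea, a transposition table can simply be a dictionary keyed by the game state. The toy game below (take 1 or 2 stones; the player who takes the last stone wins) is invented for illustration.

# Transposition table wrapped around plain minimax on a toy Nim-like game.
transposition = {}

def minimax(stones, maximizing):
    key = (stones, maximizing)
    if key in transposition:            # position seen before: reuse its value
        return transposition[key]
    if stones == 0:                     # previous player took the last stone
        value = -1 if maximizing else 1
    elif maximizing:
        value = max(minimax(stones - take, False) for take in (1, 2) if take <= stones)
    else:
        value = min(minimax(stones - take, True) for take in (1, 2) if take <= stones)
    transposition[key] = value          # store the backed-up minimax value
    return value

print(minimax(12, True))   # -> -1: 12 stones is a loss for the player to move
print(len(transposition))  # far fewer entries than nodes in the naive game tree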
Complexity In Heuristic Search
Heuristic Search: Complexity Of Finding Optimal Solutions
The time complexity of a heuristic search algorithm depends on the accuracy of the heuristic function. For example, if the heuristic evaluation function is an exact estimator, then A* runs in linear time, expanding only those nodes on an optimal solution path. Conversely, with a heuristic that returns zero everywhere, A* becomes uniform-cost search, which has exponential complexity.
In general, the time complexity of A* and IDA* is an exponential function of the error in the heuristic function. For example, if the heuristic has constant absolute error, meaning that it never underestimates by more than a constant amount regardless of the magnitude of the estimate, then the running time of A* is linear with respect to the solution cost. A more realistic assumption is constant relative error, which means that the error is a fixed percentage of the quantity being estimated. In that case, the running times of A* and IDA* are exponential. The base of the exponent, however, is smaller than the brute-force branching factor, reducing the asymptotic complexity and allowing larger problems to be solved. For example, using appropriate admissible heuristic functions, IDA* can optimally solve random instances of the twenty-four puzzle and Rubik's cube.
Depth First Search
Depth First Search Algorithm
Depth First Search (DFS) searches deeper into the problem space. Depth-first search always generates a successor of the deepest unexpanded node. It uses a last-in first-out stack for keeping the unexpanded nodes. More commonly, depth-first search is implemented recursively, with the recursion stack taking the place of an explicit node stack.
Algorithm: Depth First Search
1. If the initial state is a goal state, quit and return success.
2. Otherwise, loop until success or failure is signaled:
a) Generate a successor, say E, of the initial state. If there are no more successors, signal failure.
b) Call Depth-First Search with E as the initial state.
c) If success is returned, signal success. Otherwise continue in this loop.
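A minimal recursive sketch of this algorithm in Python, with a depth cutoff added (see the disadvantages below); goal_test and successors are assumed callables supplied by the problem.

def depth_first_search(state, goal_test, successors, cutoff=20):
    if goal_test(state):
        return [state]                     # success: return the path found
    if cutoff == 0:
        return None                        # cutoff reached: treat as failure
    for e in successors(state):            # generate a successor E ...
        path = depth_first_search(e, goal_test, successors, cutoff - 1)
        if path is not None:               # ... and recurse with E as initial state
            return [state] + path
    return None                            # no successor succeeded: failure

# Toy usage: search binary strings for "101", extending one bit at a time.
succ = lambda s: [s + "0", s + "1"]
print(depth_first_search("", lambda s: s == "101", succ, cutoff=3))
# -> ['', '1', '10', '101']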
Advantages of Depth-First Search
1. The advantage of depth-first search is that its memory requirement is only linear in the maximum search depth. This is in contrast with breadth-first search, which requires much more space. The reason is that the algorithm only needs to store a stack of nodes on the path from the root to the current node.
2. The time complexity of a depth-first search to depth d is O(b^d), since it generates the same set of nodes as breadth-first search, but simply in a different order. Thus, in practice, depth-first search is time-limited rather than space-limited.
3. If depth-first search finds a solution without exploring much of the space, the time and space it takes will be very small.

Disadvantages of Depth-First Search
1. The disadvantage of depth-first search is that it may go down the left-most path forever. Even a finite graph can generate an infinite tree. One solution to this problem is to impose a cutoff depth on the search. The ideal cutoff is the solution depth d, but this value is rarely known in advance of actually solving the problem. If the chosen cutoff depth is less than d, the algorithm will fail to find a solution, whereas if the cutoff depth is greater than d, a large price is paid in execution time, and the first solution found may not be an optimal one.
2. Depth-first search is not guaranteed to find a solution.
3. There is no guarantee of finding a minimal solution if more than one solution exists.
Uniform-Cost Search Algorithm
If all the edges in the search graph do not have the same cost, then breadth-first search generalizes to uniform-cost search. Instead of expanding nodes in order of their depth from the root, uniform-cost search expands nodes in order of their cost from the root. At each step, the next node n to be expanded is one whose cost g(n) is lowest, where g(n) is the sum of the edge costs from the root to node n. The nodes are stored in a priority queue. This algorithm is also known as Dijkstra's single-source shortest-path algorithm.
Whenever a node is chosen for expansion by uniform-cost search, a lowest-cost path to that node has been found. The worst-case time complexity of uniform-cost search is O(b^(c/m)), where c is the cost of an optimal solution and m is the minimum edge cost. Unfortunately, it also suffers the same memory limitation as breadth-first search.
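A sketch of the algorithm using Python's heapq module as the priority queue; successors(n) yielding (child, edge_cost) pairs is an assumed interface, and the graph below is invented toy data.

import heapq
from itertools import count

def uniform_cost_search(start, goal_test, successors):
    """Expand nodes in order of path cost g(n) using a priority queue."""
    tie = count()                 # tie-breaker so the heap never compares nodes
    frontier = [(0, next(tie), start, [start])]
    explored = set()
    while frontier:
        g, _, node, path = heapq.heappop(frontier)
        if node in explored:
            continue
        if goal_test(node):       # a lowest-cost path to node has been found
            return g, path
        explored.add(node)
        for child, cost in successors(node):
            if child not in explored:
                heapq.heappush(frontier, (g + cost, next(tie), child, path + [child]))
    return None

# Toy usage on a small weighted graph:
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(uniform_cost_search("A", lambda n: n == "C", lambda n: graph[n]))
# -> (2, ['A', 'B', 'C'])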
Learning Real-Time A* (LRTA*) Search
If a problem is to be solved repeatedly with the same goal state but different initial states, then one would like an algorithm that improves its performance over time. Learning Real-Time A* (LRTA*) is such an algorithm. It behaves almost identically to RTA*, except that instead of storing the second-best f value of a node as its new heuristic value, it stores the best value instead. Once one problem instance is solved, the stored heuristic values are saved and become the initial values for the next problem instance.
While LRTA* is less efficient than RTA* for solving a single problem instance, if it starts with admissible initial heuristic values, then over repeated trials its heuristic values eventually converge to their exact values, at which point the algorithm returns optimal solutions.
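A sketch of one LRTA* trial under assumed interfaces: neighbors(s) yields (successor, edge_cost) pairs, and h is a dictionary of stored heuristic values (admissible initial estimates, defaulting to 0).

def lrta_star_trial(start, goal, neighbors, h):
    s = start
    while s != goal:
        # One-move lookahead: estimated cost to the goal through each successor.
        best_succ, best_f = None, float("inf")
        for succ, cost in neighbors(s):
            f = cost + h.get(succ, 0)
            if f < best_f:
                best_succ, best_f = succ, f
        h[s] = best_f       # store the BEST value (RTA* would store the second best)
        s = best_succ       # move to the apparently best successor
    return h                # learned values carry over to the next trial

# Toy usage: states 0..4 in a line, goal 4, unit edge costs.
nbrs = lambda s: [(s2, 1) for s2 in (s - 1, s + 1) if 0 <= s2 <= 4]
print(lrta_star_trial(0, 4, nbrs, {}))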
What is Expert System?
An expert system is an artificial intelligence program that has expert-level knowledge about a particular domain and knows how to use its knowledge to respond properly. Domain refers to the area within which the task is being performed. Ideally, an expert system should substitute for a human expert. Edward Feigenbaum of Stanford University has defined an expert system as “an intelligent computer program that uses knowledge and inference procedures to solve problems that are difficult enough to require significant human expertise for their solutions.” It is a branch of artificial intelligence introduced by researchers in the Stanford Heuristic Programming Project.
The expert system is a branch of AI designed to work within a particular domain. Since an expert is a person who can solve a problem using domain knowledge, an expert system should be able to solve problems at the level of a human expert. The source of knowledge may come from a human expert and/or from books, magazines and the internet. As knowledge plays a key role in the functioning of expert systems, they are also known as knowledge-based systems and knowledge-based expert systems. The expert's knowledge about solving the given specific problems is called the knowledge domain of the expert.
Components of Expert Systems

Figure: Expert System
Basic Concept of an Expert System Function
The expert system consists of two major components: the knowledge base and the inference engine.
The knowledge base contains the domain knowledge, which is used by the inference engine to draw conclusions. The inference engine is the generic control mechanism that applies the axiomatic knowledge to the task-specific data to arrive at some conclusion. When a user supplies facts, or relevant information for a query, to the expert system, he receives advice or expertise in response. That is, given the facts, the system uses the inference engine, which in turn uses the knowledge base, to infer the solution.
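As an illustration of this knowledge-base/inference-engine split, here is a minimal forward-chaining sketch in Python; the rules and facts are invented for the example, and a real shell such as EMYCIN is far richer.

# Knowledge base: IF all conditions hold THEN add the conclusion.
knowledge_base = [
    ({"fever", "rash"}, "measles_suspected"),
    ({"measles_suspected"}, "refer_to_doctor"),
]

def infer(facts):
    """Generic inference engine: apply rules to the task-specific facts
    until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in knowledge_base:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "rash"}))   # -> includes 'refer_to_doctor'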
Characteristics of Expert Systems
High performance: They should perform at the level of a human expert.
Adequate response time: They should have the ability to respond in a reasonable amount of time. Time is crucial especially for real time systems.
Reliability: They must be reliable and should not crash.
Understandable: They should not be a black box; instead, they should be able to explain the steps of the reasoning process and justify their conclusions in the same way a human expert explains why he arrived at a particular conclusion.
Shell
A shell is a special-purpose tool designed based on the requirements of particular applications. The user supplies the knowledge base to the shell. An example of a shell is EMYCIN (Empty MYCIN). The shell manages input and output; it processes the information given by the user, relates it to the concepts contained in the knowledge base, and provides a solution for the particular problem.
Advantages of Expert Systems
Availability: Expert systems are easily available, since software can be mass-produced.
Cheaper: The cost of providing expertise is relatively low.
Reduced danger: They can be used in risky environments where humans cannot work.
Permanence: The knowledge lasts indefinitely.
Multiple expertise: They can be designed to hold the knowledge of many experts.
Explanation: They are capable of explaining in detail the reasoning that led to a conclusion.
Fast response: They can respond at great speed due to the inherent advantages of computers over humans.
Unemotional and steady response at all times: Unlike humans, they do not become tense, fatigued or panicked, and they work steadily in emergency situations.

07 October 2010

ASCII, BCD, EBCDIC

Representation of Characters: ASCII& EBCDIC Code
Converting a decimal number to binary, and coding in general, both produce a series of bits: combinations of 1s and 0s arranged according to the rules of the code used. A computer can understand instructions written only in binary, as 0s and 1s, so every digit and character is assigned a specific code made up of 0s and 1s. Different computers and devices need to communicate alphanumeric data among themselves. Alphanumeric data consists of the letters A, B, C, ..., Z, a, b, c, ..., z and the blank space, while numeric data consists of the numerals 0, ..., 9. These binary representations are organized into several computer codes, such as ASCII-7, ASCII-8, EBCDIC and BCD.



BCD Code (Binary-Coded Decimal)
Binary-coded decimal (BCD) is an encoding for decimal numbers in which each digit is represented by its own binary sequence. The binary-coded decimal codes are among the early memory codes. BCD's main virtue is that it allows easy conversion to decimal digits for printing or display, and faster decimal calculations. It is based on the idea of converting each digit of a decimal number into its binary equivalent, rather than converting the entire decimal value into pure binary form.
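A one-line sketch of this per-digit idea in Python:

def to_bcd(number):
    # Each decimal digit maps to its own 4-bit binary group.
    return " ".join(format(int(d), "04b") for d in str(number))

print(to_bcd(259))   # -> 0010 0101 1001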

To Remember:
• The binary-coded decimal (BCD) code is one of the early computer codes.
• This code is based on the idea of converting each digit of a decimal number into its 4-bit binary equivalent, so 4 bits represent each decimal digit. When only 4 bits are used, a total of 2^4, i.e., 16, configurations are possible.
• To represent characters, 6 bits are used: a 6-bit code allows 64 (2^6) different characters.
• BCD numbers are useful whenever decimal information is transferred into a computer.
• The zone portion is the first two bits and the digit portion is the last four bits.
In the 6-bit code the four BCD numeric place positions (1, 2, 4 and 8) are retained, but two additional zone positions are used in combination with the numeric bits to represent alphabetic and special characters.
Example:
Conversion of VIKRAM to BCD = 010101 111001 100010 101001 110001 100100
Character   Zone bits   Numeric digits (8 4 2 1)
V           01          0101
I           11          1001
K           10          0010
R           10          1001
A           11          0001
M           10          0100

When 6 bits are used, a total of 2^6, i.e., 64, configurations are possible, meaning 64 different characters can be represented. These are sufficient to code the 10 decimal digits, 26 letters and 28 special characters.

CHARACTER   BCD CODE (Zone Digit)   OCTAL EQUIVALENT
A           11 0001                 61
B           11 0010                 62
C           11 0011                 63
D           11 0100                 64
E           11 0101                 65
F           11 0110                 66
G           11 0111                 67
H           11 1000                 70
I           11 1001                 71

J           10 0001                 41
K           10 0010                 42
L           10 0011                 43
M           10 0100                 44
N           10 0101                 45
O           10 0110                 46
P           10 0111                 47
Q           10 1000                 50
R           10 1001                 51

S           01 0010                 22
T           01 0011                 23
U           01 0100                 24
V           01 0101                 25
W           01 0110                 26
X           01 0111                 27
Y           01 1000                 30
Z           01 1001                 31

1           00 0001                 01
2           00 0010                 02
3           00 0011                 03
4           00 0100                 04
5           00 0101                 05
6           00 0110                 06
7           00 0111                 07
8           00 1000                 10
9           00 1001                 11
0           00 1010                 12

Example: Conversion of
“NEEL007” to BCD = 100101 110101 110101 100011 001010 001010 000111

Character   Zone bits   Numeric digits (8 4 2 1)
N           10          0101
E           11          0101
E           11          0101
L           10          0011
0           00          1010
0           00          1010
7           00          0111

Conversion of RAHUL to BCD = 101001 110001 111000 010100 100011
Character   Zone bits   Numeric digits (8 4 2 1)
R           10          1001
A           11          0001
H           11          1000
U           01          0100
L           10          0011

Written as zone code + digit code for each character in sequence:
101001110001111000010100100011

Drawbacks of BCD:
• Its drawbacks are the increased complexity of the circuits needed to implement mathematical operations, and a relatively inefficient encoding.
• It occupies more space than a pure binary representation.
• With this code only 64 characters can be represented, which is not sufficient to provide for the decimal numbers (10), lower case letters (26), capital letters (26) and other special characters.
Practice:
Prove these:
CASE = 110011 110001 010010 110101
MANISHA = 100100 110001 100101 111001 010010 111000 110001
Practice: Change these in BCD
1. manisha
2. nirbhay99
3. lavanya36
4. engineertamra1789
5. singhbikaner
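
To check answers against the table above, here is a small Python sketch of the 6-bit character code; the zone and digit assignments follow the table as corrected above (S–Z digit codes starting at 0010, and 0 coded specially as 00 1010).

def bcd_char(ch):
    ch = ch.upper()
    if "A" <= ch <= "I":
        return "11" + format(ord(ch) - ord("A") + 1, "04b")   # zone 11
    if "J" <= ch <= "R":
        return "10" + format(ord(ch) - ord("J") + 1, "04b")   # zone 10
    if "S" <= ch <= "Z":
        return "01" + format(ord(ch) - ord("S") + 2, "04b")   # zone 01, from 0010
    if ch == "0":
        return "001010"                                       # 0 coded as 00 1010
    if "1" <= ch <= "9":
        return "00" + format(int(ch), "04b")                  # zone 00
    raise ValueError(ch)

print(" ".join(bcd_char(c) for c in "RAHUL"))
# -> 101001 110001 111000 010100 100011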

Packed BCD
A widely used variation of the two-digits-per-byte encoding is called packed BCD, or simply packed decimal. All of the upper bytes of a multi-byte word, plus the upper four bits (nibble) of the lowest byte, are used to store decimal digits. The lower four bits of the lowest byte are used as the sign flag.
As an example, a 32-bit word contains 4 bytes, or 8 nibbles. Packed BCD uses the upper 7 nibbles to store the digits of a decimal value and uses the lowest nibble to indicate the sign of those digits.
Sign nibble
Digit (8 4 2 1)   Sign   Notes
A = 1010          +
B = 1011          −
C = 1100          +      Preferred
D = 1101          −      Preferred
E = 1110          +
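A minimal Python sketch of this layout, assuming the preferred sign nibbles C (+) and D (−) from the table above:

def pack_bcd(value):
    sign = 0xC if value >= 0 else 0xD          # preferred sign nibbles
    nibbles = [int(d) for d in str(abs(value))] + [sign]
    if len(nibbles) % 2:                       # pad with a leading 0 nibble
        nibbles.insert(0, 0)
    return bytes(hi << 4 | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

print(pack_bcd(-1234).hex())   # -> 01234d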


ASCII Code
American Standard Code For Information Interchange.
It is the most common encoding for characters: a standard mapping from characters to numbers. In today's scenario almost everyone uses this coding. It is the built-in binary code for representing characters in almost all computers, although IBM mainframes use the EBCDIC code system instead. Initially it was based on a 7-bit coding system.
In data communication, a character is represented by seven bits x6 x5 ... x0, giving 2^7 = 128 characters.
It is a seven-bit code system and is divided into two portions:
(1) Zone portion – x6 x5 x4
(2) Digit portion – x3 x2 x1 x0
ASCII was originally developed for communications and uses only seven bits per character, providing 128 combinations that include upper and lower case alphabetic letters, the numeric digits and special symbols such as $ and %.
ASCII reserves the first 32 codes (numbers 0–31 decimal) for control characters: codes originally intended not to carry printable information, but rather to control devices (such as printers) that make use of ASCII, or to provide meta-information about data streams such as those stored on magnetic tape. The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this left was sometimes intentional (where a character would be used slightly differently on a terminal link than on a data stream) and sometimes more accidental (such as what "delete" means).
The standard ASCII table defines 128 character codes (from 0 to 127), of which the first 32 are control codes (non-printable) and the remaining 96 are representable characters (code 127, DEL, also acts as a control character):
*   0   1   2   3   4   5   6   7   8   9   A   B   C   D   E   F
0   NUL SOH STX ETX EOT ENQ ACK BEL BS  TAB LF  VT  FF  CR  SO  SI
1   DLE DC1 DC2 DC3 DC4 NAK SYN ETB CAN EM  SUB ESC FS  GS  RS  US
2   SP  !   "   #   $   %   &   '   (   )   *   +   ,   -   .   /
3   0   1   2   3   4   5   6   7   8   9   :   ;   <   =   >   ?
4   @   A   B   C   D   E   F   G   H   I   J   K   L   M   N   O
5   P   Q   R   S   T   U   V   W   X   Y   Z   [   \   ]   ^   _
6   `   a   b   c   d   e   f   g   h   i   j   k   l   m   n   o
7   p   q   r   s   t   u   v   w   x   y   z   {   |   }   ~   DEL

* This panel is organized to be read easily in hexadecimal: the row numbers represent the first hex digit and the column numbers the second. For example, the "A" character is located at row 4, column 1, so it is represented in hexadecimal as 0x41 (decimal 65).
Because most systems nowadays work with 8-bit bytes, which can represent 256 different values, in addition to the 128 standard ASCII codes there are another 128 known as extended ASCII, which are platform- and locale-dependent. So there is more than one extended ASCII character set.
TYPES-
ASCII is of two types :
1. ASCII-7
2. ASCII-8
ASCII-7
ASCII-7 is a 7-bit code that allows 128 (2^7) different characters.
The first 3 bits are used as zone bits.
The last 4 bits indicate the digit.
Microcomputers using 8-bit bytes (a group of 8 bits makes 1 byte) use the 7-bit ASCII code by leaving the leftmost bit of each byte as zero.
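The zone/digit split can be seen directly from a character's standard ASCII value, as in this short Python sketch:

def ascii7(ch):
    code = ord(ch)                    # standard ASCII value fits in 7 bits
    zone, digit = code >> 4, code & 0xF
    return format(zone, "03b"), format(digit, "04b")

for ch in "RAVI":
    print(ch, *ascii7(ch))
# R 101 0010, A 100 0001, V 101 0110, I 100 1001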

Table of ASCII-7
Character   Zone code   Digit code   Hexadecimal equivalent
0 011 0000 30
1 011 0001 31
2 011 0010 32
3 011 0011 33
4 011 0100 34
5 011 0101 35
6 011 0110 36
7 011 0111 37
8 011 1000 38
9 011 1001 39

A 100 0001 41
B 100 0010 42
C 100 0011 43
D 100 0100 44
E 100 0101 45
F 100 0110 46
G 100 0111 47
H 100 1000 48
I 100 1001 49
J 100 1010 4A
K 100 1011 4B
L 100 1100 4C
M 100 1101 4D
N 100 1110 4E
O 100 1111 4F

P 101 0000 50
Q 101 0001 51
R 101 0010 52
S 101 0011 53
T 101 0100 54
U 101 0101 55
V 101 0110 56
W 101 0111 57
X 101 1000 58
Y 101 1001 59
Z 101 1010 5A




Example:
Conversion of VIKRAM to ASCII-7 = 1010110 1001001 1001011 1010010 1000001 1001101
character Zone code Digit code
V 101 0110
I 100 1001
K 100 1011
R 101 0010
A 100 0001
M 100 1101

Example: Conversion of
“NEEL007” to ASCII-7 = 1001110 1000101 1000101 1001100 0110000 0110000 0110111


character Zone code Digit code
N 100 1110
E 100 0101
E 100 0101
L 100 1100
0 011 0000
0 011 0000
7 011 0111



Example: Change RAVI to ASCII-7

Character ZONE CODE DIGIT CODE
R 101 0010
A 100 0001
V 101 0110
I 100 1001

Written as zone code + digit code for each character in sequence:
1010010100000110101101001001

Practice: Change these to ASCII-7
1. manisha
2. nirbhay99
3. lavanya36
4. engineertamra1789

ASCII-8
ASCII-8 is an extended version of ASCII-7.
• It is an 8-bit code, which allows 256 (2^8) different characters, rather than the 128 of ASCII-7.
• The additional bit is added to the zone bits.
• The first 4 bits are used as zone bits.
• The last 4 bits indicate the digit.

ASCII-8 table:


Character   Zone code   Digit code   Hexadecimal equivalent
0 0101 0000 50
1 0101 0001 51
2 0101 0010 52
3 0101 0011 53
4 0101 0100 54
5 0101 0101 55
6 0101 0110 56
7 0101 0111 57
8 0101 1000 58
9 0101 1001 59

A 1010 0001 A1
B 1010 0010 A2
C 1010 0011 A3
D 1010 0100 A4
E 1010 0101 A5
F 1010 0110 A6
G 1010 0111 A7
H 1010 1000 A8
I 1010 1001 A9
J 1010 1010 AA
K 1010 1011 AB
L 1010 1100 AC
M 1010 1101 AD
N 1010 1110 AE
O 1010 1111 AF

P 1011 0000 B0
Q 1011 0001 B1
R 1011 0010 B2
S 1011 0011 B3
T 1011 0100 B4
U 1011 0101 B5
V 1011 0110 B6
W 1011 0111 B7
X 1011 1000 B8
Y 1011 1001 B9
Z 1011 1010 BA

Example:

Character ZONE CODE DIGIT CODE
R 1011 0010
A 1010 0001
J 1010 1010
D 1010 0100
E 1010 0101
E 1010 0101
P 1011 0000

Written as zone code + digit code for each character in sequence:
10110010101000011010101010100100101001011010010110110000

Practice: Change these to ASCII-8
1. manisha
2. nirbhay99
3. lavanya36
4. engineertamra1789
5. singhbikaner




EBCDIC Code

EBCDIC (Extended Binary Coded Decimal Interchange Code) is a character encoding set used by IBM mainframes. Unlike virtually every other computer system in the world, which use a variant of ASCII, IBM mainframes and midrange systems such as the AS/400 tend to use a wholly incompatible character set, primarily designed for ease of use on punched cards.
The character encoding is based on Binary-Coded Decimal (BCD), so the contiguous characters in the alphanumeric range are formed in blocks of up to 10, from 0000 binary to 1001 binary. Non-alphanumeric characters are almost all outside the BCD range. EBCDIC uses the full 8 bits available to it, so parity checking cannot be used on an 8-bit system. Also, EBCDIC has a wider range of control characters than ASCII.
To remember:
• Extended Binary Coded Decimal Interchange Code, pronounced “eb-see-dick”.
• EBCDIC is an 8-bit code, so it can easily be divided into two 4-bit groups. Each of these 4-bit groups can be represented by one hexadecimal digit.
• Because EBCDIC is an 8-bit code, it is possible to represent 256 (2^8) different characters instead of 64 (2^6).
• It is the binary code for text, as well as for communication and printer control, on IBM systems.
• This code can represent the following types of coded information:
1.) Lowercase letters – a, b, c, ..., z
2.) Uppercase letters – A, B, ..., Z
3.) Printable characters
4.) Non-printable characters
5.) Numeric values 0, 1, ..., 9
6.) Special characters such as +, -, *, /, $ etc.
In the BCD code only 64 characters can be represented, which is not sufficient to provide for the decimal numbers (10), lower case letters (26), capital letters (26) and a large number of other special characters. Hence, the BCD code was extended from a 6-bit code to an 8-bit code. This coding scheme is called EBCDIC, for Extended Binary Coded Decimal Interchange Code.



Character   EBCDIC CODE (Zone code Digit code)   HEXADECIMAL EQUIVALENT
A           1100 0001                            C1
B           1100 0010                            C2
C           1100 0011                            C3
D           1100 0100                            C4
E           1100 0101                            C5
F           1100 0110                            C6
G           1100 0111                            C7
H           1100 1000                            C8
I           1100 1001                            C9

J           1101 0001                            D1
K           1101 0010                            D2
L           1101 0011                            D3
M           1101 0100                            D4
N           1101 0101                            D5
O           1101 0110                            D6
P           1101 0111                            D7
Q           1101 1000                            D8
R           1101 1001                            D9

S           1110 0010                            E2
T           1110 0011                            E3
U           1110 0100                            E4
V           1110 0101                            E5
W           1110 0110                            E6
X           1110 0111                            E7
Y           1110 1000                            E8
Z           1110 1001                            E9

1           1111 0001                            F1
2           1111 0010                            F2
3           1111 0011                            F3
4           1111 0100                            F4
5           1111 0101                            F5
6           1111 0110                            F6
7           1111 0111                            F7
8           1111 1000                            F8
9           1111 1001                            F9
0           1111 0000                            F0

Example:

Character ZONE CODE DIGIT CODE
R 1101 1001
A 1100 0001
J 1101 0001
Written as zone code + digit code for each character in sequence:
11011001 11000001 11010001
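
Python ships IBM EBCDIC codecs (for example code page 037), so the table can be cross-checked directly; this sketch assumes the cp037 variant, whose letter and digit codes match the table above.

encoded = "RAJ".encode("cp037")                    # EBCDIC code page 037
print(encoded.hex().upper())                       # -> D9C1D1
print(" ".join(format(b, "08b") for b in encoded)) # -> 11011001 11000001 11010001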


Advantages of BCD
• Scaling by a factor of 10 (or a power of 10) is simple; this is useful when a decimal scaling factor is needed to represent a non-integer quantity (e.g., in financial calculations).
• Alignment of two decimal numbers (for example 1.3 + 27.08) is a simple, exact shift.
• Rounding at a decimal digit boundary is simpler.
• Conversion to character form, or for display, is a simple per-digit mapping and can be done in linear time, whether to a text-based format such as XML or to drive signals for a seven-segment display. Conversion from pure binary involves relatively complex logic that spans digits, and for large numbers no linear-time conversion algorithm is known.
• Some non-integral values, such as 0.3, have a finite place-value representation in decimal but not in binary; consequently a system based on binary place-value representations would introduce a small error when representing such a value, which may be compounded by further computation if careful numerical considerations are not made.
Note that if no computation is performed on the value, this is not an issue, since it suffices to represent it using enough bits that, when rounded to the original number of decimal digits, the original value is correctly recovered.
Disadvantages of BCD
Some operations are more complex to implement:
• Adders require extra logic to make them wrap and generate a carry early; 15–20% more circuitry is needed for BCD addition compared to pure binary. Multiplication requires algorithms that are somewhat more complex than shift-mask-add (for example, a binary multiplication, requiring binary shifts and adds or the equivalent, is required per digit or group of digits).
• Standard BCD requires four bits per digit, roughly 20% more space than a binary encoding. When packed so that three digits are encoded in ten bits, the storage overhead is reduced to about 0.34%, at the expense of an encoding that is unaligned with the 8-bit byte boundaries common on existing hardware, resulting in slower implementations on these systems.
Practical existing implementations of BCD are typically slower than operations on binary representations, especially on embedded systems, due to limited processor support for native BCD operations.

Practice: Change these in EBCDIC
1. manisha
2. nirbhay99
3. lavanya36
4. engineertamra1789
5. singhbikaner

• An Indian Code

ISCII coding representation:
Indian Standard Code for Information Interchange
• This coding is an Indian information interchange standard, developed for representing the alphabets of Indian scripts.
• It is an 8-bit representation.
• It also allows English vowels, consonants and other symbols.
• It is based on the Brahmi script.
USE:
• It was developed mainly for representing our traditional religious books.

Big arithmetic operations

Arithmetic operations - Addition and subtraction
A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number is shifted right by three digits, and we then proceed with the usual addition method:
123456.7 = 1.234567 * 10^5
101.7654 = 1.017654 * 10^2 = 0.001017654 * 10^5

Hence:
123456.7 + 101.7654 = (1.234567 * 10^5) + (1.017654 * 10^2)
= (1.234567 * 10^5) + (0.001017654 * 10^5)
= (1.234567 + 0.001017654) * 10^5
= 1.235584654 * 10^5
In detail:
e=5; s=1.234567 (123456.7)
+ e=2; s=1.017654 (101.7654)

e=5; s=1.234567
+ e=5; s=0.001017654 (after shifting)
--------------------
e=5; s=1.235584654 (true sum: 123558.4654)
This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is
e=5; s=1.235585 (final sum: 123558.5)
Note that the low 3 digits of the second operand (654) are essentially lost. This is round-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them:
e=5; s=1.234567
+ e=-3; s=9.876543

e=5; s=1.234567
+ e=5; s=0.00000009876543 (after shifting)
----------------------
e=5; s=1.23456709876543 (true sum)
e=5; s=1.234567 (after rounding/normalization)
Another problem of loss of significance occurs when two close numbers are subtracted. In the following example e = 5; s = 1.234571 and e = 5; s = 1.234567 are representations of the rationals 123457.1467 and 123456.659.
e=5; s=1.234571
- e=5; s=1.234567
----------------
e=5; s=0.000004
e=-1; s=4.000000 (after rounding/normalization)
The best representation of this difference is e = −1; s = 4.877000, which differs more than 20% from e = −1; s = 4.000000. In extreme cases, the final result may be zero even though an exact calculation may be several million. This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic in numerical analysis; see also Accuracy problems.
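The round-off and cancellation effects above can be reproduced with Python's decimal module by limiting the context to a seven-digit significand; the numbers are the ones from the examples.

from decimal import Decimal, getcontext

getcontext().prec = 7                             # seven significant digits

print(Decimal("123456.7") + Decimal("101.7654"))  # -> 123558.5 (low digits lost)
print(Decimal("123457.1") - Decimal("123456.7"))  # -> 0.4, versus the exact 0.4877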
Example: Add 0.1101E02 and 0.1001E05
Step 1: Equalize the exponents: 0.1101E02 = 0.0001E05 (after shifting, keeping four mantissa digits)
Step 2: 0.1001E05 + 0.0001E05 = 0.1002E05 -- Answer

Example: Add 0.6532E99 and 0.5834E99
Step 1: Equalize the exponents: both exponents are already the same.
Step 2: 0.6532E99 + 0.5834E99 = 1.2366E99 = 0.1236E100 -- overflow condition: the exponent cannot store more than 2 digits.

Example: Subtract 0.1101E02 from 0.1001E05
Step 1: Equalize the exponents: 0.1101E02 = 0.0001E05
Step 2: 0.1001E05 - 0.0001E05 = 0.1000E05 -- Answer

Example: Subtract 0.6943E-99 from 0.7792E-99
Step 1: Equalize the exponents: both exponents are already the same.
Step 2: 0.7792E-99 - 0.6943E-99 = 0.0849E-99 = 0.849E-100 -- underflow condition: the exponent cannot store more than 2 digits.
Multiplication
To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized.
e=3; s=4.734612
× e=5; s=5.417242
-----------------------
e=8; s=25.648538980104 (true product)
e=8; s=25.64854 (after rounding)
e=9; s=2.564854 (after normalization)
Example: Multiply 0.1328E03 by 0.6452E10
Step 1: Multiply the mantissas: 0.1328 * 0.6452 = 0.08568256
Step 2: Add the exponents: E03 + E10 = E13
After normalization, the answer is 0.8568E12.

Example: Multiply 0.7832E54 by 0.2316E51
Step 1: Multiply the mantissas: 0.7832 * 0.2316 = 0.181389
Step 2: Add the exponents: E54 + E51 = E105
So the answer would be 0.181389E105 -- overflow condition: the exponent cannot store more than 2 digits.

Division is done similarly, but is more complicated.
Example: Divide 0.1620E05 by 0.1300E03
Step 1: Shift the dividend enough places to make its mantissa smaller than the mantissa of the divisor: 0.1620 > 0.1300, so 0.1620E05 = 0.0162E06.
Step 2: Divide the mantissas: 0.0162 / 0.1300 = 0.1246 (to four digits)
Step 3: Subtract the exponents: E06 - E03 = E03
So the answer is 0.1246E03.

Example: Divide 0.1234E99 by 0.1100E01
Step 1: Shift the dividend enough places to make its mantissa smaller than the mantissa of the divisor: 0.1234 > 0.1100, so 0.1234E99 = 0.0123E100.
Step 2: Divide the mantissas: 0.0123 / 0.1100 = 0.1118 (to four digits)
Step 3: Subtract the exponents: E100 - E01 = E99
So the answer is 0.1118E99.
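
A small Python sketch of multiplication in the 0.XXXXEyy scheme used above, with normalization and the two-digit-exponent overflow check; the function name and representation are illustrative.

def fp_multiply(m1, e1, m2, e2):
    mantissa = m1 * m2                          # multiply the mantissas
    exponent = e1 + e2                          # add the exponents
    while mantissa and abs(mantissa) < 0.1:     # normalize to 0.1 <= |m| < 1
        mantissa *= 10
        exponent -= 1
    mantissa = round(mantissa, 4)               # keep a four-digit mantissa
    if abs(exponent) > 99:                      # exponent limited to 2 digits
        raise OverflowError("exponent cannot store more than 2 digits")
    return mantissa, exponent

print(fp_multiply(0.1328, 3, 0.6452, 10))       # -> (0.8568, 12)
try:
    print(fp_multiply(0.7832, 54, 0.2316, 51))
except OverflowError as exc:
    print("overflow:", exc)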

There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed repeatedly. In practice, the way these operations are carried out in digital logic can be quite complex (see Booth's multiplication algorithm and digital division). For a fast, simple method, see the Horner method.