Friday, January 06, 2006

Software Cost Estimation Methods

One of the critical functions to be performed, especially at the beginning of a software project, is Cost Estimation. Software Cost Estimation can be done in many ways, and there has been a great deal of research on this topic. The following are the most widely used methods for software cost estimation:

1. The Function Point Method

The Function Point Method (FPM) was developed by IBM. It is a combination of the analogy method (the FP curve is based on empirical values from similar projects) and the weighting method (evaluation of the influence factors of the project). The effort estimation takes complexity, certain software characteristics, and productivity into consideration. Function Points (FP) serve as the measure of size to which a productivity figure is applied in order to determine the effort.

The following steps are taken for cost estimation using the Function Point approach:
  • Determination of the unadjusted Function Points: In order to determine the Function Points, the functions are defined and classified according to external input, external output, logical internal file, external interface file, and external inquiry (the various types of transactions). These transactions are then classified according to the degree of their complexity (simple, average, complex). Based on the type and the degree of complexity, the transactions are weighted. Summing the weights of all transactions gives the unadjusted Function Points.

  • Determination of the adjusted Function Points: Influence factors such as interlacing with other projects, decentralized data management, transaction rate, complex processing, reusability, conversions, and user friendliness are taken into consideration and evaluated with regard to the influence they have on the project. The degree of influence is calculated from the sum of the influence factors. Multiplying the unadjusted Function Points by the degree of influence yields the adjusted Function Points. Compared to the unadjusted Function Points, these can be increased or reduced by up to 30%.

  • Determination of the Effort: Based on a productivity curve/table, the adjusted Function Points are converted into person-months. For this purpose an "IBM curve" is available, which should gradually be replaced by an organization's own empirical values from post-calculations. A minimal sketch of the whole calculation is shown below.
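
To make the procedure concrete, here is a minimal sketch in Python of the calculation described above. The complexity weights and the productivity figure are illustrative assumptions only (real weight tables and productivity curves come from the method documentation or an organization's own post-calculations); the adjustment follows the ±30% rule mentioned above, i.e. a degree of influence between 0 and 60 mapped onto a factor between 0.7 and 1.3.

    # Illustrative complexity weights per transaction type (assumed values,
    # not an official FPM weight table).
    WEIGHTS = {
        "external input":          {"simple": 3, "average": 4,  "complex": 6},
        "external output":         {"simple": 4, "average": 5,  "complex": 7},
        "external inquiry":        {"simple": 3, "average": 4,  "complex": 6},
        "logical internal file":   {"simple": 7, "average": 10, "complex": 15},
        "external interface file": {"simple": 5, "average": 7,  "complex": 10},
    }

    def unadjusted_fp(transactions):
        """transactions: list of (type, complexity) tuples."""
        return sum(WEIGHTS[t][c] for t, c in transactions)

    def adjusted_fp(ufp, influence_sum):
        """influence_sum: sum of the influence factors, assumed 0..60,
        mapped onto an adjustment factor of 0.7..1.3 (i.e. +/-30%)."""
        return ufp * (0.7 + influence_sum / 100.0)

    def effort_person_months(afp, fp_per_person_month=7.5):
        """Convert adjusted FP to effort via a productivity figure.
        7.5 FP per person-month is a placeholder; in practice this comes
        from the productivity curve or the organization's own history."""
        return afp / fp_per_person_month

    txns = [("external input", "simple"),
            ("external output", "average"),
            ("logical internal file", "complex")]
    ufp = unadjusted_fp(txns)      # 3 + 5 + 15 = 23
    afp = adjusted_fp(ufp, 42)     # 23 * 1.12 = 25.76
    print(ufp, afp, effort_person_months(afp))
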
1.1 Application of the Function Point Method
  • Commercial Applications: The FPM is particularly suited to information systems development; the effort estimation for data core system developments (small data volume, processing-oriented) is not sufficiently exact. This is because the functions are classified on the basis of the data to be processed, while the processing complexity is evaluated only globally, as an influence factor across all functions.

  • Development Projects: The FPM procedure is not suited to SWMM projects. Normally, the basis for the estimation and the influence factors in an SWMM project have not changed noticeably compared to those in the original development project. On the other hand, SWMM-specific measuring quantities (such as "additional input data") and influence factors (e.g. "personnel continuity") are lacking.

  • Function-Oriented Engineering: Function-oriented engineering is not a prerequisite for applying the FPM. However, this type of methodical development facilitates the application of the FPM, since it already covers the step of determining the transactions.
2. Estimation by Analogy

An analogy is a technique used to estimate a cost based on historical data for an analogous system or subsystem. In this technique, a currently fielded system, similar in design and operation to the proposed system, is used as a basis for the analogy. The cost of the proposed system is then estimated by adjusting the historical cost of the current system to account for differences (between the proposed and current systems). Such adjustments can be made through the use of factors (sometimes called scaling parameters) that represent differences in size, performance, technology, and/or complexity. Adjustment factors based on quantitative data are usually preferable to adjustment factors based on judgments from subject-matter experts.

The major caveat of the analogy-based estimation method is that it is basically a judgment process and, as a consequence, requires a considerable amount of expertise if it is to be done successfully. There are two types of analogues that may be used: one is based upon similar products/services, the other upon similar concepts.

In estimating software costs by analogy, we first have to determine how best to describe projects. Possibilities include the type of application domain, the number of inputs, the number of distinct entities referenced, the number of screens, and so forth. The choice of variables must be restricted to information that is available at the point when the prediction is required; for this reason LOC is generally unsatisfactory, as it must itself be estimated. The choice of variables is otherwise flexible, although one will wish to choose variables that characterise the project as accurately as possible. It is also important to choose at least one variable that acts as a size driver, for instance the number of inputs, screens, or classes. Analogies are found by measuring Euclidean distance in n-dimensional space, where each dimension corresponds to a variable. Values are standardised so that each dimension contributes equal weight to the process of finding analogies, as in the sketch below.
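
The following minimal Python sketch illustrates this distance-based matching. The project variables and the historical data are hypothetical; values are standardised (z-score style here, though min-max scaling is also common) so that each dimension contributes equally to the Euclidean distance, and the mean effort of the nearest analogues is used as the estimate.

    import math

    # Hypothetical historical projects: size-driver variables plus known effort.
    HISTORY = [
        {"inputs": 40, "screens": 12, "entities": 15, "effort_pm": 18},
        {"inputs": 95, "screens": 30, "entities": 40, "effort_pm": 55},
        {"inputs": 20, "screens":  8, "entities": 10, "effort_pm": 10},
    ]
    VARS = ["inputs", "screens", "entities"]

    def standardise(projects):
        """Return per-variable (mean, std) so each dimension gets equal weight."""
        stats = {}
        for v in VARS:
            xs = [p[v] for p in projects]
            mean = sum(xs) / len(xs)
            std = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs)) or 1.0
            stats[v] = (mean, std)
        return stats

    def distance(a, b, stats):
        # Euclidean distance in standardised (per-std-deviation) units.
        return math.sqrt(sum(((a[v] - b[v]) / stats[v][1]) ** 2 for v in VARS))

    def estimate_by_analogy(new_project, history=HISTORY, k=2):
        stats = standardise(history + [new_project])
        nearest = sorted(history, key=lambda p: distance(new_project, p, stats))[:k]
        return sum(p["effort_pm"] for p in nearest) / k   # mean effort of k analogues

    print(estimate_by_analogy({"inputs": 50, "screens": 15, "entities": 20}))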

3. Algorithmic Modelling Method of Estimation

3.1 Basic COCOMO

This is a simple on-line cost model for estimating the number of person-months required to develop software. The model also estimates the development schedule in months and produces an effort and schedule distribution by major phases. This model is based on Barry Boehm's Constructive Cost Model (COCOMO).

The model estimates cost using one of three different development modes: organic, semidetached, and embedded. The different modes are discussed in detail below; a minimal sketch of the Basic COCOMO equations follows the list:
  • Organic: In the organic mode, relatively small software teams develop software in a highly familiar, in-house environment. Most people connected with the project have extensive experience working with related systems within the organization and have a thorough understanding of how the system under development will contribute to the organization's objectives. Very few organic-mode projects have developed products with more than 50 thousand delivered source instructions (KDSI).
  • Semidetached: The semidetached mode of software development represents an intermediate stage between the organic and embedded modes. "Intermediate" may mean either of two things:

    • An intermediate level of the project characteristics.

    • A mixture of the organic and embedded mode characteristics.

    The size range of a semidetached mode product generally extends up to 300 KDSI.

  • Embedded: The major distinguishing factor of an embedded-mode software project is a need to operate within tight constraints. The product must operate within (is embedded in) a strongly coupled complex of hardware, software, regulations, and operational procedures, such as an electronic funds transfer system or an air traffic control system.
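
The sketch below (Python) shows the shape of Basic COCOMO using the commonly published mode coefficients; treat them as textbook values that should be recalibrated against an organization's own project data.

    # Commonly published Basic COCOMO coefficients per development mode.
    # Effort (person-months) = a * KDSI**b
    # Schedule (months)      = c * Effort**d
    MODES = {
        "organic":      {"a": 2.4, "b": 1.05, "c": 2.5, "d": 0.38},
        "semidetached": {"a": 3.0, "b": 1.12, "c": 2.5, "d": 0.35},
        "embedded":     {"a": 3.6, "b": 1.20, "c": 2.5, "d": 0.32},
    }

    def basic_cocomo(kdsi, mode="organic"):
        m = MODES[mode]
        effort = m["a"] * kdsi ** m["b"]        # person-months
        schedule = m["c"] * effort ** m["d"]    # months
        avg_staff = effort / schedule           # average number of people
        return effort, schedule, avg_staff

    # Example: a 32 KDSI organic-mode project.
    effort, schedule, staff = basic_cocomo(32, "organic")
    print(round(effort, 1), round(schedule, 1), round(staff, 1))
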
3.2 COCOMO II

COCOMO II is a model that allows one to estimate the cost, effort, and schedule when planning a new software development activity. It consists of three submodels, each offering increased fidelity the further along one is in the project planning and design process. Listed in increasing fidelity, these submodels are called the Application Composition, Early Design, and Post-architecture models. Until recently, only the last and most detailed submodel, Post-architecture, had been implemented in a calibrated software tool. A minimal sketch of the Post-architecture effort equation is shown below.
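
As an illustration of the shape of the Post-architecture submodel only, the Python sketch below uses the nominal COCOMO II.2000 constants (A = 2.94, B = 0.91) and leaves the scale factors and effort multipliers at assumed "nominal" values; a real estimate requires the full rating tables and, ideally, local calibration.

    # COCOMO II Post-architecture (shape only, nominal COCOMO II.2000 constants).
    # Effort (PM) = A * Size^E * product(effort multipliers),  E = B + 0.01 * sum(scale factors)
    A, B = 2.94, 0.91

    def cocomo2_effort(ksloc, scale_factors_sum=18.97, effort_multipliers=None):
        """scale_factors_sum defaults to an assumed 'nominal' total (placeholder);
        effort_multipliers default to 1.0, i.e. all cost drivers rated nominal."""
        em_product = 1.0
        for em in (effort_multipliers or []):
            em_product *= em
        exponent = B + 0.01 * scale_factors_sum
        return A * ksloc ** exponent * em_product

    print(round(cocomo2_effort(100), 1))   # effort in person-months for 100 KSLOC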

4. Top-down Cost Estimation Method (TCE)

A standard top-down cost estimation process typically consists of the following steps:
  • Searching a software functional classification table for the same type of software being developed, with matching functions, such as a word processor, and identifying the standard cost for that type of software

  • Adjusting the standard cost by considering the developer's business strategy, such as "the top priority is maintaining quality."

  • Re-adjusting the above adjusted standard cost by considering the development environment (such as the ability of the programmers or the availability of hardware and software tools). A minimal sketch of this adjustment sequence is shown below.
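
The Python sketch below illustrates this two-stage adjustment. The standard-cost table and all weighting values are purely hypothetical placeholders; in a real TCE system they would come from the taxonomy, standard cost, and adjustment tables built in the phases described next.

    # Hypothetical standard cost table (output of phases 1 and 2), in person-months.
    STANDARD_COST = {
        "word processor": 120,
        "reservation system": 400,
    }

    def top_down_estimate(software_type, strategy_weight=1.0, environment_weight=1.0):
        """strategy_weight    - e.g. 1.2 if 'the top priority is maintaining quality'
        environment_weight    - e.g. 0.9 for experienced programmers and good tooling
        (all weight values here are illustrative assumptions)."""
        base = STANDARD_COST[software_type]       # step 1: look up the standard cost
        adjusted = base * strategy_weight         # step 2: business strategy adjustment
        return adjusted * environment_weight      # step 3: development environment adjustment

    print(top_down_estimate("word processor", strategy_weight=1.2, environment_weight=0.9))
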
4.1 Assumptions
  • Each type of software has its own intrinsic characteristics, such as functional complexity, performance requirements, and the sophistication level of the user interface.

  • The software development costs and worker hours are both affected by the software characteristics, the corporate strategy, and the available development environment.
4.2 Implementation

Implementing and evaluating an operable TCE system must go through four phases. The four phases are briefly explained below:

4.2.1 Phase 1 (Construct a software taxonomy table)

In phase 1, we make a software taxonomy table that covers all software products. Of course, there are various ways to classify software, some of which are detailed below:
  • Operating systems: job management, data management, task management, device drivers.

  • System utilities: security, file management, library management.

  • Network: Internet, client/server system, dataware, groupware, network management, distributed object environment, network protocols, infrastructures.

  • Language Processors: COBOL, C/C++, FORTRAN, Java, documentation languages (e.g. SGML).

  • Database: tree structure, network structure, relational database, distributed databases.

  • PC-related standard software: word processor, spreadsheet, etc.

  • Applications: banking and securities system, reservation system, financial system, inventory control system, electronic commerce application, etc.
4.2.2 Phase 2 (Construct a standard cost table)

In phase 2, we provide the following information for each type of software:
  • Standard cost

  • Weightings corresponding to emphasized performance goals, such as "performance is not a major consideration" or "performance is critical".

  • Weightings corresponding to emphasized GUI goals, such as "a simple GUI is enough" or "a meticulously designed GUI is essential".
4.2.3 Phase 3 (Develop adjusting procedures)

In phase 3, we provide weightings that reflect the corporate strategic characteristics, and then provide weightings that reflect the environmental characteristics.

4.2.4 Phase 4 (Perform experimental evaluation of the TCE)

In phase 4, we evaluate the predictability and sensitivity of the TCE.

Depending on the type of software being developed, any of the above four methods can be applied. Cost estimation helps ensure that projects are optimized for cost and enables proper monitoring of projects by comparing the actual effort with the expected effort, in effect helping to ensure software quality, which is the concern of software quality assurance.

Thursday, January 05, 2006

Key Quality Concepts

Importance of Quality

After a thorough study of Quality in general and Software Quality in particular, and after scrutinizing The Definitions, we can arrive at the following conclusions about Software Quality and its importance to an organization's success.
  • Quality is conformance to product requirements and should be free.

  • Quality is achieved through prevention of defects.

  • Quality control is aimed at finding problems as early as possible and fixing them.

  • Doing things right the first time is the performance standard which results in zero defects and saves the expenses of doing things over.

  • The expense of quality is nonconformance to product requirements

  • Quality is what distinguishes a good company from a great one.

  • Quality is meeting or exceeding our customer's needs and requirements.

  • Software Quality is measurable.

  • Quality is continuous improvement.

  • The quality of a software product comes from the quality of the process used to create it. Hence the concept of Software Quality Assurance

  • Quality is the Entire Company's Business

Measuring Quality

One of the important things that needs to be ensured in software projects is that software quality should be measurable, for the simple reason that if we cannot measure, we cannot manage. The following are the measures on the basis of which quality can be assessed (a minimal sketch of a few of these calculations follows the list):
  • Lines of code (LOC)

  • Quality
    • Defects / 1000 LOC
    • Mean time between failures

  • Effort (person-months)

  • Staff turnover: The number of employee departures in the last year divided by the number of staff members employed over the last year, expressed as a percentage

  • Product metrics

    • Size: The final size or complexity of the software
    • Reliability: How well the developed software stands the test of time in operation

  • Process metrics

    • Efficiency of fault detection: the ratio of faults detected during development to the total number of faults detected over the lifetime of the product

  • Fundamental metrics

    • Size (lines of code)
    • Cost (in dollars)
    • Duration (in months)
    • Effort (in person-months)
    • Quality (number of faults detected)
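
To illustrate a few of the measures above, here is a minimal Python sketch; all input figures are hypothetical.

    def defects_per_kloc(defects, loc):
        # Quality measure: defects per 1000 lines of code.
        return defects / (loc / 1000.0)

    def fault_detection_efficiency(faults_in_development, faults_over_lifetime):
        """Ratio of faults detected during development to all faults
        detected over the lifetime of the product."""
        return faults_in_development / faults_over_lifetime

    def staff_turnover_pct(departures_last_year, staff_employed_last_year):
        # Departures in the last year over staff employed, as a percentage.
        return 100.0 * departures_last_year / staff_employed_last_year

    # Hypothetical figures.
    print(defects_per_kloc(45, 30000))            # 1.5 defects per KLOC
    print(fault_detection_efficiency(90, 120))    # 0.75
    print(staff_turnover_pct(6, 48))              # 12.5 percent
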
Quality Assurance

Quality Assurance, as discussed earlier, is process oriented, i.e. it is more concerned with doing the right things than with doing things right. Any QA activity can be divided into three specific steps:
  • Analysis: Analyzing the process for its pros and cons, based on a management-approved procedure

  • Auditing: Noting down the results of the analysis on a regular basis

  • Reporting: Passing the well-documented QA reports on to higher management, so that it can take the proper initiative, addressing weaknesses while maintaining strengths.
The following are the key Quality Assurance activities:
  • Process Definition & Standards

  • Formal Technical Reviews

  • Analysis & Reporting

  • Measurement

  • Test Planning & Review
The following are the goals of Quality Assurance:
  • Provide management with the data necessary to be informed about product quality

  • Build confidence that product quality is meeting its goals
Quality Costs
  • Prevention costs

    • Quality planning
    • Formal Technical Reviews
    • Test equipment
    • Training

  • Appraisal costs

    • In-process and inter-process inspection
    • Equipment calibration and maintenance
    • Testing

  • Failure costs

    • Internal failure costs
      • Rework
      • Repair
      • Failure mode analysis

    • External failure costs
      • Complaint resolution
      • Product return and replacement
      • Help line support
      • Warranty work
The following diagram gives an overview of the contribution of the various quality costs to the total quality cost, depending on the quality level. Based on this, the optimal quality level that minimizes the total quality cost can be determined. A minimal sketch of the underlying arithmetic follows the figure.

Figure: Qualitative Variation of Quality Costs
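
The figure is qualitative, but the arithmetic behind it is simply that the total cost of quality is the sum of prevention, appraisal, and failure costs, with prevention/appraisal spending and failure costs moving in opposite directions as the quality level rises. The Python sketch below, using entirely hypothetical cost figures, picks the quality level with the lowest total.

    # Hypothetical cost-of-quality figures (arbitrary currency units) at three quality levels.
    SCENARIOS = {
        "low quality":    {"prevention": 10, "appraisal": 15, "failure": 120},
        "medium quality": {"prevention": 30, "appraisal": 35, "failure": 40},
        "high quality":   {"prevention": 80, "appraisal": 70, "failure": 10},
    }

    def total_cost_of_quality(costs):
        # Total cost of quality = prevention + appraisal + failure costs.
        return costs["prevention"] + costs["appraisal"] + costs["failure"]

    for level, costs in SCENARIOS.items():
        print(level, total_cost_of_quality(costs))

    optimum = min(SCENARIOS, key=lambda level: total_cost_of_quality(SCENARIOS[level]))
    print("optimal level:", optimum)   # "medium quality" with these assumed figures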

Wednesday, January 04, 2006

QA, QC and Software Testing

Quality Control, Quality Assurance, and Testing are important, closely related, yet different concepts. There is a thin line that separates these three methods of ensuring software quality, but all three are useful for managing the risks of developing and managing software.

Quality Assurance refers to the process used to create the deliverables, and can be performed by a manager, client, or even a third-party reviewer. Examples of quality assurance include process checklists and project audits. Quality Assurance is process oriented. A QA review would focus on the process elements of a project, e.g., "Are requirements being defined at the proper level of detail?"

Quality Control refers to quality-related activities associated with the creation of project deliverables. Quality control is used to verify that deliverables are of acceptable quality to the client and that they are complete and correct. Examples of quality control activities include deliverable peer reviews and the testing process. Quality Control is product or service specific. QC activities focus on finding defects in specific deliverables, e.g., "Are the defined requirements the right requirements?" Testing is one example of a QC activity, but there can be other QC activities such as inspections and reviews.

Software testing is a process used to identify the correctness, completeness and quality of developed computer software. Actually, testing can never establish the correctness of computer software, as this can only be done by formal verification (and only when there is no mistake in the formal verification process). It can only find defects, not prove that there are none. The purpose of testing is to ensure that the users can be confident that their new systems will work as specified. Consequently, testing is being recognised as an increasingly important part of the development process that is essential for success. Hence we see that Software Testing is an example of Quality Control activity. Software Testing is generally done at the code level, and is more technical than managerial.

An Example

Suppose a project manager asked the client to approve the Business Requirements Report. The approval of the Business Requirements Report can be done in two ways.

One solution would be to actually review the document and the business requirements. This is an example of a quality control activity, since it is based on validating the deliverable itself.

Alternatively, the client could ask the project manager to describe the process used to create the document. A typical process might look like this:
  • Gathering client requirements in a client group meeting.

  • Documenting the requirements and asking the group for their feedback, modifications, etc.

  • Taking the updated requirements to representatives from different groups to add requirements to support company standards.

  • Reviewing the final document with the client
If approval is given on the basis of the process described above, then it is an example of a QA activity.

Hence, while quality control activities are focused on the deliverable itself, quality assurance activities are focused on the process used to create the deliverable. They are both powerful techniques, and both must be performed to ensure that the deliverables meet your customers' quality requirements.

There may be questions about the amount of QA/QC activity that needs to be done in an organization, and there may also be a conflict between the focus on QC and on QA. A good balance can be achieved by carrying out QA/QC activities according to the following guidelines:
  • While line management should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective.

  • The amount of external QA/QC should be a function of the project risk and the process maturity of an organization. As organizations mature, management and staff will implement the proper QA and QC approaches by default. When this happens only minimal external guidance and review are needed.
Hence, we conclude that Quality Assurance and Quality Control are essential to ensuring good software quality and need to be included in the default practices of organizations developing software products and services. Software Testing is a special case of Quality Control that tries to identify defects in the developed software before it is delivered to the end user. The software quality philosophy in the light of QA, QC, and Software Testing can be summed up in the following quote by Edward R. Murrow:

To be persuasive we must be believable --- QA
To be believable we must be credible --- QC
To be credible we must be truthful --- Software Testing

Tuesday, January 03, 2006

Software Quality - An Overview

Software quality is a field of study and practice that describes the desirable attributes of software products.

There are two basic approaches to Software Quality:

Defect Management Based Approach

A software defect can be regarded as any failure to address the end-user's requirements. Common defects include missed or misunderstood requirements and errors in design, functional logic, data relationships, process timing, validity checking, coding, etc.

The defect management approach is based on counting and managing defects. Defects are commonly categorized by severity, and the numbers in each category are used for planning. More mature software development organizations use tools such as defect leakage matrices (for counting the numbers of defects that pass through development phases prior to detection) and control charts to measure and improve development process capability.
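
As a minimal illustration of one of the tools mentioned above, the Python sketch below builds a simple defect leakage view from hypothetical defect records (the phase in which each defect was injected versus the phase in which it was detected) and counts, for each phase, how many defects leaked past it.

    from collections import Counter

    PHASES = ["requirements", "design", "coding", "testing", "field"]

    # Hypothetical defect records: (phase injected, phase detected).
    DEFECTS = [
        ("requirements", "requirements"), ("requirements", "design"),
        ("requirements", "testing"),      ("design", "design"),
        ("design", "coding"),             ("coding", "testing"),
        ("coding", "field"),
    ]

    def leakage_matrix(defects):
        """Counts of defects by (injected, detected) phase pair."""
        return Counter(defects)

    def leaked_from(phase, defects):
        """Defects injected in `phase` but only detected in a later phase."""
        later = PHASES[PHASES.index(phase) + 1:]
        return sum(1 for injected, detected in defects
                   if injected == phase and detected in later)

    print(dict(leakage_matrix(DEFECTS)))
    for phase in PHASES[:-1]:
        print(phase, "leaked:", leaked_from(phase, DEFECTS))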

Quality Attributes Approach

This approach to software quality is best exemplified by fixed quality models, such as ISO/IEC 9126. This standard describes a hierarchy of six quality characteristics, each composed of sub-characteristics:

  • Functionality: This attribute determines the ability of the software to conform to the functional requirements of the end user. In short, the functionality of a software product is directly related to the satisfaction of the client when he first sees the product/service.

  • Reliability: (1) The probability that the software will not cause the failure of a system for a specified time under specified conditions. This probability is a function of the inputs to, and use of, the system; the inputs determine whether existing faults, if any, are encountered. (2) The ability of a program to perform its required functions accurately and reproducibly under stated conditions for a specified period of time.

  • Usability: Software usability usually refers to the elegance and clarity with which the user interface of a software product is designed. Complex computer systems are finding their way into everyday life, and at the same time the market is becoming saturated with competing brands. This has led to usability becoming more popular and widely recognised in recent years, as companies see the benefits of researching and developing their products with user-oriented rather than technology-oriented methods.

  • Efficiency: Efficiency is the relationship between the level of performance of the software and the amount of resources (such as processing time and memory) used under stated conditions. Efficient software delivers the required performance while making economical use of the available computing resources.

  • Maintainability: Maintainability is the ease of maintenance of a software product once it is installed. For example, an application designed in an object-oriented programming language is typically more maintainable than an application written in a legacy programming language.

  • Portability: Portability is the ease with which a software product can be deployed across various operating systems, hardware profiles, web browsers, etc. For example, a Java-based application has greater portability than a .NET-based application, since the former can run across various operating systems, while the latter can run only on a Microsoft Windows based operating system.



Though a fixed software quality model is often helpful for forming an overall understanding of software quality, in practice the relative importance of particular software characteristics typically depends on the software domain, product type, and intended usage. Thus, software characteristics should be defined for, and used to guide the development of, each product.

Quality function deployment provides a process for developing products based on characteristics derived from user needs.

In conclusion, software quality is the set of characteristics of a product which can be assigned to requirements. In addition, we also need to take into account the characteristics that are not related to requirements: characteristics which reduce software quality (contra-productive characteristics) and neutral characteristics, which are not relevant to quality. It is clear that not only the presence of desirable characteristics is important; the absence of these contra-productive characteristics is also required for a software product to have good quality.

The Definitions

Software Assurance: The planned and systematic set of activities that ensure that software life cycle processes and products conform to requirements, standards, and procedures [IEEE 610.12 IEEE Standard Glossary of Software Engineering Terminology]. For NASA this includes the disciplines of Software Quality (functions of Software Quality Engineering, Software Quality Assurance, Software Quality Control), Software Safety, Software Reliability and Software Verification and Validation and Independent Verification and Validation.

Software Quality: The discipline of software quality is a planned and systematic set of activities to ensure quality is built into the software. It consists of software quality assurance, software quality control, and software quality engineering. As an attribute, software quality is (1) the degree to which a system, component, or process meets specified requirements. (2) The degree to which a system, component, or process meets customer or user needs or expectations [IEEE 610.12 IEEE Standard Glossary of Software Engineering Terminology].

Software Quality Assurance: The function of software quality that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented.

Software Quality Engineering: The function of software quality that assures that quality is built into the software by performing analyses, trade studies, and investigations on the requirements, design, code and verification processes and results to assure that reliability, maintainability, and other quality factors are met.

Software Reliability: The discipline of software assurance that 1) defines the requirements for software controlled system fault/failure detection, isolation, and recovery; 2) reviews the software development processes and products for software error prevention and/ or controlled change to reduced functionality states; and 3) defines the process for measuring and analyzing defects and defines/ derives the reliability and maintainability factors.


Software Safety: The discipline of software assurance that is a systematic approach to identifying, analyzing, tracking, mitigating and controlling software hazards and hazardous functions (data and commands) to ensure safe operation within a system.

Verification: Confirmation by examination and provision of objective evidence that specified requirements have been fulfilled [ISO/IEC 12207, Software life cycle processes]. In other words, verification ensures that “you built it right”.

Validation: Confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use are fulfilled [ISO/IEC 12207, Software life cycle processes.] In other words, validation ensures that “you built the right thing”.

Independent Verification and Validation (IV&V): Verification and validation performed by an organization that is technically, managerially, and financially independent. IV&V, as a part of Software Assurance, plays a role in the overall NASA software risk mitigation strategy applied throughout the life cycle, to improve the safety and quality of software.