What is JUnit?
One of the first questions that arises for new Java developers looking to test software projects is: what is JUnit? JUnit is the standard unit testing framework in the field of Java programming. It is a test-driven development framework which helps perform unit tests on a software system.
Why Is JUnit Testing Important?
Like every other unit testing framework, JUnit helps find bugs in a Java code snippet or any other indivisible software component. JUnit verifies that every logical component of the system works correctly. It focuses only on individual modules and pays no attention to the connections between these entities.
JUnit is also a regression testing tool for Java-based software systems. A regression testing framework detects errors arising as a result of code modification, which may be necessary for adding a new functional requirement or upgrading the system for a new stable release.
Without JUnit the programmer would have to use println() calls to output faults on the console. With JUnit this is mostly automated: JUnit aggregates the results of the tests performed in a well-structured manner. Without it, this is nearly impossible and may result in a mess of test data, decreasing productivity and increasing costs.
With JUnit the programmer can repeat and reuse test designs. This speeds up the generation of tests, and thus the process of writing code, in test-driven development.
How Is JUnit Testing Performed?
JUnit encourages test-driven software development. In test-driven development, the programmer first designs test cases and only later writes the code for the required business logic. When the test cases are first run, they report a high number of failures. The programmer eliminates these errors one after another until the code passes. Once this is done, the programmer tunes up the code in terms of quality and design. The same approach is used to develop the entire software system. Kent Beck is credited with rediscovering test-driven development, a technique often described as "test a little, code a little, test a little, code a little." This approach greatly reduces the programmer's debugging stress and increases productivity.
Advantages of Testing with JUnit:
1. Quick and easy generation of test cases and test data.
2. Ability to reuse older test cases, as well as their test data, when making a new test case.
3. Test cases that remain tied to their expected values, so regressions are caught.
4. Promotes TDD, i.e. test-driven development.
5. Enhanced productivity and reduced development cost.
6. Comprehensive, well-structured reporting.
7. Easy comparison between the expected output and the actual output.
8. Logical aggregation of test cases into suites.
Writing the JUnit Code for Testing:
How to Make JUnit Test Methods?
1. Add the junit.jar package to the classpath environment variable of your system.
2. Make a subclass of "TestCase".
3. In the subclass, define the actual test methods you want to add. You may add multiple test methods.
4. Inside these methods, call the assert methods.
5. Test methods are declared public and have a void return type because they do not return a value after execution.
6. They follow a standard naming convention with the name pattern "testXYZ".
7. XYZ in the naming pattern names the functionality or method on which the test is to be executed.
8. Test methods normally do not take any arguments.
How Should JUnit Tests Be Prepared?
Build tests around those units which could plausibly break the operation flow; these are often message passing, mathematical expressions, database connectivity, etc. Redundant and useless tests should be avoided so that the system is not unnecessarily overloaded. Simple setter/getter methods also need not be tested.
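The steps above can be sketched as a minimal JUnit 3-style test. The Calculator class and its add method are hypothetical, invented here purely for illustration:

```java
import junit.framework.TestCase;

// Hypothetical class under test, invented for this sketch.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// Make a subclass of TestCase (step 2 above).
class CalculatorTest extends TestCase {
    // A public void method, named with the "testXYZ" pattern,
    // taking no arguments and calling an assert method.
    public void testAdd() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));
    }
}
```

Running this class through a JUnit runner such as junit.textui.TestRunner executes each testXYZ method and aggregates the results.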

How to Become a Software Engineer
A common question among aspiring computer science and other students is: how does one become a software engineer? Software engineering is a highly regarded engineering field closely associated with discrete mathematics, mathematical logic, programming languages, and optimization theory. Today software engineering is of extreme importance, being used extensively to develop operating systems, internet infrastructure, browsers, embedded components, etc. Software engineers are hired around the globe in every sort of industry. Be it medicine, architecture, electronics, defense, business, hospitality, or e-commerce, software engineers are considered top industry personnel who shape the future course of the industry.
What are the Software Engineering Fields?
People aspiring to become software engineers should have a special interest in developing logic, finding algorithms for a task, and programming. Software engineering is a vast field that includes many specific sub-areas of study, such as:
Software Programmer: A software programmer understands the requirements of the software to be developed and selects the programming languages, algorithms, and coding techniques to be applied. In simple terms, he codes the business logic of the software. He usually works in programming languages such as C, Java, PHP, or Python.
Software Developer: A software developer produces the deployment model, class hierarchies, packages, modules, interfaces, UML models, etc. required in the development phase of the software.
Web Designer: A web designer essentially develops the user interfaces and other elements visible to the end user. He works with scripting and markup languages such as HTML, JavaScript, and CSS. He is also skilled in image and animation tools such as Flash, Photoshop, and CorelDraw. A web designer optimizes the graphical user interface to increase readability and ease of use.
Software Researcher: A person in the research and development department of a company has the goal of optimizing the software development methodologies used by the company. He has deep knowledge of theoretical computer science and related discrete mathematics, and aspires to develop better algorithms and software models.
There are other software industry personnel such as software testing experts, software analysts, technical writers etc.
What is Required to Start a Career in Software Engineering?
To start a career in software engineering, a formal bachelor's degree in a related field such as Information Technology, Computer Science, or Mathematics is of threshold importance. Standard degree titles include Bachelor of Technology, Bachelor of Science, and Bachelor of Computer Applications.
Bachelor Degree:
A bachelor's degree in a field such as CSE, IT, electronics, telecom, or any other related field from a reputable college recognized by a major accreditation institute is the most common way to enter software engineering. Most such degrees share a largely common academic syllabus apart from a few specialized courses.
Specialized Certification:
Many private educational firms offer specialized courses in Java, SQL, PHP, embedded systems, cloud computing, UNIX, etc. Such a certification can attract job opportunities as a software engineer.
Certified Examinations:
Large companies which innovate and develop major technologies also run educational programs, primarily to increase the workforce skilled in those technologies. Sun Certified Professional is a well-known certification program offering courses such as Sun Certified Java Programmer and Sun Certified Mobile Application Developer. After successful completion of a course and its exam, one can more easily get employed at a firm working with that technology.
When Should One Start Learning to Be a Software Engineer?
If one yearns to become a top-notch software engineer, one should start developing related interests early in life. Enthusiasts often start programming and building flowcharts in middle school, beginning with languages such as BASIC, C, or Java. Regular coding and logic building tune the mind toward strong programming strategies and quick development of algorithms. In high school they opt for mathematics and then enter a reputable university offering degrees in software engineering or computer science. In college they learn industry standards of software design, design and analysis of algorithms, systems programming, object-oriented languages, etc. Upon completing their college program they can be recruited into some of the best multinational companies and earn attractive salary packages.
What if I Didn’t Start Programming When I was 2 Years Old?
All is not lost. Many adults in Generations X and Y have worked in two or more major career fields over the course of their working lives. The key to transitioning to a career in software engineering is two-pronged: 1 – make time while working in your current job to obtain at least minimum certifications in a software engineering related discipline, and 2 – build your personal savings to maintain your quality of life while filling an entry-level software engineering position. The second key is often overlooked when potential software engineers read about the high salaries paid to those experienced in the field. As in other industries, the better paying positions are awarded to those with a track record of proven performance, which may take some time depending on individual and economic circumstances.

What is UML?
A common question for new software developers or computer science students undergoing training is: what is UML? UML is an acronym for Unified Modeling Language. It is a standard graphical modeling language used to capture various aspects of a software engineering project. The UML specifications are maintained by the Object Management Group. Created in the mid-1990s, UML is today used extensively in almost all medium to large software development projects. Since its inception, UML has continuously evolved through newer versions; UML 2.4.1 was officially released in 2011. Many UML tools are available for quick creation of project models, among them Umbrello, Eclipse, and MagicDraw.
Why Is Visual Modeling Important?
When developing large enterprise software, the developer team cannot simply jump straight into writing code. Once the client has comprehensively discussed the functional requirements of the software, the developers should be extremely clear about the modules, classes, interfaces, data, and workflows involved in the project. For small projects this is often done by listing and describing these aspects. But with large and complex software, the sheer number of such essential aspects makes it very difficult to convey a clear understanding of them merely by listing them. This is where visual models come into play. Visual models ensure an optimal number and scope of modules, correct interfacing between modules, and capture of all of the user's requirements.
Introduction To UML
We already know that visual modeling is essential for any medium to large software project. The most important feature of any visual model you create is that it should follow a standard which can clearly convey the idea to every member of the project team. UML has been around for more than a decade. UML 2.0 recognizes thirteen diagram types for representing an enterprise application, classified into three categories: structure, behavior, and interaction diagrams.
Structure Diagrams in UML
They represent the entities present in the software system: components, modules, packages, classes, instances, etc. Component diagrams represent the logical partitioning of the system into modules and how the modules relate to each other. Class diagrams depict the attributes and operations of the classes present. Object diagrams show the attribute values of class instances. Package diagrams represent how the classes have been grouped together. A deployment diagram shows the terminals, database servers, web servers, browsers, etc. and their interconnections. It may also include language-specific software components such as Java beans, JDBC, and Data Access Objects.
Behavior Diagrams in UML
They essentially include diagrams which depict the operational flow of the various processes present in the system. These may be Use Case Diagrams, Activity Diagrams, or State Machines. Use Case and Activity diagrams generally visualize the workflows associated with system processes. Use Case diagrams show how users of each kind interact with the system; 'actors' is the aggregate term for users and other external components which can supply input. State Machines visualize the user inputs and the corresponding state transitions, where a state is a possible situation the system may land in.

(Figure: UML Use Case Diagram)
Interaction Diagrams in UML
As the name suggests, interaction diagrams show how different components connect with each other. These diagrams focus on the interfaces between two entities and the flow of data between them. They may also track the time spent performing operations. A communication diagram shows the interaction of an object with every other object in terms of message exchanges. Sequence diagrams represent interactions in chronological order.
Standard Colors in UML
Using standard colors in UML diagrams makes even intricate models readable. Usually four light shades, such as yellow, pink, light blue, and light green, are used. Shapes of the same color share the same purpose; for example, every yellow shape in a model may represent a user or some other input agency.
What is Extreme Programming?
Extreme programming (XP) is a highly disciplined approach to software development. It complies with defined standards for the various stages of the software development cycle and a code of conduct for members of the developer team. The approach was designed by Kent Beck in the 1990s and is in tune with agile software development.
Why Should You Adopt Extreme Programming?
Extreme programming aims to deliver a high-quality software product as well as a rich development experience for the team. It keeps track of how well the software product satisfies the needs of customers, and it aspires to keep the development phase highly productive and cost-effective.
Extreme Programming Tasks
1. Coding: Code is at the heart of any software. The better the code, the better the system responds to input. Coding can be in Java, C, PHP, etc., depending on the purpose, user preferences, programmer preferences, and existing legacy code. Coding may or may not follow approaches such as object-oriented programming, code refactoring, iterative and incremental development, or the waterfall model.
2. Testing: Testing is not strictly necessary merely to generate output, but unless one is certain of clean and correct code, testing becomes a crucial phase in every software engineering task. Testing models used in extreme programming include unit testing, black-box testing, and regression testing.

(Figure: Extreme Programming Feedback Loop)
3. Listening: In this phase programmers must deeply understand the users' needs, either face to face or through documents. Care must be taken to capture all functional and non-functional customer requirements.
4. Designing: Again, output can be generated with any design strategy, but for quality assurance, extreme programming embraces software engineering designs that guide programmers toward quality code. Proper design guarantees minimal functional dependencies, zero code redundancy, modularity, code maintainability, and software extensibility.
5. Simplicity: Extreme programming calls for simplicity in every phase of the software development process. Keeping things simple is harder than it sounds. One way to stick to simplicity is to focus on the essential components rather than providing added functionality, which can be done at a later stage.
Values in Extreme Programming
1. Communication: Misunderstandings and communication gaps between the clients and the developer team can ruin an entire project. The developers should get a crystal-clear idea of all the project elements and how they interface with each other. Professionally, this is done through face-to-face discussion with the end users and through well-documented user requirements.
2. Feedback: Even after getting all the code and design correct, a project cannot receive a green signal unless it gives the desired result and satisfies the customer. This is verified through a feedback process: from the software before delivery, from all the developers, and from the customer.
3. Courage: For developers, courage means being willing to throw away code even in the middle of a project. A flaw may even require changing the complete architecture of the software and starting all over again.
Extreme Programming Practices
Extreme programming follows certain well-tested and widely accepted software development practices such as Refactoring, Pair Programming, the Planning Game, System Metaphor, and On-site Customer.
Pair Programming has two programmers work at one terminal. Both first discuss the problem; then one types the code while the other keeps a bird's-eye view of the code written so far.
Refactoring is the task of internally improving already written pieces of code without affecting their output or functionality.
System Metaphor is a simple description of the project's elements and workflows that everybody related to the project can understand.
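As a small illustration of refactoring, consider the sketch below, where a price calculation is restructured by extracting a named helper method. The class and method names are invented for this example; the point is that the output is identical before and after, and only the internal structure improves:

```java
// Before refactoring: the subtotal expression is repeated inline.
class OrderBefore {
    double total(double price, int quantity) {
        // subtotal plus 8% tax, with the subtotal written out twice
        return price * quantity + price * quantity * 0.08;
    }
}

// After refactoring: the subtotal is extracted into its own method
// and the tax rate is named. Behavior is unchanged.
class OrderAfter {
    private static final double TAX_RATE = 0.08;

    double total(double price, int quantity) {
        return subtotal(price, quantity) * (1 + TAX_RATE);
    }

    private double subtotal(double price, int quantity) {
        return price * quantity;
    }
}
```

In XP, a refactoring like this would be protected by the existing unit tests, which must still pass after the restructuring.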

What is Regression Testing?
A common question that arises in software engineering is: what is regression testing? The term refers to any variant of test program designed to find new errors, aka regressions, that arise in software functionality after changes have been made to the application or system. Regression testing is commonly conducted after bug fixes, configuration changes, patches, or functional enhancements have been made to software.
What is the Purpose of Regression Testing?
The primary purpose of regression testing is to ensure that the effort to "fix" bugs or add "functionality" to software does not leave it more broken than it was before. More specifically, the aim of regression testing is to uncover new regressions, or errors in existing functionality, after new changes have been made to the system.
Background Behind Regression Testing
Over time, major software developers have found that as software systems or applications are fixed, new faults appear or old bugs that had previously been corrected re-emerge. Many times, when an old bug comes back it is due to absent or poor revision control practices. Even on very disciplined project teams, these errors can come back through simple human error in the revision control process. Other times, basic bug fixes are considered "fragile": they solve one or two narrow cases but don't hold up under general testing or widespread use. As a result, it is considered good coding practice that when a software bug is discovered, a corresponding test is created and regularly re-run to guard against the issue arising again.
Many projects now set up automated testing systems that re-run all regression tests against the code base at specific intervals and provide automated reports to the project team. Other project teams prefer to run the tests manually after every successful build or compile of the code base, nightly, or once a week. Most of the time, an external tool such as Tinderbox, Hudson, or Jenkins is used to help run and track the results of the regression testing. In more corporate development environments, a separate quality assurance team or sub-contractor handles the testing requirements for the project. To help address the high cost of bug fixes after development is complete, many project teams have begun adopting smaller-scale unit testing during the development stage, which feeds the regression testing that occurs later.
Regression Testing Strategies
There are a number of regression testing methods used throughout the industry. The most common is rerunning previously developed tests to see what, if any, program behavior has changed. These tests also check whether previously fixed faults have re-emerged. Regression testing can also be used to systematically choose the minimum number of tests that adequately cover specific changes to the system. Specific considerations to include in any regression test plan include:
1 – Test fixed bugs as soon as practicable. The initial fix may have addressed the symptoms of the issue but not the bigger-picture cause.
2 – Look hard for side effects of bug fixes when developing regression tests.
3 – Every bug that is found and fixed should have its own regression test.
4 – If two or more tests are similar in nature, take a hard look at eliminating the less effective one.
5 – Archive tests that are consistently passed by the application or system, and run them at less frequent intervals or as needed.
6 – Trace the effects of programmatic changes on program memory.
7 – Make significant changes at the boundaries of data and program inputs, and seek out any corruption that results in the system.
8 – Don't focus on design issues; emphasize functional problems and capability in the testing.
How Do You Build a Regression Test Library?
The time-proven method is to develop a library of tests that can be run every time a new version or build of the software project is produced. One of the most challenging aspects of creating a library is deciding which test cases to include; err on the side of caution when making those decisions. Tests which exercise boundary conditions and timing, for both system and user-generated input, should be included. Some project teams include only tests which have found bugs; however, this doesn't take into account past bugs which may have been fixed many test iterations earlier.
Additionally, the regression test library should be reviewed at periodic intervals to help weed out redundant tests. Conventional wisdom in software engineering circles holds this frequency to be about once every three or four test cycles. When more than one person is writing test code or cases, it is quite common for redundant tests to be added to the regression test library.
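At its simplest, a regression test library is a named collection of checks re-run on every build. This plain-Java sketch shows the idea without relying on any external tool; the class and test names are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// A minimal regression test library: each entry pairs a test name
// (often referencing the bug it guards against) with a check.
class RegressionSuite {
    private final Map<String, BooleanSupplier> tests = new LinkedHashMap<>();

    void add(String name, BooleanSupplier check) {
        tests.put(name, check);
    }

    // Re-run every archived test and report each result; return the
    // number of failures so a build script can fail the build when
    // a regression re-appears.
    int runAll() {
        int failures = 0;
        for (Map.Entry<String, BooleanSupplier> e : tests.entrySet()) {
            boolean passed = e.getValue().getAsBoolean();
            System.out.println((passed ? "PASS " : "FAIL ") + e.getKey());
            if (!passed) failures++;
        }
        return failures;
    }
}
```

A build pipeline would populate the suite once (e.g. `suite.add("bug-123: empty input must not crash", ...)`) and call runAll() after every successful compile, nightly, or weekly, mirroring the schedules described above.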

SDLC Test Phases
SDLC, or the Software Development Life Cycle, is the process or framework of tasks required to develop a system. Although the methodology is commonly used for software development projects, it can also be extended to the creation of other complex systems. It tracks the project from the initial concept through a post-implementation analysis and maintenance phase. A commonly overlooked area of the SDLC for new engineers is understanding the test phases and the resulting deliverables from each phase of the testing process.
What Are the SDLC Phases?
SDLC is broken up into phases that may vary based on the variant of SDLC being implemented by the project team. Although testing appears as a single phase in the higher-level project model, the testing phase adopts deliverables produced in earlier portions of the SDLC life cycle in order to accomplish its goals.
Phase 1 – Requirements Gathering and Analysis
Phase 2 – System Design
Phase 3 – System Development
Phase 4 – System Testing
Phase 5 – Operations and Maintenance
SDLC Testing Phase
The SDLC testing phase is notionally designed to be carried out after system development is complete. The testing phase measures the actual versus expected outcomes of the system. Unlike quality control measures, which are designed to evaluate a developed work product and include audits to assess the cost of correcting defects, the goal of testing is to find defects through execution of the system or software package.
What is the SDLC Testing Life Cycle?
SDLC testing isn’t just accomplished through ad-hoc means. The testing process has its own life cycle to include: Test Analysis, Test Planning, Test Design, and Test Execution.
SDLC Test Analysis Phase – In the SDLC test analysis phase, the testing team has to gain an intimate understanding of the project’s requirements. All insights gained during this phase help design the testing suite.
SDLC Test Design Phase – During the design phase, test cases are created based on the stated project requirements and use cases. This phase is commonly short-changed by project teams who fall under time or budget constraints, but is one of the most critical in the testing life cycle.
SDLC Test Execution Phase – The testing team runs the test cases against the software or system. The results are recorded and then measured against the expected results. Shortcomings can be used to identify either corrective or future work based on the project team's goals and requirements.
What Are the Types of SDLC Testing?
Depending on the organization and software engineering model (or variant) being employed, there are a number of testing terms, types, and definitions that you will run across. Some of the most common include:
Acceptance Testing – This type of testing can be thought of as the equivalent of a "mid-term" or "final" exam. The goal of the acceptance test is to confirm the system or software meets the customer-defined requirements and is ready for a major milestone release.
Alpha Testing – Conducted after the majority of the software functionality is complete but before end-users are going to be involved. Typically accomplished by part of the project team but can be outsourced. Accomplished in close coordination with the project team.
Beta Testing – Conducted after project code is complete. For commercial software projects, the beta is commonly distributed to the public for free to generate buzz for the final product.
Black Box Testing – Tests conducted without any knowledge of the code architecture, language, or structure. Requires explicit requirements definition or specification documents to be carried out.
Functional Testing – This type of testing takes two or more modules and seeks out defects as they carry out their intended work. The ultimate goal is to ensure each module performs its functions as laid out in the project specification.
Independent Verification and Validation (IV&V) - The system or software is exercised in order to ensure it meets user expectations and project requirements. The testing organization or group should not be part of the software development team to ensure the test results are impartial.
Load Testing – Load testing helps determine how well the product handles heavy demand for system resources. This can be in terms of heavy website traffic, CPU, or memory utilization.
Performance Testing – Uses automated tools that are designed to test and tweak system performance. Measures how quickly a system can receive a given set of inputs or events.
Regression Testing – Checks to see if bug fixes have been implemented successfully. Also checks for the presence of new bugs or flaws that could have been created from correcting the original errors and ensures no baseline functionality has been lost.
Security Testing – Increasingly common in system and software testing circles. Consists of testing the network and database software to guard against accidental misuse, hackers, and known computer malware attacks.
System Integration Testing – Typically conducted when integrating a commercial off the shelf (COTS) system into a custom or unique project.
What is a SDLC Test Plan?
SDLC project test plans describe the overall objectives, approach, scope, and focus of the software testing effort. Typically, the larger the project, the more formally documented the testing plan. No matter the size of the test plan, however, the process of preparing the document helps the team think through the steps required to validate the acceptability of the product. When completed, outsiders should be able to understand both the "how" and the "why" of the product validation. The plan should not be so complex that it cannot be understood by those outside the testing group, but also not so simplistic as to fail to convey the importance of testing the system.

Open Source Agile Project Management Software
Engineering and software design projects of even modest complexity are rarely possible without committing tasks to a robust management tool that honors the principles of agile project management, with web-based applications now forming the bulk of those commonly used. The technical requirements of sharing workflow have generally kept pace with current planning methodologies, and the emergence of open source licensed software now offers capable and continuously updated tools fit for almost any purpose, including large-scale software development. A popular way for small software businesses to cut costs over the past few years has been to investigate the open source agile project management software available at low or no cost.
What is Agile Project Management?
Agile project management is a method of planning and implementing a project collaboratively and flexibly with stakeholders from within the organization, often including external participants whose input is considered essential, so choosing software that can be used online is preferable. Amongst commercial project management products the market has become saturated, whilst open source is still limited to a handful of excellent tools and a larger number of mediocre ones, though these shouldn't necessarily be ignored if the project is smaller or doesn't demand a full range of software features. Software really has revolutionized the management of rapid project development.

(Figure: Agile project management development.)
Why Should Open Source Agile Project Management Software Be Considered?
Any project initiated using agile methodology can only be as effective as the team and tools working within it. Whilst team selection is often out of the hands of project coordinators, the choice of software for working together collaboratively often isn't. Pricing for software varies considerably, but for small to medium-sized projects, such as application development, open source licensed software can be a good choice.
How Many Open Source Project Management Suites Are Available?
The number of open source applications is unlikely to grow much beyond the current crop, owing to the limited number of programmers willing to open source their work. However, in the agile software development market, a number of tools originally developed for bug tracking are now adding project planning features and becoming very capable, and some may even be useful in an engineering capacity with some modification. It should also be noted that the most popular open source agile project management tools are robustly tested and frequently updated, and tend to be developed by a core team of programmers relying on user feedback for bug tracking.
Things to Watch Out for When Adopting Open Source Agile Management Tools
Be aware that terminology can differ significantly between applications, a simple example being the use of product/project/story to define the top-level iteration, a situation often made worse when software has been adapted from another purpose. Developers do this to create a point of difference from their competition, but in most cases the management and planning of projects is no different. It is tempting to assume that, given the rapid nature of agile, any software which satisfies the requirements of the project leader needn't be perfect. In most cases this fallacy will prove onerous as the software is used for multiple projects, and a change in software will incur time costs as participants learn the structure of the new tool.
What Open Source Agile Project Management Software Packages Are Available?
Open source project planning software, which can be modified to suit the needs of the project and is usually free to use, has matured a great deal in the last few years, with notable products such as Agilefant, Agilo for Scrum, Clearworks, Express, IceScrum, eXPlainPMT, TaskJuggler and the recently forked XPlanner+ gaining wide use and traction.
How to Assess Open Source Agile Project Management Software
When assessing the best open source software for the need, choosing an application purely for its compatibility with Scrum or Extreme Programming methodology, when only a handful of tools exist, is probably not the best criterion. Instead, an analysis of the features of each tool against the needs of the project should be considered essential.
Many of the open source products on the market are developed by a single programmer. They can be fairly sophisticated, with sought-after tools and reports such as burndown charts, but often have slower development cycles. In comparing software for ongoing use it may not be sufficient to compare features; a more comprehensive approach should also consider the development team, feature requests, third-party add-ins, and overall uptake of the software.
All of the most commonly used open source agile project planning applications have a product (analogous to a project) at their top level, though terminology varies; some support only a single product with multiple milestones, while others support multiple products.
Iterative project management is of course the norm, but users will notice that burndown charts are not common to all, and in some applications this facility is flawed and in need of further development. Charting also differs between applications, with each displaying iterations in its own developer-specified way.
A feature matrix will show boxes checked without giving details, which is a major frustration. If burndown charting is essential, a demonstration of the software before selection should be obligatory. Larger projects will of course require user permissions and the ability to lock users out of making changes to parts of the project, such as the specification, whilst still allowing access for reference; again, the implementation of administrator-level versus user-level permissions in open source software can vary considerably.
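When demonstrating candidate tools, it helps to remember that a burndown chart is simple arithmetic: remaining work plotted against time, next to an ideal line running from total scope down to zero. A minimal sketch in Python follows; the function name and point totals are invented for illustration, and real tools compute this from their own task data:

```python
# Minimal burndown computation: remaining work per day versus the ideal line.
# All numbers here are made up for illustration.

def burndown(total_points, days, completed_per_day):
    """Return (ideal, actual) remaining-work series for one iteration."""
    # Ideal line: scope decreases linearly from total_points to zero.
    ideal = [total_points - total_points * d / days for d in range(days + 1)]
    # Actual line: subtract whatever was really completed each day.
    actual = [total_points]
    remaining = total_points
    for done in completed_per_day:
        remaining -= done
        actual.append(remaining)
    return ideal, actual

ideal, actual = burndown(total_points=40, days=10,
                         completed_per_day=[3, 5, 0, 6, 4, 2, 7, 5, 4, 4])
# actual ends at 0 because all 40 points were completed by day 10
```

Comparing the actual series against the ideal line is what exposes whether an iteration is ahead of or behind schedule, which is exactly the detail a checked feature box will not tell you.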
Given the nature of agile project management it seems surprising that user levels would even be an issue, until we remember that many project planning applications have their roots in LAN-based software and environments where tighter controls over staffing are possible. User controls also allow supervision of the time logged by staff and project contributors. Commercial software may rely on plugins to track time, while open source software may include this by default; the practice of separating out modules for later purchase isn't common in open source software.
Iterative planning is core to all of the currently available project planning software: all assist with resource and time management, and most offer charting or reporting to locate bottlenecks in larger, more complex projects. Some take project planning a step beyond Gantt chart drawing into the domain of intuitively managing projects, and if updated as tasks are completed they have the potential to come close to emulating JIT (just-in-time) strategies.
SDLC
SDLC (Systems Development Life Cycle) is a process used in information systems, systems engineering, and software engineering for creating new systems or altering existing ones. The SDLC can be thought of as a concept underlying a number of software development methodologies currently employed throughout industry. From it flows the framework used to create, plan, and control an information system, also known as the software development process.
Overview of the SDLC
SDLC describes a process used by engineers and analysts to create and deploy all aspects of an information system. These include defining requirements, validation, training, and emphasizing ownership of the system. Whenever the SDLC is employed, the goal is to create a system that meets the primary stakeholder's (or owner's) expectations. These include, but are not limited to, delivering the project within time and cost constraints. Factors taken into account are system deployment, ease of use, and minimization of errors when connecting to legacy system components likely created by different software vendors. To help manage the inherent complexity of designing enterprise software, several SDLC-based models have been created, such as the Waterfall, Spiral, and Agile methodologies.
An SDLC-based software engineering model can range from agile to sequential methods, depending on the suitability of the method to the project or task at hand. Each methodology balances different levels of risk and benefit within the scope of the project requirements, budget, and delivery timeline. Models such as the Waterfall focus on exact and complete up-front planning, suitable for large projects, whereas Scrum focuses on a lightweight process that allows for rapid change throughout the software development life cycle. Regardless of the method chosen, the SDLC is not the same as the project life cycle (PLC): the SDLC focuses on the product requirements, while the PLC includes all activities of the project (think marketing, sales, and other business matters).
SDLC History
According to Elliott & Strachan & Radford (2004), the SDLC originated in the 1960s to help create large business systems. The information systems of the day focused on heavy data processing and mathematical routines. Since its inception, several systems development frameworks have been based on some or all of the SDLC, such as SSADM (Structured Systems Analysis and Design Method), created for the UK Office of Government Commerce in the 1980s. Since then, the majority of life cycle approaches to system development have been created to fix a deficiency identified in traditional SDLC phases specific to the task at hand.
SDLC Phases
The SDLC framework consists of a series of phases (or steps) that are intended to be followed in sequence by software or system designers and developers. In each phase of the System Development Life Cycle, the results of the previous phase are used. The labeling or titles of the phases may vary depending on the corporate or development environment, but include planning, analysis, design, and implementation. The waterfall model is the oldest SDLC-based model created where the output of each stage of the process explicitly becomes the input of the next stage.


(Figure: a ten-phase model of the systems development life cycle. The tenth phase occurs when the system is disposed of and the tasks it performed are either eliminated or transferred to other systems.)
Project Planning – Determines the project’s goals and results in a high-level view of the potential project. A feasibility study may be undertaken as part of this phase.
Requirements Definition – Results in the creation of well-defined functions from the defined project goals, looking at the ultimate end user’s needs for the information system. In the Sashimi waterfall method, feedback can be provided back to project planners for goal modification if required.
Systems Design – Project features and operations are described in detail, including technical specifications, use of UML (when required/suitable), process diagrams, and even prototype creation along with other required documentation.
Implementation/Development – One of the most costly phases of the SDLC for information systems. Shortfalls in systems design or requirements definition become expensive in this phase if not addressed satisfactorily.
Integration and Testing – A phase commonly under-funded by corporate entities. In this phase all of the project components are integrated and tested for errors and interoperability in a dedicated test environment.
Acceptance and Deployment – Software is deployed to the customer and starts accomplishing the desired work.
Maintenance – The maintenance phase of the SDLC can become a project in and of itself. Future software upgrades, bug fixes, and regular maintenance are addressed during this stage, which may or may not have a well-defined end state.
Strengths and Weaknesses of the SDLC
The strict Waterfall model is not suitable for most systems development life cycles in today’s development environment. The underlying concepts, however, find their way into the latest “rapid” development methods throughout industry, with the pure SDLC practice lending itself better to a structured development environment. Many software developers have started taking the best practices from the SDLC as a guide to effective systems development for their respective projects.
Perceived Strengths of SDLC
- Increased Control
- Ability to monitor large projects
- Detailed steps
- Well defined user input and documentation
- Development and design standards
Perceived Weaknesses of SDLC
- Results in an increase in development time.
- Potential for increased development cost.
- Rigidity. Systems must be defined up-front with potentially limited user input.
- Project overruns can occur if errors occur in early stages of the project resulting in rework.

Sashimi Waterfall Model
When you hear the term “Sashimi,” you may first think of the Japanese style of overlapping slices of raw fish commonly enjoyed in sushi establishments. In software engineering and development circles, however, Sashimi refers to the Japanese hardware development model that comes from Fuji-Xerox. The Sashimi Waterfall model is a variation on the classic waterfall in that it allows a fair amount of overlap between the phases of the software development life cycle. The approach is considered suitable for projects that require insight to flow between the layers of development as the life cycle progresses. Whereas in the classic or pure waterfall method complete documentation is expected to be handed off to the team in charge of the next phase of development, in the Sashimi waterfall this documentation can change, which encourages continuity between the development phases.
Waterfall Model Refresher
For those not in school, or several quarters or semesters removed from their core software engineering course, the waterfall methodology is one of the best known and most recognized methods for software development. The original waterfall grew out of the manufacturing and construction fields, and takes its name from the way the phases of development flow downward (like a waterfall). The classic waterfall method is considered best for projects that have clear requirements which will remain relatively static. Other cases where the classic waterfall may prove suitable are when management considers it helpful to have a rigid project structure with a set budget and a well-defined timeline to adhere to. Sometimes the waterfall method is simply chosen based on the project manager’s personality. In the standard method, requirements are defined, then analyzed, designed, and developed. Unfortunately, the waterfall does not account for the fact that many projects simply cannot define all of their requirements before starting, forcing after-the-fact modifications to the process in order to deliver a successful project.

(Figure: similar to the popular Japanese Sashimi dish, the Sashimi Waterfall model’s phases overlap in an iterative fashion.)
Sashimi Waterfall Model
The Sashimi waterfall model was originally created by Peter DeGrace. Other terms for the Sashimi include “the waterfall model with feedback” or “the waterfall model with overlapping phases.” An advantage of permitting overlap between phases is that issues can be discovered earlier in the software development process, minimizing rework and producing a better final product. Engineers working on the design phase of a project may discover potential implementation problems before full production work begins. Conversely, since the implementation phase begins before the design is final, engineers may discover core design issues that were not apparent before development started. The iterative method inherent in the Sashimi has been found to eliminate or reduce costs associated with the classic waterfall model. It is not without its own problems, however: production teams and managers have to be careful to avoid iterating back to a phase that has already been closed, or repeating so many iterations that the successful completion of the product, or the bottom line, suffers.
Sashimi Waterfall Model Process
The Sashimi Waterfall model process consists notionally of six phases: Requirements, Design and Architecture, Development and Coding, Quality Assurance and Software Testing, Implementation, and Maintenance and Support.
Sashimi Requirements Phase
Creating well-defined requirements is one of the most important and most challenging aspects of any software development project. By the end of the requirements phase, however, all of the project teams or their representatives should have a good understanding of the task or project at hand. At this stage the requirements are written in a mix of plain English and some technical language, but are not formally expressed in technical terms (think of this phase as the bridge between the “idea” team and the team who has to implement the ideas). Some teams (particularly in small companies or small software teams) will also establish a timeline and budget in this phase, though many argue this should wait, or at least be refined, in the Design and Architecture phase once the true costs of the project can be suitably expressed.
Sashimi Design and Architecture Phase
During the Design and Architecture phase of the Sashimi, software architects define the technical and/or functional specification of the project at hand. In this phase, UML (Unified Modeling Language) is popular for communication with other development teams (or individual developers), in addition to the documentation created for the project sponsor. As design work progresses, it is common in the Sashimi model for requirements to be refined, with resulting changes in the architecture as the teams work through the problem set. At the end of the phase, a solid plan will have been defined for use in development, and a working prototype may be created based on the project.
Sashimi Development and Coding
Development and coding begins while the design and architecture phase is still ongoing; depending on the complexity of the project, the overlap may not start until the later stages of design. The development and coding phase can be one of the most expensive and time-consuming phases of any software development project. As a result, early feedback from those implementing the project is key to avoiding significant mistakes that can result in missed deadlines or costly rework.
Sashimi Quality Assurance and Software Testing
A significant advantage of the Sashimi Waterfall over classic implementations occurs during this phase. Since Sashimi is an iterative approach, testing occurs throughout the development process and finds issues much earlier in the development cycle. Developers are encouraged to test throughout the coding and development process as well as during the deployment process. Sometimes testing will be broken up to various phases of its own based on the complexity of the project. Some of the commonly found testing types in industry are:
Unit Testing
Regression Testing
Integration Testing
Performance Testing
Load Testing
Compatibility Testing
System Testing
Functional Testing
Human Factors Testing (commonly ignored, and a common source of complaints in consumer-facing projects)
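To make the first item on that list concrete, a unit test exercises one small component in isolation, exactly as the Sashimi model encourages throughout coding. The sketch below uses Python's standard unittest module; the apply_discount function is a made-up business routine, invented purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business function: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    """Unit tests: each case checks one behaviour of apply_discount in isolation."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False lets the interpreter continue after the test run.
    unittest.main(exit=False)
```

Because each test targets a single behaviour, a failure points directly at the broken component, which is what makes running such tests continuously through the development phase cheap and informative.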
Sashimi Implementation
The Implementation phase of the Sashimi may actually overlap with several of the previous phases depending on the nature of the project. Many teams will begin creating both technical and consumer support documentation in parallel with development and testing to better capture critical insights for the client. Additionally, this is the time when the software is delivered or installed and if required, client training occurs.
Sashimi Maintenance and Support
Providing maintenance and support is key to ensuring the client remains satisfied. Errors can occur for a number of reasons, technology can change, and support becomes an evolving and persistent process of its own. Many times the support phase becomes a separate project of its own for companies based on the client and scope of work completed.

Agile Software Development PART 2
Suitability of Agile Methods
Whether agile methods are generally suitable depends on the chosen viewpoint. From a product perspective, agile methods are appropriate when requirements are vague and changeable; they are less suitable for systems that must meet critical requirements such as reliability and security, although there is no complete consensus on this. From an organizational perspective, capability can be measured along three dimensions: culture, people, and communication. In this context, a number of success factors have been identified:
The culture of the organization should be open to discussion and negotiation
People must be trusted
Fewer but more competent people
Organizations must accept the decisions that developers take
Organizations must have an environment where rapid communication between team members is possible
The most important factor may be project size. Personal communication within a project team becomes more difficult as the team grows, which is why agile methods are considered most suitable for small projects not exceeding 20 to 40 people.
Another problem is that assumptions made at the start, or an excessively rapid gathering of requirements, can cause a deviation from the optimal solution, especially if the customer has not clearly expressed its wishes. Likewise, given human nature, a “dominant” developer may put a strong personal stamp on a design that does not correspond with the desired project outcome. Historically, developers have been able to impose their solutions on customers by convincing them theirs is best, only to find that the solution does not work. In theory, the iterative nature of agile should correct this quickly, but that assumes negative feedback actually arrives; if it does not, the deviation can grow rapidly.
This objection could be addressed by establishing the requirements in a separate phase (common in agile methods), thereby isolating the influence developers can exert, or by involving the customer continuously in development and having every intermediate step tested. The problem is that customers may not want to invest much time in this. It also complicates QA (Quality Assurance), because there are no clear (SMART) test objectives that remain stable from release to release.
Using DSDM as a ‘Suitability Filter’
Assessing the suitability of individual agile methods requires deeper analysis. The DSDM method, for example, provides a ‘suitability filter’ for this purpose. The Crystal family of methods provides criteria by which a method can be selected for a particular project, the selection being based on project size, criticality, and priorities. Other agile methods do not provide such explicit tools to determine their suitability.
Certain methods, such as DSDM and Feature Driven Development (FDD), are said to be suitable for all software development, regardless of environmental factors (Abrahamsson et al., 2003).
A comparison of agile methods reveals that each supports different stages of the software development life cycle. These individual characteristics can obviously be put to good use when selecting a method for a specific project.
Agile development has been extensively documented (see Agile development in the profession, below, as well as Beck, pg. 157, and Boehm and Turner, pgs. 55-57) to work well in small (<10 developers) co-located teams.
It remains an open question whether agile development is suitable for the following scenarios:
Large-scale development (>20 developers), although proposals have been made
Distributed development (teams that are not co-located, possibly spread around the globe); proposals have been made in Bridging the Distance and Using an Agile Software Process with Offshore Development
Projects of strategic or vital importance
Hierarchical organizations
Agile Development Successes
Interestingly, several major successes have been recorded at organizations such as BT Group, which had hundreds of developers located in the United Kingdom, Ireland and India working together on projects using agile methodologies. Although the suitability of agile methods for certain project types can undoubtedly be questioned, size and geographical spread are apparently not insurmountable barriers to success.
Barry Boehm and Richard Turner suggest that risk analysis should be used to choose between adaptive (“agile”) and plan-driven (prescriptive) methods. According to them, each end of the continuum has its own home ground:
Agile home ground:
Low criticality
Senior developers
Highly variable project requirements
Small number of developers
Culture that thrives on chaos
Plan-driven home ground:
High criticality
Junior developers
Predetermined project requirements
Many developers
Culture that demands order
Agile Methods and Method Tailoring
Different terms are used to denote method adaptation, such as ‘method tailoring’, ‘method fragment adaptation’ and ‘situational method engineering’. Method tailoring is defined as:

a process or capability in which human agents determine a system development approach for a specific project situation through responsive changes in, and dynamic interplays between, contexts, intentions, and method fragments.
Almost all agile methods are potentially eligible for tailoring. Even the DSDM method has been used for this purpose and has been successfully tailored in a CMM context. Situational adjustment is considered a feature that distinguishes agile methods from traditional development methods, the latter being relatively inflexible and prescriptive. The practical consequence is that agile methods allow project teams to adapt practices to individual project needs. Practices are concrete activities and products that are part of a methodical framework. At an extreme level, even the philosophy behind the method, consisting of a number of ‘principles’, can be adapted (Aydin, 2004).
In the case of XP, the need for method adaptation is made explicit. One of the fundamental assumptions of XP is that no single process fits every project; instead, practices should be tailored to the needs of the individual project. There are no known experience reports in which all XP practices were adopted; on the other hand, partial adoption of XP practices has been reported on several occasions.
A distinction can be made between static and dynamic method adaptation. The basic assumption behind static method adaptation is that the project context is given at the start and remains fixed throughout the project, resulting in a static definition of the project context. With such a definition, a choice can be made from fragments of structured methods. Dynamic method adaptation, by contrast, assumes that the project context develops over time: the project is highly unpredictable and subject to change, and it cannot be determined in advance which method fragments will be applied. Project managers will therefore, during project execution, have to switch method fragments, adapt them to their needs, or even invent new ones (Aydin et al., 2005).
Agile Methods and Project Management
Agile methods differ substantially in how much they overlap with project management. Some methods are supplemented with explicit guidelines for project management.
Measuring Agility
Although agility is usually seen as a means to an end, there are several proposals for quantifying it. Agility Index Measurements (AIM) score projects against a number of agility factors. The near-namesake Agility Measurement Index scores developments against five dimensions of a project (time, risk, novelty, effort, interaction). Other techniques are based on measurable goals. One study that uses fuzzy mathematics proposes project velocity as a measure of agility.
The practical applicability of these measures remains to be seen.
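As an illustration only, a dimension-based index like the ones described above could be reduced to a weighted sum. The weights, scores, and formula below are invented for the sketch; neither AIM nor the Agility Measurement Index publishes this exact calculation:

```python
# Illustrative weighted score over the five dimensions named above
# (time, risk, novelty, effort, interaction). All weights and scores
# here are invented for the example.

DIMENSIONS = ("time", "risk", "novelty", "effort", "interaction")

def agility_index(scores, weights):
    """Combine per-dimension scores (0-10 scale) into one weighted number."""
    assert set(scores) == set(weights) == set(DIMENSIONS)
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(scores[d] * weights[d] for d in DIMENSIONS)

scores  = {"time": 7, "risk": 5, "novelty": 8, "effort": 6, "interaction": 9}
weights = {d: 1 / len(DIMENSIONS) for d in DIMENSIONS}  # equal weighting
index = agility_index(scores, weights)  # equal weights give the plain average
```

With equal weights the index is simply the average score across dimensions, which underlines the point above: the practical value of any such measure depends entirely on how the individual dimensions are scored.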
Overview of Agile methods
Some well-known agile development methods:
Extreme Programming (XP)
Scrum
Agile Modeling
Adaptive Software Development (ASD)
Crystal Clear and Other Crystal Methodologies
Dynamic Systems Development Method (DSDM)
Feature Driven Development (FDD)
Lean Software Development
Agile Unified Process (AUP)
Continuous integration
Other approaches:
Agile Documentation
Iconix Process
Microsoft Solutions Framework (MSF)
Agile Data Method
Database refactoring
When you need to learn more about the agile model for school or your project, I strongly recommend getting this book: The Agile Samurai by the Pragmatic Programmers. It’s only $22.83 and taught me way more than any other book (and I’ve read a lot on the subject!).
 
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


