Study Details

Each entry below is keyed by its reference number (Ref.) and gives the corresponding study detail.

[2]
The researchers measured the complexity, code cohesion, and code coupling of 2 open-source games. They compared these attributes between two versions of each game, one using design patterns and one not. Each quality attribute was measured using the following metrics (standard definitions of two of them are sketched after the list):
(1) Complexity
  • Attribute Complexity (AC)
  • Weighted Methods per Class 1 (WMPC1)
  • Weighted Methods per Class 2 (WMPC2)
  • Line of Code (LOC)
  • Number of Classes (NOC)
    (2) Code Coupling
  • Coupling Factor (CF)
    (3) Code Cohesion
  • Lack of Cohesion of Methods (LCOM)
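    For reference, a sketch of the standard textbook definitions of two of these metrics follows (the study may use tool-specific variants): the MOOD Coupling Factor and the Chidamber-Kemerer Lack of Cohesion of Methods.

        CF = ( Σ_i Σ_j is_client(C_i, C_j) ) / (TC^2 - TC)
        LCOM = |P| - |Q| if |P| > |Q|, otherwise 0

    Here TC is the total number of classes, is_client(C_i, C_j) is 1 if class C_i uses class C_j (i ≠ j) and 0 otherwise, P is the set of method pairs in a class that share no instance variables, and Q is the set of method pairs that share at least one.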
    [3]
    The reusability was measured using the following metrics:
  • Number of classes participating in the pattern (NOFParticipants)
  • Number of packages where the pattern is spread into (NOFpackageSet)
  • Reusability of Class Based Selection
  • Reusability of Pattern Based Selection
  • Reusability of Package Based Selection
  • Reusability of Multi-Package Based Selection
    [4] The modularity was measured by analyzing the crosscutting concern changes in each snapshot. The researchers compared the difference between two subsequent snapshots. The relationship between crosscutting concerns and design patterns was measured by examining the kind, size, pattern change, and the number of added crosscutting method calls.
    [5]
    This study had the following steps:
  • Identified change sets from CVS/SVN repositories.
  • Identified design patterns from software releases.
  • Performed a fine-grained analysis of design pattern changes.
    [6] The design patterns were identified from 3 Java programs: JHotDraw, ArgoUML, and Eclipse-JDT. The modification information for these 3 programs was maintained in CVS, where the source code resided.
    [8] The researchers compared the throughput of two implementations of sparse matrix-vector multiplication on GPUs. The first implementation used the PSBLAS library with design patterns. The second implementation used the standard PSBLAS library.
    [9] The modifiability was measured through the number of times a class was modified over a period.
    [10] The design patterns were identified from 5 software systems: a C++ commercial system, Java commercial system (A), Java commercial system (B), NetBeans, and JRefactory. The modification information for these 5 systems was maintained in CVS, where the source code resided.
    [11] The performance was measured through the execution time of programs implemented in 4 programming languages (C++, Java, Smalltalk, Perl 5.0) for each design pattern.
    [12] The researchers compared the performance of software between 2 versions, one using design patterns and one not. The performance was measured through the execution time, CPU utilization, and memory usage. However, the researchers focused on the execution time due to space limitations.
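    To illustrate how execution time and memory usage can be collected in such a comparison, here is a minimal Java sketch using only standard JDK calls (System.nanoTime and Runtime); the workload, class name, and single-run setup are hypothetical rather than taken from the cited study, and CPU utilization would normally require OS-level tooling.

        import java.util.concurrent.TimeUnit;

        public class PerfProbe {
            // Hypothetical workload standing in for the pattern / non-pattern versions.
            static void workload() {
                long sum = 0;
                for (int i = 0; i < 1_000_000; i++) sum += i;
                if (sum < 0) throw new IllegalStateException();
            }

            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();

                long before = System.nanoTime();
                workload();
                long elapsedNs = System.nanoTime() - before;

                // Approximate heap usage after the run.
                long usedBytes = rt.totalMemory() - rt.freeMemory();

                System.out.printf("execution time: %d ms, used heap: %d KB%n",
                        TimeUnit.NANOSECONDS.toMillis(elapsedNs), usedBytes / 1024);
            }
        }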
    [16]
The 31 graduate students had to implement two software versions (design pattern and non-design pattern). The students were asked to perform maintenance tasks on the same program. They had to submit reports discussing the design problems that were solved by each design pattern. The researchers categorized the issues from the submitted reports. The code complexity was examined using the following metrics:
  • LOC
  • Number of Classes (NOC)
  • Number of Attributes (NOA)
    The code coupling was examined using the following metrics:
  • Coupling between Objects (CBO)
  • Lack of Cohesion in Methods (LCOM)
  • Methods Per Class
    [18] The researchers measured the changes from the CVS/SVN history logs. They analyzed the changes between two subsequent snapshots and identified which files had been changed by a developer.
    [20] The participants were asked to perform Java programming tasks. Each task had 2 implementations (Factory Method pattern and constructor). The usability was measured through the completion time and the correctness of answers.
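    For context, the two creation styles compared in [20] typically look like the following minimal Java sketch; the Shape/Circle names and the factory method shown here are hypothetical illustrations, not the study's actual task code.

        // Constructor-based creation: the caller names the concrete class directly.
        interface Shape { double area(); }

        class Circle implements Shape {
            private final double r;
            Circle(double r) { this.r = r; }
            public double area() { return Math.PI * r * r; }
        }

        // Factory Method-based creation: a subclass decides which concrete Shape to build.
        abstract class ShapeCreator {
            abstract Shape createShape();          // the factory method
            double describe() { return createShape().area(); }
        }

        class CircleCreator extends ShapeCreator {
            Shape createShape() { return new Circle(1.0); }
        }

        public class FactoryVsConstructor {
            public static void main(String[] args) {
                Shape direct = new Circle(1.0);                      // constructor style
                double viaFactory = new CircleCreator().describe();  // factory method style
                System.out.println(direct.area() + " " + viaFactory);
            }
        }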
    [22] The researchers measured the comprehensibility through the efficiency of participants answering questions about class diagrams. The class diagrams contained designs employing design patterns and designs without design patterns. The efficiency was the relationship between the number of correct answers and the time spent.
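    A plausible formalization of this efficiency measure (the exact formula is not stated in this table) is the ratio

        efficiency = number of correct answers / time spent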
    [23] The researchers measured the correctness from the changes. The number of changes was recorded from the check-in commands in the version control system.
    [24] The correctness was measured through the number of code modifications in each software version. The modifications included fault fixes and enhancements for a new version.
    [26] The maintainability was measured through the answers, completion time, and efficiency. The participants were divided into 2 groups: one group had design patterns with documents, while the other group had only design patterns. The participants were asked to answer the given questions, which asked about the comprehension of the source code. The researchers performed 2 controlled experiments: the first experiment used UML diagrams as the document, and the second experiment (a replication) used comments in the source code as the document.
    [28] The performance was measured through the average execution time over multiple runs. The maintainability was measured by designing a number of maintenance scenarios and calculating the maintenance complexity.
    [31] The 24 participants performed understanding tasks on UML class diagrams by answering the given questions. This study used eye-trackers to generate the participants' fixation data. The effort of each participant was calculated from the fixation data and the number of relevant classes in the class diagrams.
    [34] The complexity was measured through the cyclomatic complexity. The researchers compared the complexity between 2 versions, one using design patterns and one not.
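    Cyclomatic complexity is assumed here to follow the standard McCabe definition (the study's tooling may differ in details):

        CC = E - N + 2P

    where E is the number of edges in the control-flow graph, N is the number of nodes, and P is the number of connected components (1 for a single routine). For example, a method containing a single if/else has E = 4, N = 4, P = 1, so CC = 2.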
    [35]
    The researchers measured the code cohesion and code coupling using the following metrics (a sketch of the LCOM5 definition follows the list):
  • Coupling Between Objects (CBO)
  • Lack of Cohesion in Methods (LCOM5)
  • Weighted Method Count (WMC)
  • Hitz and Montazeri's connectivity of a class
  • Numbers of new, inherited, and overridden methods
  • Total number of methods
  • McCabe's Cyclomatic Complexity (CC)
  • Number of hierarchical levels below a class and class-to-leaf depth
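    Assuming LCOM5 refers to the Henderson-Sellers variant (the paper may define it slightly differently), for a class with m methods and a attributes, where mu(A_j) is the number of methods accessing attribute A_j:

        LCOM5 = ( (1/a) * Σ_j mu(A_j) - m ) / (1 - m)

    A value near 0 indicates high cohesion (every method touches every attribute), while a value near 1 indicates low cohesion.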
    [40] The flexibility was measured through the frequency and level of granularity of the validation of class invariants.
    [41]
    The expandability was measured using the following metrics:
  • Line of Code (LOC)
  • Number of Methods
  • Number of Classes (NOC)
    The researchers compared these metrics between the design pattern implementation and the non-design pattern implementation.
    [44] The performance was measured through the execution time of 30 Ada programs.
    [45] The subjects were separated into 2 groups: an experienced group and an inexperienced group. The subjects in the experienced group performed the given tasks on the program using a direct approach. On the other hand, the subjects in the inexperienced group performed the given tasks on the program using design patterns. The maintainability was measured through the time spent and the correctness of the submitted programs.
    [46] Each participant was assigned to work on software change assignments. The correctness was measured through the number of faults in the submitted tasks.
    [47] The 118 students performed 3 maintenance tasks. The maintainability was measured through the completion time and the correctness of the solutions.
    [49] The researchers measured performance through the execution time of a program that uses the Observer pattern on different numbers of processors. The researchers also increased the number of observer objects and recorded the results. Each observer object ran as a separate thread that was distributed to a processor.
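    To make the setup concrete, here is a minimal Java sketch of an Observer pattern in which each observer is notified on its own thread via an ExecutorService; the Subject name and the thread-pool arrangement are illustrative assumptions, not the cited study's code.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        interface Observer { void update(double value); }

        class Subject {
            private final List<Observer> observers = new ArrayList<>();
            // Notifications are submitted to a pool so observers can run in parallel on several processors.
            private final ExecutorService pool = Executors.newCachedThreadPool();

            void attach(Observer o) { observers.add(o); }

            void publish(double value) {
                for (Observer o : observers) {
                    pool.submit(() -> o.update(value));
                }
            }

            void shutdown() { pool.shutdown(); }
        }

        public class ObserverDemo {
            public static void main(String[] args) {
                Subject subject = new Subject();
                for (int i = 0; i < 4; i++) {
                    int id = i;
                    subject.attach(v -> System.out.println("observer " + id + " got " + v));
                }
                subject.publish(42.0);
                subject.shutdown();
            }
        }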
    [50] The researchers collected the number of changes of classes playing roles in design patterns in a series of snapshots between the releases. They also collected the size (LOC) of classes playing roles in design patterns.
    [51] The participants worked on assigned tasks using four programs. Each program deployed different design patterns.
    [52] Each subject obtained 2 programs. One of those programs had comment lines describing design patterns and the other did not. Each subject had to perform 4 tasks. The maintainability was measured through the time spent and the correctness of the tasks.
    [53] Each subject had to perform a task with the documentation of design patterns and another task without the documentation. The maintainability was measured through the completion time and the correctness of the given tasks.
    [54] The researchers measured the performance of 4 programs. Each program had 2 versions (design patterns and non-design patterns). The performance was measured through the execution time.
    [55] The cohesion was measured through the Lack of Cohesion metric (LCOF/LCOM). The code complexity was measured through the Cyclomatic Complexity (CC).
    [56] The researchers deployed J2EE applications into a single server and into separated servers (Presentation, Business, and Database). The performance was measured through the following:
  • Throughput: the number of requests that the application is able to handle within a specified unit of time.
  • Response time: the time a client has to wait for the response.
  • Reliability: the number of correctly served requests.
  • The number of requests whose processing time is longer than a threshold value.
    The researchers compared the performance of each deployed design pattern.
    [57] The researchers compared the performance of 3 different versions of the same program. Each version used different design patterns. The program was also developed on 2 platforms (J2EE and .NET). The applications were also deployed either on the same application server or on separated application servers.
    [58] The subjects were asked to perform the given tasks on the source code. There were 2 groups of subjects. The first group performed tasks on the source code with the design pattern documentation. On the other hand, the second group performed tasks on the source code without the documentation. The maintainability was measured through the effort and efficiency of the subjects. The effort was measured by the completion time. The efficiency was measured by the correctness and completeness of 14 open questions.
    [59]
    The modularity was measured through coupling metrics. The researchers identified 3 properties of the coupling relationship, as follows:
  • Strength of coupling
  • Scope of coupling
  • Direction of coupling
    [60] Each team consisted of two members. One member of the team studied the requirements with the software design and explained the design to the other member. Both members performed two maintenance tasks together. The researchers measured the communication ability through the members' interaction with each other while performing the tasks. Each team had to perform the tasks before and after they took the design pattern course.
    [61]
    The defect information was collected as follows:
  • A developed tool was used to extract the design patterns in the source code.
  • The defects in the source code were discovered by a defect tracking system.
    [62]
    The correctness was measured through the evaluation of each given task solution. The correctness was defined as:
  • Requirement misunderstood: the solution is totally useless.
  • Wrong answer: the requirements were understood but the solution did not work.
  • Right idea: a reasonable solution, but it did not work.
  • Almost correct: the solution compiled and ran but did not give exactly the correct answer.
  • Correct: the solution compiled, ran, and produced correct output.
    [63] The performance was measured through the execution time of the assembly routine. The researchers compared the performance between the design pattern implementation and the non-design pattern implementation.
