From 69b2d6a1d99ea4f33df434543a8d25feb6407337 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Sun, 24 Mar 2019 01:29:08 +0800 Subject: [PATCH 01/19] Add boilerplate for technical report --- technical-reports/Performance-Testing.md | 29 ++++++++++++++++++++++++ 1 file changed, 29 insertions(+) create mode 100644 technical-reports/Performance-Testing.md diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md new file mode 100644 index 0000000..c619386 --- /dev/null +++ b/technical-reports/Performance-Testing.md @@ -0,0 +1,29 @@ +# Performance Testing of TEAMMATES + +This report gives a detailed explanation of the profiling operations performed on TEAMMATES. It gives an outline of the +problem and describes our proposed solution. + +* [Introduction](#Introduction) +* [Problem](#Problem) +* [A brief overview of the Proposed Solution](#Overview-of-Solution) +* [Tools considered for Performance Testing](#Tools-considered-for-Performance-Testing) +* [Reasons for using JMeter](#Reasons-for-using-JMeter) +* [Design of the Workflow](#Design-of-the-workflow) +* [Findings](#Findings) +* [Future Improvements](#Future-improvements) + +## Introduction + +## Problem + +## Overview of Solution + +## Tools considered for Performance Testing + +## Reasons for using JMeter + +## Design of the workflow + +## Findings + +## Future Improvements From b6561a4b75ea75804c0df89224f5e6db521d0fe4 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Sun, 24 Mar 2019 01:53:04 +0800 Subject: [PATCH 02/19] Reorder --- technical-reports/Performance-Testing.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index c619386..cec3aeb 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -1,8 +1,5 @@ # Performance Testing of TEAMMATES -This report gives a detailed explanation of the profiling operations 
performed on TEAMMATES. It gives an outline of the -problem and describes our proposed solution. - * [Introduction](#Introduction) * [Problem](#Problem) * [A brief overview of the Proposed Solution](#Overview-of-Solution) @@ -14,6 +11,9 @@ problem and describes our proposed solution. ## Introduction +This report gives a detailed explanation of the profiling operations performed on TEAMMATES. It gives an outline of the +problem and describes our proposed solution. + ## Problem ## Overview of Solution From 7d3569d202c1c1d6df82663bf0df72a68cb4a54e Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Sun, 24 Mar 2019 02:03:47 +0800 Subject: [PATCH 03/19] Rephrase --- technical-reports/Performance-Testing.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index cec3aeb..92676a5 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -1,18 +1,18 @@ -# Performance Testing of TEAMMATES +# Continuous Profiling in TEAMMATES * [Introduction](#Introduction) * [Problem](#Problem) -* [A brief overview of the Proposed Solution](#Overview-of-Solution) +* [Overview of the Proposed Solution](#Overview-of-Solution) * [Tools considered for Performance Testing](#Tools-considered-for-Performance-Testing) * [Reasons for using JMeter](#Reasons-for-using-JMeter) -* [Design of the Workflow](#Design-of-the-workflow) -* [Findings](#Findings) -* [Future Improvements](#Future-improvements) +* [Current implementation of the solution](#current-implementation-of-the-solution) +* [Findings and Recommendations](#findings-and-recommendations) +* [Future Work](#Future-work) ## Introduction -This report gives a detailed explanation of the profiling operations performed on TEAMMATES. It gives an outline of the -problem and describes our proposed solution. 
+This report gives a brief overview of the profiling operations performed on TEAMMATES. It gives an outline of the +problem and describes the reasons behind our proposed solution. ## Problem @@ -22,8 +22,8 @@ problem and describes our proposed solution. ## Reasons for using JMeter -## Design of the workflow +## Current implementation of the solution -## Findings +## Findings and Recommendations -## Future Improvements +## Future Work From 5aae5344dd8a03cf9fe1faca95de553b0bc7ec86 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Sun, 24 Mar 2019 02:13:57 +0800 Subject: [PATCH 04/19] Add authors --- technical-reports/Performance-Testing.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 92676a5..e396308 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -1,5 +1,7 @@ # Continuous Profiling in TEAMMATES +Authors: [Ronak Lakhotia](https://github.com/RonakLakhotia), [Amrut Prabhu](https://github.com/amrut-prabhu) and [Jacob Li PengCheng](https://github.com/jacoblipech) + * [Introduction](#Introduction) * [Problem](#Problem) * [Overview of the Proposed Solution](#Overview-of-Solution) From 7f1d72f5f3286173df5d7dbb67a0c0522444805f Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Sun, 24 Mar 2019 02:56:21 +0800 Subject: [PATCH 05/19] Add problem section --- technical-reports/Performance-Testing.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index e396308..56dfb23 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -18,6 +18,13 @@ problem and describes the reasons behind our proposed solution. ## Problem +TEAMMATES is one of the biggest student projects in the open source community. 
Currently, TEAMMATES boasts of a community comprising over 450 developers and a codebase +of nearly 130LoC. Maintaining such a project demands high quality standards to ensure long term survival. This means, +continuously monitoring code health and product performance. As the number of developers and user base continue to grow, +we need to ensure optimal performance at all times. In this report, we propose a viable solution to perform regression +tests that will help developers keep a track of the potential bottlenecks and areas of optimizations. This will help +boost the performance as the product evolves over time. + ## Overview of Solution ## Tools considered for Performance Testing From b0e38c0a9b886fa172a3876ca485a68b9367f824 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Mon, 15 Apr 2019 18:37:56 +0800 Subject: [PATCH 06/19] Update --- technical-reports/Performance-Testing.md | 34 ++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 56dfb23..64425dc 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -27,12 +27,46 @@ boost the performance as the product evolves over time. ## Overview of Solution +The idea behind L&P tests is to simplify the process of understanding production performance and enable the developers to address bottlenecks before they become genuine production issues. +Implementing these tests involves a few key points:- + +* A tool/software to help perform these tests +* A database to store the data of the profiler +* A way of generating reports to help developers understand the metrics + +After carefully considering various tools, we decided to use [Apache JMeter](https://jmeter.apache.org/) to help run the performance tests. +In this report we will discuss the reasons behind why we chose JMeter and a more detailed description of our implementation. 
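The load-simulation idea behind these tests can be pictured with a toy, stdlib-only Java sketch of the "many concurrent virtual users" behaviour that a dedicated tool automates for us. The user count and the `Thread.sleep` standing in for an HTTP request are invented for illustration; JMeter manages this properly via thread groups:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy illustration of what a load-testing tool automates: N "virtual users"
// run concurrently, each performing one simulated request against the server.
// The user count and the 20 ms sleep are arbitrary illustrative values.
public class VirtualUsersSketch {

    public static void main(String[] args) throws InterruptedException {
        int virtualUsers = 20;
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        CountDownLatch finished = new CountDownLatch(virtualUsers);

        for (int i = 0; i < virtualUsers; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(20); // stand-in for one HTTP request to the server under test
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    finished.countDown();
                }
            });
        }

        finished.await(10, TimeUnit.SECONDS);
        pool.shutdown();
        System.out.println("virtual users completed: " + virtualUsers);
    }
}
```

A real tool adds what this sketch lacks: ramp-up control, per-request timing, and report generation.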
+ + ## Tools considered for Performance Testing ## Reasons for using JMeter + + ## Current implementation of the solution +JMeter offers us a couple of ways to perform the tests. We had the choice of performing these tests with automating tools like [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) +and the [JMeter Java API](https://jmeter.apache.org/api/index.html). We explored both possibilities but ended up using the JMeter Java API. + +The JMeter-gradle-plugin along with the JMX files had a few issues. Firstly, it is not well maintained and does not have easy-to-find documentation. The existing resources are outdated and are not in sync with +the latest version of JMeter. On the other hand, we found the JMeter Java API to fit well with TEAMMATES' backend testing framework. +It is also easier to integrate into the CI pipeline with a TestNG gradle task. The entire process is more coherent while allowing the same level of configuration. + +A brief description of the process:- + +* Create a test json and csv file for the test. + * Since the data files are large (at least 5 times the size of `*UiTest.json` files with at least 100s of entities), they are deleted and not committed to the repo. This way, we can easily change the scale of the test without having to rewrite the code for generating the data. + +* Create the JMeter test and run. + * Each test configures the test plan, similar to how it is done in the GUI. We also considered using a Builder pattern, but it didn’t make complete sense to do so (since we can’t say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests. + +* We then display summarised results for that endpoint and also determine the failure threshold criteria. 
+ +A more detailed overview of the tasks performed can be seen in the [Continuous Profiling Project page](https://github.com/teammates/teammates/projects/7). + ## Findings and Recommendations ## Future Work + +We need to fine-tune the L&P test parameters and set suitable thresholds for failure. These should align with the goals of the application. From b6a67dd862379059da12af9015df964386324bec Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Tue, 16 Apr 2019 02:06:28 +0800 Subject: [PATCH 07/19] Update changes --- technical-reports/Performance-Testing.md | 28 +++++++++++++----------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 64425dc..244c880 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -1,4 +1,4 @@ -# Continuous Profiling in TEAMMATES +# Performance Testing in TEAMMATES Authors: [Ronak Lakhotia](https://github.com/RonakLakhotia), [Amrut Prabhu](https://github.com/amrut-prabhu) and [Jacob Li PengCheng](https://github.com/jacoblipech) @@ -18,8 +18,8 @@ problem and describes the reasons behind our proposed solution. ## Problem -TEAMMATES is one of the biggest student projects in the open source community. Currently, TEAMMATES boasts of a community comprising over 450 developers and a codebase -of nearly 130LoC. Maintaining such a project demands high quality standards to ensure long term survival. This means, +TEAMMATES is one of the biggest student projects in the open source community. Currently, TEAMMATES boasts of a community comprising over 450 developers and a codebase of nearly 130LoC (as of 15th April, 2019). +Maintaining such a project demands high quality standards to ensure long term survival. This means, continuously monitoring code health and product performance. As the number of developers and user base continue to grow, we need to ensure optimal performance at all times. 
In this report, we propose a viable solution to perform regression tests that will help developers keep a track of the potential bottlenecks and areas of optimizations. This will help @@ -28,13 +28,13 @@ boost the performance as the product evolves over time. ## Overview of Solution The idea behind L&P tests is to simplify the process of understanding production performance and enable the developers to address bottlenecks before they become genuine production issues. -Implementing these tests involves a few key points:- +Implementing these tests involves a few key points: -* A tool/software to help perform these tests +* A tool/software to help performing these tests * A database to store the data of the profiler * A way of generating reports to help developers understand the metrics -After carefully considering various tools, we decided to use [Apache JMeter](https://jmeter.apache.org/) to help run the performance tests. +After carefully considering various tools, we decided to use [Apache JMeter](https://jmeter.apache.org/) to help running the performance tests. In this report we will discuss the reasons behind why we chose JMeter and a more detailed description of our implementation. @@ -47,21 +47,23 @@ In this report we will discuss the reasons behind why we chose JMeter and a more ## Current implementation of the solution JMeter offers us a couple of ways to perform the tests. We had the choice of performing these tests with automating tools like [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) -and the [JMeter Java API](https://jmeter.apache.org/api/index.html). We explored both possibilities but ended up using the JMeter Java API. +and the [JMeter Java API](https://jmeter.apache.org/api/index.html). We explored both possibilities but ended up using the JMeter Java API. Some key observations we made: -The JMeter-gradle-plugin along with the JMX files had a few issues. 
Firstly, it is not well maintained and does not have easy-to-find documentation. The existing resources are outdated and are not in sync with -the latest version of JMeter. On the other hand, we found the JMeter Java API to fit well with TEAMMATES' backend testing framework. -It is also easier to integrate into the CI pipeline with a TestNG gradle task. The entire process is more coherent while allowing the same level of configuration. +* The jmeter-gradle-plugin is not well maintained and does not have easy-to-find documentation. +* The existing resources are outdated and are not in sync with the latest version of JMeter. +* The JMeter Java API, on the other hand, fits well with TEAMMATES' backend testing framework. +* It is also easier to integrate it into the CI pipeline with a TestNG gradle task. +* The entire process is more coherent while allowing the same level of configuration. -A brief description of the process:- +A brief description of the process: * Create a test json and csv file for the test. - * Since the data files are large (at least 5 times the size of `*UiTest.json` files with at least 100s of entities), they are deleted and not committed to the repo. This way, we can easily change the scale of the test without having to rewrite the code for generating the data. + * Since the data files are large (at least 5 times the size of test data used for E2E tests), they are not committed to the repo. This way, we can easily change the scale of the test without having to rewrite the code for generating the data. * Create the JMeter test and run. * Each test configures the test plan, similar to how it is done in the GUI. We also considered using a Builder pattern, but it didn’t make complete sense to do so (since we can’t say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests. 
-* We then display summarised results for that endpoint and also determine the failure threshold criteria. +* Determine the failure threshold criteria and display the summarised results for that endpoint. A more detailed overview of the tasks performed can be seen in the [Continuous Profiling Project page](https://github.com/teammates/teammates/projects/7). From 55c6b4a8f4c75834321fa7e5e581cb90b2f298a2 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Tue, 16 Apr 2019 02:16:29 +0800 Subject: [PATCH 08/19] Fix line breaks --- technical-reports/Performance-Testing.md | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 244c880..ee2edd4 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -13,17 +13,15 @@ Authors: [Ronak Lakhotia](https://github.com/RonakLakhotia), [Amrut Prabhu](http ## Introduction -This report gives a brief overview of the profiling operations performed on TEAMMATES. It gives an outline of the -problem and describes the reasons behind our proposed solution. +This report gives a brief overview of the profiling operations performed on TEAMMATES. It gives an outline of the problem and describes the reasons behind our proposed solution. ## Problem TEAMMATES is one of the biggest student projects in the open source community. Currently, TEAMMATES boasts of a community comprising over 450 developers and a codebase of nearly 130LoC (as of 15th April, 2019). -Maintaining such a project demands high quality standards to ensure long term survival. This means, -continuously monitoring code health and product performance. As the number of developers and user base continue to grow, -we need to ensure optimal performance at all times. 
In this report, we propose a viable solution to perform regression -tests that will help developers keep a track of the potential bottlenecks and areas of optimizations. This will help -boost the performance as the product evolves over time. +Maintaining such a project demands high quality standards to ensure long term survival. +This means, continuously monitoring code health and product performance. As the number of developers and user base continue to grow, we need to ensure optimal performance at all times. +In this report, we propose a viable solution to perform regression tests that will help developers keep a track of the potential bottlenecks and areas of optimizations. +This will help boost the performance as the product evolves over time. ## Overview of Solution @@ -37,7 +35,6 @@ Implementing these tests involves a few key points: After carefully considering various tools, we decided to use [Apache JMeter](https://jmeter.apache.org/) to help running the performance tests. In this report we will discuss the reasons behind why we chose JMeter and a more detailed description of our implementation. - ## Tools considered for Performance Testing ## Reasons for using JMeter @@ -46,8 +43,8 @@ In this report we will discuss the reasons behind why we chose JMeter and a more ## Current implementation of the solution -JMeter offers us a couple of ways to perform the tests. We had the choice of performing these tests with automating tools like [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) -and the [JMeter Java API](https://jmeter.apache.org/api/index.html). We explored both possibilities but ended up using the JMeter Java API. Some key observations we made: +JMeter offers us a couple of ways to perform the tests. 
We had the choice of performing these tests with automating tools like [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) and the [JMeter Java API](https://jmeter.apache.org/api/index.html). +We explored both possibilities but ended up using the JMeter Java API. Some key observations we made: * The jmeter-gradle-plugin is not well maintained and does not have easy-to-find documentation. * The existing resources are outdated and are not in sync with the latest version of JMeter. From 4201c7449f0cf548c0a815f49621b57ad32a6f2e Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Tue, 16 Apr 2019 21:58:39 +0800 Subject: [PATCH 09/19] Update tools section and make fixes --- technical-reports/Performance-Testing.md | 31 +++++++++++++++++++----- 1 file changed, 25 insertions(+), 6 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index ee2edd4..18dc921 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -20,30 +20,48 @@ This report gives a brief overview of the profiling operations performed on TEAM TEAMMATES is one of the biggest student projects in the open source community. Currently, TEAMMATES boasts of a community comprising over 450 developers and a codebase of nearly 130LoC (as of 15th April, 2019). Maintaining such a project demands high quality standards to ensure long term survival. This means, continuously monitoring code health and product performance. As the number of developers and user base continue to grow, we need to ensure optimal performance at all times. -In this report, we propose a viable solution to perform regression tests that will help developers keep a track of the potential bottlenecks and areas of optimizations. +In this report, we propose a viable solution that will help developers monitor the potential bottlenecks and areas of optimizations. 
This will help boost the performance as the product evolves over time. ## Overview of Solution -The idea behind L&P tests is to simplify the process of understanding production performance and enable the developers to address bottlenecks before they become genuine production issues. +The idea behind L&P tests is to simplify the process of understanding production performance. This is done by simulating a heavy load on a server by creating a large number of virtual concurrent users to web server. Implementing these tests involves a few key points: -* A tool/software to help performing these tests -* A database to store the data of the profiler -* A way of generating reports to help developers understand the metrics +* A tool/software to help performing these tests. +* A method to simulate a large number of users and send requests to a target server. +* A way of generating reports to help developers understand the metrics. After carefully considering various tools, we decided to use [Apache JMeter](https://jmeter.apache.org/) to help running the performance tests. In this report we will discuss the reasons behind why we chose JMeter and a more detailed description of our implementation. ## Tools considered for Performance Testing +Some of the tools that we considered before deciding on JMeter were: + +* [Gatling](https://gatling.io/) +* [LoadRunner](https://www.guru99.com/introduction-to-hp-loadrunner-and-its-archtecture.html) +* [BlazeMeter](https://www.blazemeter.com/) + ## Reasons for using JMeter +One of the main reasons we use JMeter over the other tools was the **extensive documentation** we found online. There are a number of resources to help you get started. 
Some of which we have listed below: + +* [JMeter Tutorial for beginners](https://www.guru99.com/jmeter-tutorials.html) +* [How to use JMeter](https://www.blazemeter.com/blog/how-use-jmeter-assertions-three-easy-steps) +* [The official website](https://jmeter.apache.org/usermanual/build-web-test-plan.html) also offers a good documentation on how to get started. + +Some other reasons why we found JMeter to be useful: + +**Open Source** - JMeter is an open source software. This means that it can be downloaded free of cost. The developer can use its source code, can modify and customize it as per their requirement. + +**Ease of Integration** - It is easier to integrate JMeter into the project because of the [JMeter Java API](https://jmeter.apache.org/api/index.html). There is also a [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) if you want to make it a part of your build process. +**Roust Reporting** - JMeter can generate the effective reporting. The test result can be visualized by using Graph, Chart, and Tree View. JMeter supports different formats for reporting like text, XML, HTML and JSON. ## Current implementation of the solution -JMeter offers us a couple of ways to perform the tests. We had the choice of performing these tests with automating tools like [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) and the [JMeter Java API](https://jmeter.apache.org/api/index.html). +JMeter offers us a couple of ways to perform the tests. We had the choice of performing these tests with automating tools like jmeter-gradle plugin and the JMeter Java API. We explored both possibilities but ended up using the JMeter Java API. Some key observations we made: * The jmeter-gradle-plugin is not well maintained and does not have easy-to-find documentation. 
@@ -69,3 +87,4 @@ A more detailed overview of the tasks performed can be seen in the [Continuous P ## Future Work We need to fine-tune the L&P test parameters and set suitable thresholds for failure. These should align with the goals of the application. +Currently login takes a lot of time (compared to student profile, at least). So, we can explore the idea of using a delay after login, and testing the endpoint after that. From 36457f0aeab2afedb211bf6cdc637df92a5fb2ed Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Wed, 17 Apr 2019 01:01:58 +0800 Subject: [PATCH 10/19] Language tweaks --- technical-reports/Performance-Testing.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 18dc921..cda86e0 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -13,19 +13,18 @@ Authors: [Ronak Lakhotia](https://github.com/RonakLakhotia), [Amrut Prabhu](http ## Introduction -This report gives a brief overview of the profiling operations performed on TEAMMATES. It gives an outline of the problem and describes the reasons behind our proposed solution. +This report gives a brief overview of the profiling operations performed on TEAMMATES. In particular, it includes a detailed discussion of the Load and Performance (L&P) testing framework and justification for our solution. ## Problem -TEAMMATES is one of the biggest student projects in the open source community. Currently, TEAMMATES boasts of a community comprising over 450 developers and a codebase of nearly 130LoC (as of 15th April, 2019). +TEAMMATES is one of the biggest student projects in the open source community. As of April 2019, TEAMMATES boasts a developer community of over 450 contributors and a codebase with ~130 KLoC. Maintaining such a project demands high quality standards to ensure long term survival. 
This means, continuously monitoring code health and product performance. As the number of developers and user base continue to grow, we need to ensure optimal performance at all times. -In this report, we propose a viable solution that will help developers monitor the potential bottlenecks and areas of optimizations. -This will help boost the performance as the product evolves over time. +To do so, we need to be able to identify performance issue-prone operations with a quantitative measure so that they can be rectified. ## Overview of Solution -The idea behind L&P tests is to simplify the process of understanding production performance. This is done by simulating a heavy load on a server by creating a large number of virtual concurrent users to web server. +The idea behind L&P tests is to simplify the process of understanding production performance. The idea behind L&P tests is to simplify the process of understanding production performance and enable the developers to address bottlenecks before they become genuine production issues. Implementing these tests involves a few key points: * A tool/software to help performing these tests. @@ -80,6 +79,8 @@ A brief description of the process: * Determine the failure threshold criteria and display the summarised results for that endpoint. +* Delete the entities and data files created. + A more detailed overview of the tasks performed can be seen in the [Continuous Profiling Project page](https://github.com/teammates/teammates/projects/7). 
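The failure-threshold step in the process above might, as a rough stdlib-only sketch, look like the following. The 1000 ms mean-latency limit and 2% error-rate cap are invented example values, not TEAMMATES' actual criteria:

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of a failure-threshold check over collected sample latencies.
// The limits passed in below (1000 ms mean latency, 2% error rate) are made-up
// examples of the kind of criteria such a check would enforce.
public class ThresholdCheckSketch {

    static boolean withinThresholds(List<Long> latenciesMs, int failedSamples,
                                    double maxMeanMs, double maxErrorRate) {
        double mean = latenciesMs.stream().mapToLong(Long::longValue).average().orElse(0);
        double errorRate = (double) failedSamples / latenciesMs.size();
        return mean <= maxMeanMs && errorRate <= maxErrorRate;
    }

    public static void main(String[] args) {
        List<Long> samples = Arrays.asList(420L, 610L, 380L, 950L); // elapsed times in ms
        boolean pass = withinThresholds(samples, 0, 1000.0, 0.02);
        System.out.println("L&P test passes thresholds: " + pass);
    }
}
```

In the actual setup the samples would come from the JMeter run rather than being hard-coded, and a failed check would fail the TestNG test.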
## Findings and Recommendations From 141a8cd5b043005a57cb6cc2cc402cdc47460db6 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Wed, 17 Apr 2019 19:35:59 +0800 Subject: [PATCH 11/19] Sync with google docs --- technical-reports/Performance-Testing.md | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index cda86e0..4cf28ce 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -38,8 +38,8 @@ In this report we will discuss the reasons behind why we chose JMeter and a more Some of the tools that we considered before deciding on JMeter were: -* [Gatling](https://gatling.io/) -* [LoadRunner](https://www.guru99.com/introduction-to-hp-loadrunner-and-its-archtecture.html) +* [Gatling](https://gatling.io/) - It higher barrier to entry for potential contributors. +* [LoadRunner](https://www.guru99.com/introduction-to-hp-loadrunner-and-its-archtecture.html) - This is a license tool and cost of using it is high. LoadRunner has a lot of protocols, such as HTTP, Oracle and SAP.WEB., but we don’t need this. * [BlazeMeter](https://www.blazemeter.com/) ## Reasons for using JMeter @@ -85,7 +85,22 @@ A more detailed overview of the tasks performed can be seen in the [Continuous P ## Findings and Recommendations +Currently the performance issue-prone operations in TEAMMATES are as follows: + +* Instructor page: Enrolling students + +* Instructor page: Viewing feedback session results by instructor + +* Student page: Submitting a feedback session when the number of questions is large + +Our aim is to test these endpoints extensively and get metrics such as latency, throughput and other relevant results. +This is still a work-in-progress as we are yet to consolidate the results but our goal is to generate reports that will help the developers understand the performance of each endpoint. 
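As a rough sketch of how raw per-request samples could be consolidated into the latency and throughput figures mentioned above (this uses the simple nearest-rank percentile definition, and the sample values are invented; JMeter's own reporting computes its equivalents):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of turning raw per-request elapsed times into summary metrics.
// Uses the nearest-rank percentile definition; the sample data is illustrative.
public class MetricsSummarySketch {

    static long percentileMs(List<Long> elapsedMs, double percentile) {
        List<Long> sorted = new ArrayList<>(elapsedMs);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(percentile / 100.0 * sorted.size()); // nearest-rank
        return sorted.get(rank - 1);
    }

    static double throughputPerSecond(int sampleCount, long windowMs) {
        return sampleCount * 1000.0 / windowMs;
    }

    public static void main(String[] args) {
        List<Long> elapsed = Arrays.asList(120L, 250L, 180L, 900L, 300L); // ms per request
        System.out.println("p90 latency (ms): " + percentileMs(elapsed, 90));
        System.out.println("throughput (req/s): " + throughputPerSecond(elapsed.size(), 2000));
    }
}
```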
+ ## Future Work We need to fine-tune the L&P test parameters and set suitable thresholds for failure. These should align with the goals of the application. Currently login takes a lot of time (compared to student profile, at least). So, we can explore the idea of using a delay after login, and testing the endpoint after that. + +We can also explore elements like Timers and JSON Extractors. By synchronizing, timer JMeter spike Testing can be achieved. +Synchronizing timer blocks thread until a specific amount of threads has been blocked and then release them all together thus creating large instantaneous load. + From 1915f02b235fecbd50d99bce8c5f8e29dc8e2277 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Fri, 19 Apr 2019 01:11:41 +0800 Subject: [PATCH 12/19] Fix typos and language errors --- technical-reports/Performance-Testing.md | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 4cf28ce..8a05f0e 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -24,7 +24,7 @@ To do so, we need to be able to identify performance issue-prone operations with ## Overview of Solution -The idea behind L&P tests is to simplify the process of understanding production performance. The idea behind L&P tests is to simplify the process of understanding production performance and enable the developers to address bottlenecks before they become genuine production issues. +The idea behind L&P tests is to simplify the process of understanding production performance and enable the developers to address bottlenecks before they become genuine production issues. Implementing these tests involves a few key points: * A tool/software to help performing these tests. 
@@ -52,11 +52,11 @@ One of the main reasons we use JMeter over the other tools was the **extensive d Some other reasons why we found JMeter to be useful: -**Open Source** - JMeter is an open source software. This means that it can be downloaded free of cost. The developer can use its source code, can modify and customize it as per their requirement. +**Open Source** - JMeter is an open source software. This means that it can be downloaded free of cost. The developers can use its source code, can modify and customize it as per their requirement. **Ease of Integration** - It is easier to integrate JMeter into the project because of the [JMeter Java API](https://jmeter.apache.org/api/index.html). There is also a [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) if you want to make it a part of your build process. -**Roust Reporting** - JMeter can generate the effective reporting. The test result can be visualized by using Graph, Chart, and Tree View. JMeter supports different formats for reporting like text, XML, HTML and JSON. +**Roust Reporting** - JMeter can generate effective reporting. The test results can be visualized by using Graph, Chart, and Tree View. JMeter supports different formats for reporting like text, XML, HTML and JSON. ## Current implementation of the solution @@ -101,6 +101,4 @@ This is still a work-in-progress as we are yet to consolidate the results but ou We need to fine-tune the L&P test parameters and set suitable thresholds for failure. These should align with the goals of the application. Currently login takes a lot of time (compared to student profile, at least). So, we can explore the idea of using a delay after login, and testing the endpoint after that. -We can also explore elements like Timers and JSON Extractors. By synchronizing, timer JMeter spike Testing can be achieved. 
-Synchronizing timer blocks thread until a specific amount of threads has been blocked and then release them all together thus creating large instantaneous load.
-
+
+We can also explore elements like Timers and JSON Extractors. By using a Synchronizing Timer, JMeter **Spike Testing** can be achieved.

From e050dfa3019f65961358d03b6e267424c4252ebd Mon Sep 17 00:00:00 2001
From: Ronak Lakhotia
Date: Sun, 21 Apr 2019 18:12:45 +0800
Subject: [PATCH 13/19] Update based on review

---
 technical-reports/Performance-Testing.md | 43 ++++++++++++------------
 1 file changed, 22 insertions(+), 21 deletions(-)

diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md
index 8a05f0e..38c80bb 100644
--- a/technical-reports/Performance-Testing.md
+++ b/technical-reports/Performance-Testing.md
@@ -13,14 +13,14 @@ Authors: [Ronak Lakhotia](https://github.com/RonakLakhotia), [Amrut Prabhu](http
 
 ## Introduction
 
-This report gives a brief overview of the profiling operations performed on TEAMMATES. In particular, it includes a detailed discussion of the Load and Performance (L&P) testing framework and justification for our solution.
+This report gives a brief overview of the profiling operations performed on TEAMMATES. In particular, it includes a detailed discussion of the Load and Performance (L&P) testing framework and the process we followed.
 
 ## Problem
 
-TEAMMATES is one of the biggest student projects in the open source community. As of April 2019, TEAMMATES boasts a developer community of over 450 contributors and a codebase with ~130 KLoC.
+TEAMMATES is one of the biggest student projects in the open source community. As of April 2019, TEAMMATES boasts a codebase with ~130 KLoC.
 More importantly, it has over 350,000 users. Maintaining such a project demands high quality standards to ensure long term survival.
-This means, continuously monitoring code health and product performance.
As the number of developers and user base continue to grow, we need to ensure optimal performance at all times.
-To do so, we need to be able to identify performance issue-prone operations with a quantitative measure so that they can be rectified.
+There are many factors that can cause decelerating performance of the production software like increased number of database records, increased number of simultaneous requests to the server, and a larger number of users accessing the system at any given point.
+This means, continuously monitoring code health and product performance. To do so, we need to be able to identify performance issue-prone operations with a quantitative measure so that they can be rectified.
 
 ## Overview of Solution
 
@@ -38,46 +38,47 @@ In this report we will discuss the reasons behind why we chose JMeter and a more
 
 Some of the tools that we considered before deciding on JMeter were:
 
-* [Gatling](https://gatling.io/) - It higher barrier to entry for potential contributors.
-* [LoadRunner](https://www.guru99.com/introduction-to-hp-loadrunner-and-its-archtecture.html) - This is a license tool and cost of using it is high. LoadRunner has a lot of protocols, such as HTTP, Oracle and SAP.WEB., but we don’t need this.
-* [BlazeMeter](https://www.blazemeter.com/)
+* [Gatling](https://gatling.io/) - It has a higher barrier to entry for potential contributors.
+* [LoadRunner](https://www.guru99.com/introduction-to-hp-loadrunner-and-its-archtecture.html) - This is a licensed tool and the cost of using it is high. LoadRunner supports a lot of protocols, such as HTTP, Oracle and SAP Web, but we do not need those.
+* [BlazeMeter](https://www.blazemeter.com/) - The reports generated by BlazeMeter are basic as compared to JMeter. Also, the free version of the tool has limited functionalities.
 
 ## Reasons for using JMeter
 
-One of the main reasons we use JMeter over the other tools was the **extensive documentation** we found online.
There are a number of resources to help you get started. Some of which we have listed below: +One of the main reasons we use JMeter over the other tools was the **extensive documentation** we found online. There are a number of resources to help one to get started. Some of which we have listed below: * [JMeter Tutorial for beginners](https://www.guru99.com/jmeter-tutorials.html) * [How to use JMeter](https://www.blazemeter.com/blog/how-use-jmeter-assertions-three-easy-steps) -* [The official website](https://jmeter.apache.org/usermanual/build-web-test-plan.html) also offers a good documentation on how to get started. +* [The official website](https://jmeter.apache.org/usermanual/build-web-test-plan.html) Some other reasons why we found JMeter to be useful: **Open Source** - JMeter is an open source software. This means that it can be downloaded free of cost. The developers can use its source code, can modify and customize it as per their requirement. -**Ease of Integration** - It is easier to integrate JMeter into the project because of the [JMeter Java API](https://jmeter.apache.org/api/index.html). There is also a [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) if you want to make it a part of your build process. +**Ease of Integration** - It is easier to integrate JMeter into the project because of the [JMeter Java API](https://jmeter.apache.org/api/index.html). -**Roust Reporting** - JMeter can generate effective reporting. The test results can be visualized by using Graph, Chart, and Tree View. JMeter supports different formats for reporting like text, XML, HTML and JSON. +**Robust Reporting** - JMeter can generate effective reporting. The test results can be visualized by using Graph, Chart, and Tree View. JMeter supports different formats for reporting like text, XML, HTML and JSON. ## Current implementation of the solution JMeter offers us a couple of ways to perform the tests. 
We had the choice of performing these tests with automating tools like jmeter-gradle plugin and the JMeter Java API. We explored both possibilities but ended up using the JMeter Java API. Some key observations we made: -* The jmeter-gradle-plugin is not well maintained and does not have easy-to-find documentation. -* The existing resources are outdated and are not in sync with the latest version of JMeter. -* The JMeter Java API, on the other hand, fits well with TEAMMATES' backend testing framework. -* It is also easier to integrate it into the CI pipeline with a TestNG gradle task. -* The entire process is more coherent while allowing the same level of configuration. +* The jmeter-gradle-plugin is not well maintained and does not have easy-to-find documentation. The existing resources are outdated and are not in sync with the latest version of JMeter. +* The JMeter Java API, on the other hand, fits well with TEAMMATES' backend testing framework. It is also easier to integrate it into the CI pipeline with a TestNG gradle task. +* With the JMeter Java API, the entire process is more coherent while allowing the same level of configuration. A brief description of the process: -* Create a test json and csv file for the test. +* Determine the failure threshold criteria according to which endpoint is being tested. + +* Create a test JSON and CSV file for the test. + * The purpose of the JSON and CSV files are to store data that is needed to test the endpoints. With the data stored in these files we can parameterize HTTP requests and simulate multiple users accessing the endpoint being tested. * Since the data files are large (at least 5 times the size of test data used for E2E tests), they are not committed to the repo. This way, we can easily change the scale of the test without having to rewrite the code for generating the data. * Create the JMeter test and run. - * Each test configures the test plan, similar to how it is done in the GUI. 
We also considered using a Builder pattern, but it didn’t make complete sense to do so (since we can’t say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests. + * Each test configures the test plan, similar to how it is done in the GUI. We also considered using a Builder pattern, but it did not make complete sense to do so (since we can not say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests. -* Determine the failure threshold criteria and display the summarised results for that endpoint. +* Display the summarised results for that endpoint. * Delete the entities and data files created. @@ -85,7 +86,7 @@ A more detailed overview of the tasks performed can be seen in the [Continuous P ## Findings and Recommendations -Currently the performance issue-prone operations in TEAMMATES are as follows: +Currently, the performance issue-prone operations in TEAMMATES are as follows: * Instructor page: Enrolling students @@ -99,6 +100,6 @@ This is still a work-in-progress as we are yet to consolidate the results but ou ## Future Work We need to fine-tune the L&P test parameters and set suitable thresholds for failure. These should align with the goals of the application. -Currently login takes a lot of time (compared to student profile, at least). So, we can explore the idea of using a delay after login, and testing the endpoint after that. +Currently logging in takes a lot of time (compared to student profile, at least). So, we can explore the idea of using a delay after logging in, and testing the endpoint after that. We can also explore elements like Timers and JSON Extractors. By synchronizing, Timer JMeter **Spike Testing** can be achieved. 
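The Synchronizing Timer behaviour described in the Future Work text (block the virtual users until a given number of them are waiting, then release them all at once to create an instantaneous load spike) can be sketched with plain `java.util.concurrent` primitives. This is an illustrative stand-in, not JMeter code: the class name, the user count, and the counter standing in for an HTTP request are all invented for the example.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class SpikeSketch {

    // Block until `users` virtual users are all waiting, then release them
    // together, mimicking the instantaneous load a Synchronizing Timer creates.
    public static int runSpike(int users) throws InterruptedException {
        CountDownLatch allReady = new CountDownLatch(users);
        CountDownLatch fire = new CountDownLatch(1);
        AtomicInteger fired = new AtomicInteger();
        Thread[] workers = new Thread[users];

        for (int i = 0; i < users; i++) {
            workers[i] = new Thread(() -> {
                allReady.countDown();       // signal that this user is in position
                try {
                    fire.await();           // block until every user is ready
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                fired.incrementAndGet();    // stand-in for the real HTTP request
            });
            workers[i].start();
        }

        allReady.await();  // wait for the last user to arrive
        fire.countDown();  // release the whole group at once
        for (Thread worker : workers) {
            worker.join();
        }
        return fired.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runSpike(50) + " simulated requests fired together");
    }
}
```

In an actual JMeter test plan, the Synchronizing Timer element plays the role of the two latches here.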
From 08d0e98eb268b8c898b263dc118966e93f0cbebd Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Sun, 21 Apr 2019 18:17:51 +0800 Subject: [PATCH 14/19] Minor nits --- technical-reports/Performance-Testing.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 38c80bb..e50110f 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -69,14 +69,14 @@ We explored both possibilities but ended up using the JMeter Java API. Some key A brief description of the process: -* Determine the failure threshold criteria according to which endpoint is being tested. +* Determine the failure threshold criteria for the test according to which endpoint is being tested. * Create a test JSON and CSV file for the test. * The purpose of the JSON and CSV files are to store data that is needed to test the endpoints. With the data stored in these files we can parameterize HTTP requests and simulate multiple users accessing the endpoint being tested. * Since the data files are large (at least 5 times the size of test data used for E2E tests), they are not committed to the repo. This way, we can easily change the scale of the test without having to rewrite the code for generating the data. * Create the JMeter test and run. - * Each test configures the test plan, similar to how it is done in the GUI. We also considered using a Builder pattern, but it did not make complete sense to do so (since we can not say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests. + * Each test configures the test plan, similar to how it is done in the GUI. 
We also considered using a Builder pattern, but it did not make complete sense to do so (since we cannot say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests.
 
 * Display the summarised results for that endpoint.
 
@@ -100,6 +100,6 @@
 ## Future Work
 
 We need to fine-tune the L&P test parameters and set suitable thresholds for failure. These should align with the goals of the application.
-Currently logging in takes a lot of time (compared to student profile, at least). So, we can explore the idea of using a delay after logging in, and testing the endpoint after that.
+Currently, logging in takes a lot of time (compared to student profile, at least). So, we can explore the idea of using a delay after logging in, and testing the endpoint after that.
 
 We can also explore elements like Timers and JSON Extractors. By using a Synchronizing Timer, JMeter **Spike Testing** can be achieved.

From 5160718e88046bd9b9cee9b00037f19df93527dd Mon Sep 17 00:00:00 2001
From: Ronak Lakhotia
Date: Mon, 22 Apr 2019 00:26:04 +0800
Subject: [PATCH 15/19] Update

---
 technical-reports/Performance-Testing.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md
index e50110f..2278c06 100644
--- a/technical-reports/Performance-Testing.md
+++ b/technical-reports/Performance-Testing.md
@@ -19,8 +19,8 @@ This report gives a brief overview of the profiling operations performed on TEAM
 TEAMMATES is one of the biggest student projects in the open source community. As of April 2019, TEAMMATES boasts a codebase with ~130 KLoC.
 More importantly, it has over 350,000 users. Maintaining such a project demands high quality standards to ensure long term survival.
-There are many factors that can cause decelerating performance of the production software like increased number of database records, increased number of simultaneous requests to the server, and a larger number of users accessing the system at any given point. -This means, continuously monitoring code health and product performance. To do so, we need to be able to identify performance issue-prone operations with a quantitative measure so that they can be rectified. +There are many factors that can cause degrading performance of the production software like increased number of database records, increased number of simultaneous requests to the server, and a larger number of users accessing the system at any given point. +This means, continuously monitoring code health and product performance in order to ensure optimal performance of the software at all times. To do so, we need to be able to identify performance issue-prone operations with a quantitative measure so that they can be rectified. ## Overview of Solution @@ -60,7 +60,7 @@ Some other reasons why we found JMeter to be useful: ## Current implementation of the solution -JMeter offers us a couple of ways to perform the tests. We had the choice of performing these tests with automating tools like jmeter-gradle plugin and the JMeter Java API. +JMeter offers us a couple of ways to perform the tests. We had the choice of performing these tests with automating tools like [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) and the JMeter Java API. We explored both possibilities but ended up using the JMeter Java API. Some key observations we made: * The jmeter-gradle-plugin is not well maintained and does not have easy-to-find documentation. The existing resources are outdated and are not in sync with the latest version of JMeter. @@ -71,12 +71,12 @@ A brief description of the process: * Determine the failure threshold criteria for the test according to which endpoint is being tested. 
-* Create a test JSON and CSV file for the test. - * The purpose of the JSON and CSV files are to store data that is needed to test the endpoints. With the data stored in these files we can parameterize HTTP requests and simulate multiple users accessing the endpoint being tested. +* Create the test files for the endpoint. + * The purpose of the test files is to store data that is needed to test the endpoints. With the data stored in these files we can parameterize HTTP requests and simulate multiple users accessing the endpoint being tested. * Since the data files are large (at least 5 times the size of test data used for E2E tests), they are not committed to the repo. This way, we can easily change the scale of the test without having to rewrite the code for generating the data. * Create the JMeter test and run. - * Each test configures the test plan, similar to how it is done in the GUI. We also considered using a Builder pattern, but it did not make complete sense to do so (since we cannot say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests. + * Each test file configures the test plan, similar to how it is done in the GUI. We also considered using a Builder pattern, but it did not make complete sense to do so (since we cannot say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests. * Display the summarised results for that endpoint. 
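The test-data step revised above (generate large parameterization files that are kept out of the repo, then feed them to the HTTP requests) can be sketched as follows. The column names, file name, and row values are invented for illustration; the real L&P tests would use whatever parameters the endpoint under test expects.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class LnPDataSketch {

    // Build CSV rows for `numStudents` virtual users; JMeter's CSV Data Set
    // Config element can then hand one row to each thread, parameterizing the
    // HTTP requests so every thread simulates a distinct user.
    public static List<String> generateCsv(int numStudents) {
        List<String> lines = new ArrayList<>();
        lines.add("email,courseId");  // header row; columns are illustrative
        for (int i = 0; i < numStudents; i++) {
            lines.add("student" + i + "@example.com,lnp.demo.course");
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Written to a temp location rather than the repo, mirroring the point
        // that the generated data files are not committed.
        Path out = Files.createTempFile("lnp-students", ".csv");
        Files.write(out, generateCsv(100));
        System.out.println("Wrote " + out);
    }
}
```

Because the rows are generated rather than checked in, scaling the test up or down is a one-argument change, which is the point made above about not rewriting the data-generation code.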
From 168547adc512418dad39576dbacd0c1b0229e2c8 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Mon, 22 Apr 2019 00:42:54 +0800 Subject: [PATCH 16/19] Update changes --- technical-reports/Performance-Testing.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 2278c06..3fa54f6 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -20,7 +20,7 @@ This report gives a brief overview of the profiling operations performed on TEAM TEAMMATES is one of the biggest student projects in the open source community. As of April 2019, TEAMMATES boasts a codebase with ~130 KLoC. More importantly, it has over 350,000 users. Maintaining such a project demands high quality standards to ensure long term survival. There are many factors that can cause degrading performance of the production software like increased number of database records, increased number of simultaneous requests to the server, and a larger number of users accessing the system at any given point. -This means, continuously monitoring code health and product performance in order to ensure optimal performance of the software at all times. To do so, we need to be able to identify performance issue-prone operations with a quantitative measure so that they can be rectified. +It is important to continuously monitor code health and product performance in order to ensure optimal performance of the software at all times. To do so, we need to be able to identify performance issue-prone operations with a quantitative measure so that they can be rectified. ## Overview of Solution @@ -44,7 +44,7 @@ Some of the tools that we considered before deciding on JMeter were: ## Reasons for using JMeter -One of the main reasons we use JMeter over the other tools was the **extensive documentation** we found online. There are a number of resources to help one to get started. 
Some of which we have listed below:
+One of the main reasons we chose JMeter over the other tools was the **extensive documentation** we found online. There are a number of resources to help one to get started, some of which are:
 
 * [JMeter Tutorial for beginners](https://www.guru99.com/jmeter-tutorials.html)
 * [How to use JMeter](https://www.blazemeter.com/blog/how-use-jmeter-assertions-three-easy-steps)
@@ -60,7 +60,7 @@ Some other reasons why we found JMeter to be useful:
 
 ## Current implementation of the solution
 
-JMeter offers us a couple of ways to perform the tests. We had the choice of performing these tests with automating tools like [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) and the JMeter Java API.
+JMeter offers us a couple of ways to perform the performance tests. We had the choice of performing these tests with automating tools like [jmeter-gradle plugin](https://github.com/jmeter-gradle-plugin/jmeter-gradle-plugin) and the JMeter Java API.
 We explored both possibilities but ended up using the JMeter Java API. Some key observations we made:
 
@@ -76,7 +76,7 @@ A brief description of the process:
 
 * Since the data files are large (at least 5 times the size of test data used for E2E tests), they are not committed to the repo. This way, we can easily change the scale of the test without having to rewrite the code for generating the data.
 
 * Create the JMeter test and run.
- * Each test file configures the test plan, similar to how it is done in the GUI.
Instead, we have created abstractions and default configurations which make it easier to create new tests. + * Each test file configures the test plan, similar to how it is done in the GUI. We also considered using Builder pattern, but it did not make complete sense to do so (since we cannot say for sure what the components of the class are, and what order they should be in). Instead, we have created abstractions and default configurations which make it easier to create new tests. * Display the summarised results for that endpoint. @@ -94,7 +94,7 @@ Currently, the performance issue-prone operations in TEAMMATES are as follows: * Student page: Submitting a feedback session when the number of questions is large -Our aim is to test these endpoints extensively and get metrics such as latency, throughput and other relevant results. +Our aim is to test the performance of these endpoints extensively and get metrics such as latency, throughput and other relevant results. This is still a work-in-progress as we are yet to consolidate the results but our goal is to generate reports that will help the developers understand the performance of each endpoint. 
## Future Work From 62a59f12a24a8a3530ad960f5cf238ff746de8a6 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Mon, 22 Apr 2019 02:46:08 +0800 Subject: [PATCH 17/19] Change headers --- technical-reports/Performance-Testing.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 3fa54f6..582fde0 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -3,8 +3,8 @@ Authors: [Ronak Lakhotia](https://github.com/RonakLakhotia), [Amrut Prabhu](https://github.com/amrut-prabhu) and [Jacob Li PengCheng](https://github.com/jacoblipech) * [Introduction](#Introduction) -* [Problem](#Problem) -* [Overview of the Proposed Solution](#Overview-of-Solution) +* [Why do we need Performance Testing?](#Why-do-we-need-Performance-Testing?) +* [Overview of the Proposed Solution](#Overview-of-Proposed-Solution) * [Tools considered for Performance Testing](#Tools-considered-for-Performance-Testing) * [Reasons for using JMeter](#Reasons-for-using-JMeter) * [Current implementation of the solution](#current-implementation-of-the-solution) @@ -15,14 +15,14 @@ Authors: [Ronak Lakhotia](https://github.com/RonakLakhotia), [Amrut Prabhu](http This report gives a brief overview of the profiling operations performed on TEAMMATES. In particular, it includes a detailed discussion of the Load and Performance (L&P) testing framework and the process we followed. -## Problem +## Why do we need Performance Testing? TEAMMATES is one of the biggest student projects in the open source community. As of April 2019, TEAMMATES boasts a codebase with ~130 KLoC. More importantly, it has over 350,000 users. Maintaining such a project demands high quality standards to ensure long term survival. 
There are many factors that can cause degrading performance of the production software like increased number of database records, increased number of simultaneous requests to the server, and a larger number of users accessing the system at any given point. It is important to continuously monitor code health and product performance in order to ensure optimal performance of the software at all times. To do so, we need to be able to identify performance issue-prone operations with a quantitative measure so that they can be rectified. -## Overview of Solution +## Overview of Proposed Solution The idea behind L&P tests is to simplify the process of understanding production performance and enable the developers to address bottlenecks before they become genuine production issues. Implementing these tests involves a few key points: From b5164f587f9f7fe3cea0b689ad636eb92b687246 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Mon, 22 Apr 2019 02:49:35 +0800 Subject: [PATCH 18/19] Link report --- technical-reports/README.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/technical-reports/README.md b/technical-reports/README.md index fc6febd..48b89a2 100644 --- a/technical-reports/README.md +++ b/technical-reports/README.md @@ -3,13 +3,14 @@ > These are the in-depth descriptions about various aspects of the project. > They were written to be relevant only at the point of writing, thus many of its contents may be outdated now. 
-* [Measuring the scalability and performance of TEAMMATES](Measuring-scalability-and-performance.md) - by Samson Tan Min Rong (Apr 2017) +* [Performance Testing in TEAMMATES](Performance-Testing.md) (April, 2019) - by Ronak Lakhotia, Amrut Prabhu and Jacob Li PengCheng +* [Measuring the scalability and performance of TEAMMATES](Measuring-scalability-and-performance.md) (Deprecated) - by Samson Tan Min Rong (Apr 2017) The following reports were authored before 2015: * [An Analysis of Question Types](https://docs.google.com/document/d/1SH8VkaUH_kv3bT3c8AKiPDJS2Y-XhzZvNb4umavmfCE/pub?embedded=true) - by Low Weilin -* [Measuring Scalability and Performance](https://docs.google.com/document/pub?id=1C7fn11fKsgGUx0AT_nH9ZQBi3G7o5zpYqwIIAC40CxU&embedded=true) - by James Dam Tuan Long -* [Improving Scalability and Performance](https://docs.google.com/document/pub?id=1v_RYw_Hu1-TExVi0A7d3kxX0CTgFaUtfV1_qYXBhwWs&embedded=true) - by James Dam Tuan Long +* [Measuring Scalability and Performance](https://docs.google.com/document/pub?id=1C7fn11fKsgGUx0AT_nH9ZQBi3G7o5zpYqwIIAC40CxU&embedded=true) (Deprecated) - by James Dam Tuan Long +* [Improving Scalability and Performance](https://docs.google.com/document/pub?id=1v_RYw_Hu1-TExVi0A7d3kxX0CTgFaUtfV1_qYXBhwWs&embedded=true) (Deprecated) - by James Dam Tuan Long * [Data Backup and Disaster Recovery](https://docs.google.com/document/d/1ECDOy2JUXKLz8t44lXj2t0nvqDtJCjyHM7_HA8DV1fA/pub?embedded=true) - by Lee Shaw Wei Shawn * [Dealing with Eventual Consistency](https://docs.google.com/document/d/11HUDa-PlzEEk4-liWlsjC9UbicbfYO1hJMxx_cCEEVE/pub?embedded=true) - by Lee Shaw Wei Shawn * [Dealing with Intermittent Null Pointer Exceptions](https://docs.google.com/document/d/1A_QtW8uDFGeeu2KOiWwyuvgm7Jm9pS7nOvmy9B42v_I/pub?embedded=true) - by Lee Shaw Wei Shawn From e979555d19aaed4f1c27c737a0b259ec11f05270 Mon Sep 17 00:00:00 2001 From: Ronak Lakhotia Date: Mon, 22 Apr 2019 03:27:13 +0800 Subject: [PATCH 19/19] Minor fixes --- 
technical-reports/Performance-Testing.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/technical-reports/Performance-Testing.md b/technical-reports/Performance-Testing.md index 582fde0..8ccc76b 100644 --- a/technical-reports/Performance-Testing.md +++ b/technical-reports/Performance-Testing.md @@ -3,8 +3,8 @@ Authors: [Ronak Lakhotia](https://github.com/RonakLakhotia), [Amrut Prabhu](https://github.com/amrut-prabhu) and [Jacob Li PengCheng](https://github.com/jacoblipech) * [Introduction](#Introduction) -* [Why do we need Performance Testing?](#Why-do-we-need-Performance-Testing?) -* [Overview of the Proposed Solution](#Overview-of-Proposed-Solution) +* [Why do we need Performance Testing?](#Why-do-we-need-Performance-Testing) +* [Overview of Proposed Solution](#Overview-of-Proposed-Solution) * [Tools considered for Performance Testing](#Tools-considered-for-Performance-Testing) * [Reasons for using JMeter](#Reasons-for-using-JMeter) * [Current implementation of the solution](#current-implementation-of-the-solution)
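The series leaves one item open in Future Work: setting "suitable thresholds for failure" for the L&P runs. One plausible shape for such a check is sketched below using the nearest-rank percentile method. The sample latencies and the 2000 ms threshold are invented for the example, not measurements from TEAMMATES.

```java
import java.util.Arrays;

public class ThresholdSketch {

    // Nearest-rank percentile: the smallest sample value such that at least
    // p percent of the samples are less than or equal to it.
    public static long percentile(long[] latenciesMs, double p) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    // An L&P run passes if its 90th-percentile latency stays under the
    // per-endpoint threshold chosen for that test.
    public static boolean passes(long[] latenciesMs, long thresholdMs) {
        return percentile(latenciesMs, 90.0) <= thresholdMs;
    }

    public static void main(String[] args) {
        long[] sampleMs = {120, 180, 200, 250, 300, 450, 500, 800, 1200, 2500};
        System.out.println("p90 = " + percentile(sampleMs, 90.0)
                + " ms, passes 2000 ms threshold: " + passes(sampleMs, 2000));
    }
}
```

A percentile-based criterion like this tolerates a few slow outliers while still failing the build when the bulk of requests regress, which fits the report's goal of catching bottlenecks before they become production issues.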