Performance Data Set
Agenda: To come up with a plan for doing performance testing on Chandler with large loads of data.
Attendees: Katie, Ted & Aparna
1. How do we want to do the performance testing on Chandler?
Performance testing on Chandler will be done at 2 levels:
- The repository storage format and the repository core schema are stable enough that we can use bigger loads of data to test performance.
- We need a canonical data set that can be loaded repeatedly into a fresh repository before running the tests.
- This data will be used to run headless tests against the repository without going through the application.
- Ted has already checked in some stress tests, but we need to add more Python tests to the current set for better coverage.
- When the application gets stable, we will use Silk Test to automate user actions and use Silk Test's test manager to record the timings for the various bulk actions.
- By then, if we have a set of CPIA-level unit tests, those would also be used to get some performance numbers.
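As an illustration of what a headless timing test might look like, here is a minimal Python sketch. The `timed` helper and the `load_items` stand-in are hypothetical; a real test would create and commit items in a fresh Chandler repository rather than build a plain list:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print its wall-clock time, and return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f}s")
    return result

def load_items(n):
    # Stand-in for a repository bulk load; a headless performance test
    # would commit n items to a freshly created repository here instead.
    return [{"title": f"event {i}"} for i in range(n)]

items = timed("load 10k items", load_items, 10_000)
```

Wrapping each bulk action this way gives comparable numbers across runs without involving the application UI.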
2. What kind of data do we need for these tests?
We could use one or more of the following options:
- A calendar with lots of data. To start with, we could use the office calendar or Mitch's personal calendar
- Snapshot of the office calendar that is mocked up to add more edge cases
- Multiple data sets per persona
- Random data set generated by a script (harder to do validation on such data); the generator could be added as an option under the Test menu.
- Real-life data created in a spreadsheet and loaded into the repository using Python scripts.
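The last option could start from a CSV export of the spreadsheet, read with Python's standard `csv` module. A minimal sketch, assuming made-up column names and sample rows; a real loader would create calendar items in the repository instead of printing:

```python
import csv
import io

# Hypothetical CSV export of the spreadsheet; the columns (title,
# start, duration) are assumptions, not a defined Chandler schema.
sample = io.StringIO(
    "title,start,duration\n"
    "Staff meeting,2004-05-03 10:00,60\n"
    "1:1 with Mitch,2004-05-03 14:00,30\n"
)

events = list(csv.DictReader(sample))
for row in events:
    # A real loader would construct a calendar event here and commit
    # it to the repository via the Python APIs.
    print(row["title"], row["start"])
```

Keeping the source data in a spreadsheet makes it easy for non-programmers to add edge cases, while the loader script stays unchanged.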
3. What do we already have?
- Ted has checked in a tar file of data: a huge directory full of RSS feeds. Ted's scripts look for this directory, load the feeds into the repository, copy the repository, and then run tests on the copy. -- Correction: Ted has checked in his tests but not the data files, because that amounts to 300 MB of data people would be checking out.
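The load-once, copy-per-test pattern described above could be sketched like this; all directory names and file contents here are placeholders, not the actual layout Ted's scripts use:

```python
import os
import shutil
import tempfile

def make_baseline(base_dir):
    """Simulate building a repository once from the feed data."""
    repo = os.path.join(base_dir, "repo")
    os.makedirs(repo)
    with open(os.path.join(repo, "data.db"), "w") as f:
        f.write("loaded feeds")  # stands in for the expensive load step
    return repo

def fresh_copy(baseline, base_dir, name):
    """Each test run gets its own copy, so the baseline stays pristine."""
    copy = os.path.join(base_dir, name)
    shutil.copytree(baseline, copy)
    return copy

work = tempfile.mkdtemp()
baseline = make_baseline(work)
test_repo = fresh_copy(baseline, work, "run1")
```

Paying the load cost once and copying the result keeps each test repeatable without reloading 300 MB of feeds per run.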
4. What are the action items from here?
- Set up a process to run a Tinderbox build on a WinXP machine that builds Chandler and then runs the performance tests on it. This is a short-term solution; long term we want to set this up on all the supported platforms. - Aparna to work with Chris to identify a WinXP box for this purpose.
- Run Ted's stress test scripts and validate that they still run with the current Chandler code base - Ted
- Meet with Sheila and come up with a list of scenarios for end-user tests - Katie and Aparna
- Run some primitive tests after loading the office calendar and evaluate the performance - Aparna
- Come up with a set of realistic data in Excel, and Ted will figure out a way to load it into the Chandler repository using Python libraries - we still need to define what this realistic data is.
- When the app is more stable, develop scripts in Silk Test to test and measure the timings of various actions - Aparna
- Possibly get an intern to help with overall performance testing of Chandler.