Load Test Integrated » History » Version 6
Chooi-Mey, 12/03/2010 09:54 AM
{{toc}}

h1. Load Test Result on the fully integrated system

The purpose of this test is to examine the performance of the fully integrated system.

The test was run with the following set of modules:

|_.Name|_.Version|
|ecosway-adaptor|0.4.4-SNAPSHOT|
|ecwyadaptor|0.0.1-SNAPSHOT|
|ecwyconnector|1.0.1-SNAPSHOT|
|jms-adaptor|0.6.2-SNAPSHOT|
|soap-adaptor|0.5.4-SNAPSHOT|
|us-beans|1.1-SNAPSHOT|
|ws-mimic|2.4.2-SNAPSHOT|
|ws-status|1.1.1-SNAPSHOT|
|xml-mapper|1.4-SNAPSHOT|
|build-ecosway-adaptor|1.0-SNAPSHOT|

> The logging level for all modules is set to <code>INFO</code>.

Test Platform:
* 4-core CPU
* 5GB RAM
* 8 x 72GB 10K RPM disks

h2. Test Areas

This test covers the following areas:
* from *ecosway-adaptor* to *web service mimic*
* from *web service mimic* to *dot com web service*
* from *web service mimic* to the *synchronization status listener*

h2. Test Result

The result consists of 2 parts:
* Shopper Registration
* BO or VIP Shopper Registration

h3. Shopper Registration

* from *ecosway-adaptor* to *web service mimic*:

|_.Messages per Run|_.Time Taken (ms)|
|200 messages|11877|
|200 messages|13039|
|200 messages|15345|
|200 messages|14336|
|200 messages|11471|
|200 messages|16529|
|200 messages|13792|
|200 messages|10773|
|200 messages|14270|
|200 messages|12648|
|*Total*|134080|
|*Average*|13408|

* from *web service mimic* to *dot com web service*:
** average time taken: *99.285* ms
* from *web service mimic* to the *synchronization status listener*:
** average time taken: *79.055* ms
* Summary:

|_.Name|_.Description|
|Average time to process 1 message|67.04 ms (134080 ms / 2000 messages) + 99.285 ms + 79.055 ms = *245.38* ms|
|Total message size|*742507* bytes|
|Average message size|*3712.535* bytes|
|Max number of messages|*HornetQ's maximum message size in bytes* / *average message size* = 10485760 / 3712.535 = *2824* messages|

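The Shopper Registration summary figures can be reproduced with a few lines of arithmetic (a quick sketch; the 10485760-byte broker message size limit is the value quoted in the summary):

```python
# Reproduce the Shopper Registration summary figures from the raw results.
run_times_ms = [11877, 13039, 15345, 14336, 11471, 16529, 13792, 10773,
                14270, 12648]            # 10 runs of 200 messages each
total_ms = sum(run_times_ms)             # 134080 ms in total
per_message_ms = total_ms / (10 * 200)   # 67.04 ms per message

# End-to-end time per message adds the two downstream hops.
end_to_end_ms = per_message_ms + 99.285 + 79.055   # 245.38 ms

# How many messages fit under the broker's 10485760-byte message size limit.
avg_message_bytes = 742507 / 200                   # 3712.535 bytes
max_messages = int(10485760 / avg_message_bytes)   # 2824 messages

print(total_ms, per_message_ms, max_messages)
```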

h3. BO or VIP Shopper Registration

* from *ecosway-adaptor* to *web service mimic*:

|_.Messages per Run|_.Time Taken (ms)|
|40 messages|5314|
|40 messages|1877|
|40 messages|2246|
|40 messages|2582|
|40 messages|5327|
|40 messages|2415|
|40 messages|2323|
|40 messages|5386|
|40 messages|1854|
|40 messages|5823|
|*Total*|35147|
|*Average*|3514.7|

* from *web service mimic* to *dot com web service*:
** average time taken: *152.8* ms
* from *web service mimic* to the *synchronization status listener*:
** average time taken: *54.75* ms
* Summary:

|_.Name|_.Description|
|Average time to process 1 message|87.8675 ms (35147 ms / 400 messages) + 152.8 ms + 54.75 ms = *295.4175* ms|
|Total message size|*374995* bytes|
|Average message size|*9374.875* bytes|
|Max number of messages|*HornetQ's maximum message size in bytes* / *average message size* = 10485760 / 9374.875 = *1118* messages|

* Feedback by CM (03 Nov 2010):
** Based on the discussion: when the queue pool is full (during peak time) and cannot accept any new incoming requests, the request should not be left waiting indefinitely, since that would hurt the frontend application's response time. Instead, either drop the request packet or set a request timeout. Because the request has already been stored in the Event Log table, it can be picked up and re-sent when the retry daemon runs later.
** Please test the drop-packet / request-timeout approach.
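The drop-packet / request-timeout idea above can be sketched as follows. This is an illustration only: the function name, queue capacity, and timeout value are hypothetical, and in the real system the bounded queue would be the JMS send path in front of the broker; the sketch just shows the fail-fast behaviour instead of blocking forever.

```python
import queue

def enqueue_request(q: queue.Queue, request, timeout_s: float = 0.5) -> bool:
    """Try to enqueue; if the queue stays full past the timeout, drop it.

    Dropping is safe here because the request was already persisted in the
    Event Log table, so the retry daemon can pick it up and re-send later.
    """
    try:
        q.put(request, timeout=timeout_s)  # block at most timeout_s
        return True
    except queue.Full:
        return False  # dropped: left to the retry daemon

# Usage: a pool of capacity 2 fills up, so the third request is dropped
# after the timeout instead of making the frontend wait indefinitely.
pool = queue.Queue(maxsize=2)
results = [enqueue_request(pool, f"req-{i}", timeout_s=0.1) for i in range(3)]
print(results)  # first two accepted, third dropped
```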