Advanced Computer System Exam
Haining Tong (znr113)
January 2018
Contents

1 Scenarios - Data Processing
  1.1 State Algorithm
  1.2 Count I/O Cost
2 Scenarios - Replication
3 Scenarios - ARIES Recovery
  3.1 undonextLSN
  3.2 Transaction table and Dirty page table
  3.3 Additional Log
  3.4
    3.4.1 a
    3.4.2 b
4 Programming Task - Question 1: High-Level Design Decisions, Modularity, Fault-Tolerance
5 Programming Task - Question 2: Before-or-After Atomicity of Cart
6 Programming Task - Question 3: Before-or-After Atomicity of Supermarket
7 Programming Task - Question 4: Testing
  7.1 Basic introduction to the tests
  7.2 Before-or-after atomicity tests
  7.3 RPC and fail-soft tests
8 Programming Task - Question 5: Experiments
9 References
1 Scenarios – Data Processing
1.1 State Algorithm
Our task can be split into three parts: find all the users who have rented small cars, get all the user IDs, and finally take the difference of these two sets. The result is the set of users who have only rented big cars. My solution uses the tournament sort (heapsort-based) algorithm, and I define the unit of the input and output buffers to be the size of one record of the corresponding table. The steps are as follows (assume the table holding car data is named "car table" and the table holding rental data is named "rental table"):
• First: Use tournament sort to sort the records in car table by car id, producing a sorted car table.
• Second: Create an empty table "small table" on disk to hold all the users that have ever rented small cars.
• Third: Project rental table into a user table (eliminating duplicate userIds) and insert the relevant userIds into small table. In detail: use tournament sort to sort the records in rental table by user id, with the following change relative to normal tournament sort. In Phase 1, every time before a record in the output buffer is written to disk, we look up the size category of the car in this record using the sorted car table from the first step. If the car is big, the record is written to disk as usual; if it is small, then besides writing the record to disk we also insert the user id into the small table created in step 2. The question states that our server's memory is larger than the square root of any table but insufficient to hold any one of them completely, so there will be only one pass in Phase 2. Before the records are finally written back to the "user table" on disk in Phase 2, we make sure to project out records whose userId has already been written, so that only one entry per userId remains. After this step we have a sorted user table and a small table holding all users who have ever rented small cars.
• Fourth: Use tournament sort to sort the records in small table.
• Fifth: Compute the difference of user table and small table: user table \ small table. After this step, user table holds exactly the users who have only rented big cars, which is our target set.
The total workload is the cost of three sorts, plus creating and inserting records into a table, plus the difference of two sorted tables. Because user table and small table are both sorted, the difference operation costs little I/O. The sorting does cost some I/O, but it lets us avoid a nested loop when taking the difference, so I consider my algorithm both correct and efficient.
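To make the run-generation step concrete, here is a minimal sketch of replacement selection (the run-forming phase of tournament sort) over plain integer keys. The class and method names are illustrative, not part of my hand-in; with a heap of B - 2 records, the runs produced average 2(B - 2) records.

import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class ReplacementSelection {

    public static List<List<Integer>> generateRuns(List<Integer> input, int heapSize) {
        PriorityQueue<Integer> current = new PriorityQueue<>(); // eligible for the current run
        PriorityQueue<Integer> next = new PriorityQueue<>();    // deferred to the next run
        List<List<Integer>> runs = new ArrayList<>();
        List<Integer> run = new ArrayList<>();
        int pos = 0;

        // Fill the heap from the (simulated) input buffer.
        while (pos < input.size() && current.size() < heapSize) {
            current.add(input.get(pos++));
        }
        while (!current.isEmpty()) {
            int smallest = current.poll();
            run.add(smallest); // write to the (simulated) output buffer
            if (pos < input.size()) {
                int incoming = input.get(pos++);
                // A record smaller than the last output cannot extend this
                // run, so it is held back for the next run.
                if (incoming >= smallest) {
                    current.add(incoming);
                } else {
                    next.add(incoming);
                }
            }
            if (current.isEmpty()) { // current run exhausted; start the next one
                runs.add(run);
                run = new ArrayList<>();
                current = next;
                next = new PriorityQueue<>();
            }
        }
        if (!run.isEmpty()) {
            runs.add(run);
        }
        return runs;
    }
}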
Compared to normal external merge sort, I prefer tournament sort because it normally handles the data more efficiently: the runs produced in Phase 1 are larger on average, which decreases the number of passes in Phase 2. (In our question, however, the given memory size, "You are provided with one server with memory larger than the square root of any table, but insufficient to hold any one of them completely", already forces a single merge pass, so tournament sort does not in fact have much advantage over normal merge sort here.)
1.2 Count I/O Cost
Assume the size of car table is N and the size of rental table is M, and that, excluding overhead for the system and the algorithm, memory can hold B records. The cost of each step is then:
• First:
  – Cost of Phase 1 = 2N.
    As I choose to use heapsort, the average length of a run we get in Phase 1 is 2(B - 2). If the number of passes in Phase 2 is P, then $B \cdot (2(B-2))^P \geq N$, so
    $P = \left\lceil \log_{2(B-2)} \lceil N/B \rceil \right\rceil$
  – Cost of each pass = 2N.
  – Cost of Phase 2 = $2N \cdot \left\lceil \log_{2(B-2)} \lceil N/B \rceil \right\rceil$.

  So the total cost is $2N \cdot (\lceil \log_{2(B-2)} \lceil N/B \rceil \rceil + 1)$. But according to the question, "You are provided with one server with memory larger than the square root of any table, but insufficient to hold any one of them completely". Using the result from our lecture slides (lecture-12, p. 31), our situation satisfies $\sqrt{N} < B < N$, which means there is only one pass in Phase 2, so the I/O cost of this step is in fact $2N \cdot (1 + 1) = 4N$.
• Second: Because we only create an empty table, the I/O cost is 1.
• Third:
  – The cost to get a sorted user table by projecting rental table is similar to the first step: $2M \cdot (\lceil \log_{2(B-2)} \lceil M/B \rceil \rceil + 1)$.
  – The cost to get small table: assume that in the average case half of the records are for small cars, i.e., M/2 records (this will be used when counting the cost of the Fourth step). Because what we do is check the index of car table on disk for each of the M records, the cost is M.

  So the total cost is $2M \cdot (\lceil \log_{2(B-2)} \lceil M/B \rceil \rceil + 1) + M = 2M \cdot (\lceil \log_{2(B-2)} \lceil M/B \rceil \rceil + 1.5)$. But as in the first step, the question gives $\sqrt{M} < B < M$, so there is also only one pass in Phase 2, and the I/O cost of this step is in fact $2M \cdot (1 + 1.5) = 5M$.
• Fourth: Using the assumption from the Third step, the total amount of data in small table is M/2, so the cost of this step is similar to the first step: $2 \cdot (M/2) \cdot (\lceil \log_{2(B-2)} \lceil (M/2)/B \rceil \rceil + 1) = M \cdot (\lceil \log_{2(B-2)} \lceil M/2B \rceil \rceil + 1) = M \cdot (1 + 1) = 2M$.
• Fifth: The work is similar to the projection of rental table, but with a check operation added before writing to disk.
  – Total records to process = M.
  – Cost of the difference check against small table at each step = 1, because small table is already sorted.
  So the I/O cost of each step is 1, and the total cost is M.
So the total cost = 4N + 1 + 5M + 2M + M = 4N + 8M + 1
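As a purely illustrative instance of this formula (the table sizes below are invented for the example):

\[
  4N + 8M + 1 = 4 \cdot 10^{6} + 8 \cdot (5 \cdot 10^{5}) + 1 = 8{,}000{,}001 \text{ I/Os}
  \quad \text{for } N = 10^{6},\; M = 5 \cdot 10^{5}.
\]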
2 Scenarios - Replication
Event | Vector Clock at Process Before | Message Vector Clock | Action Taken by Process | Vector Clock at Process After
A     | (0,0,0)                        | (1,0,0)              | A2                      | (1,0,0)
B     | (0,0,0)                        | (0,1,0)              | A2                      | (0,1,0)
C     | (0,1,0)                        | (1,0,0)              | A3                      | (1,1,0)
D     | (0,0,0)                        | (0,0,1)              | A2                      | (0,0,1)
E     | (1,1,0)                        | (1,1,0)              | A1                      | (1,1,0)
F     | (0,0,1)                        | (1,1,0)              | A3                      | (1,1,1)
G     | (1,0,0)                        | (0,1,0)              | A3                      | (1,1,0)
H     | (1,1,0)                        | (0,0,1)              | A3                      | (1,1,1)
I     | (1,1,1)                        | (1,1,1)              | A1                      | (1,1,1)
J     | (1,1,0)                        | (0,0,1)              | A3                      | (1,1,1)
K     | (1,1,1)                        | (1,1,1)              | A1                      | (1,1,1)
Brief justification:
• Vector compare: To determine whether we should update the vector clock of a process after it receives a message, we need to define how to compare the vector clocks of messages and processes. Assume the vector clock of the process is Vp = (p1, p2, p3) and the vector clock of the message is Vm = (m1, m2, m3). We say that Vp < Vm, meaning the process is "at least partly older than" the message (and should be updated), if there exists any 1 <= i <= 3 satisfying pi < mi. Otherwise, we say the process is "completely newer than" the message: Vp >= Vm. (A sketch of this rule follows after this list.)
• Action taken: When a process receives a message, we compare their vector clocks using the rule defined above. If Vp < Vm, the process takes A2 or A3: when the update is caused by a client, it takes A2; when the update is caused by receiving a message from another process, it takes A3. If Vp >= Vm, we do not need to update the vector clock of the process, so we take A1.
• Other simple situations: When a process sends out a message, the vector clock of the message is the vector clock of the process at that moment. When a process receives a message with the same vector clock, the process takes A1 because no update is needed.
This is how I constructed my table.
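The comparison and update rules above can be written down directly. This is a minimal sketch; the class and method names are illustrative, and the merge is the component-wise maximum, which matches the "after" column of my table.

import java.util.Arrays;

public class VectorClockCompare {

    /** Returns true if the process is "at least partly older" than the message. */
    static boolean processOlderThanMessage(int[] vp, int[] vm) {
        for (int i = 0; i < vp.length; i++) {
            if (vp[i] < vm[i]) {
                return true; // the message carries an event the process has not seen
            }
        }
        return false; // the process is "completely newer"; take A1
    }

    /** Component-wise merge used when the process does update (action A3). */
    static int[] merge(int[] vp, int[] vm) {
        int[] merged = Arrays.copyOf(vp, vp.length);
        for (int i = 0; i < vm.length; i++) {
            merged[i] = Math.max(merged[i], vm[i]);
        }
        return merged;
    }

    public static void main(String[] args) {
        // Event C from the table: process at (0,1,0) receives message (1,0,0).
        int[] vp = {0, 1, 0};
        int[] vm = {1, 0, 0};
        System.out.println(processOlderThanMessage(vp, vm)); // true -> take A3
        System.out.println(Arrays.toString(merge(vp, vm)));  // [1, 1, 0]
    }
}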
3 Scenarios - ARIES Recovery
3.1 undonextLSN
For convenience I will use a number to stand for the operation with the corresponding LSN (e.g., (5) stands for the operation with LSN = 5).
• A = 5 Because this CLR with LSN = 13 corresponds to the operation undo (7). According to the table the XACT ID of (7) is T2, and its PREV LSN = 5. So the UNDONEXTLSN of (13) should be set to 5.
• B = 4 Because this CLR with LSN = 15 corresponds to the operation undo (9). According to the table the XACT ID of (9) is T1, and its PREV LSN = 4. So the UNDONEXTLSN of (15) should be set to 4.
• C = 3 Because this CLR with LSN = 16 corresponds to the operation undo (4). According to the table the XACT ID of (4) is T1, and its PREV LSN = 3. So the UNDONEXTLSN of (16) should be set to 3.
• D = NULL Because this CLR with LSN = 17 corresponds to the operation undo (5). According to the table the XACT ID of (5) is T2, and its PREV LSN = NULL. So the UNDONEXTLSN of (17) should be set to NULL.
• E = NULL Because this CLR with LSN = 18 corresponds to the operation undo (3). According to the table the XACT ID of (3) is T1, and its PREV LSN = NULL. So the UNDONEXTLSN of (18) should be set to NULL.
3.2 Transaction table and Dirty page table
Transactions Table
TransID | Status | LastLSN
T3      | active | 6
T2      | active | 17
Explanation: Among the four transactions, T4 is a winner: it committed before the crash and has already ended, so T4 is removed from the Transactions Table. T1 also ended before the crash, after it aborted and was undone, so it is removed as well. The last two transactions, T2 and T3, have no end log records, so they are kept in the Transactions Table. Their status is ACTIVE in both cases: T3 is in progress and T2 is being undone. Their LastLSNs are 6 and 17 respectively, the last operations of each according to the log.
Dirty Pages Table
PageID | RecLSN
P3     | 3
P1     | 4
P2     | 7
P4     | 9
Explanation: Four pages, P1, P2, P3, and P4, have been written since the checkpoint. The RecLSN is the LSN of the oldest log record that dirtied the page. Because our system uses a steal, no-force strategy for better performance, the no-force setting means that entries stay in the dirty page table until the page is flushed, regardless of whether the corresponding transactions have ended.
3.3 Additional Log
Additional Contents of the Log
LSN | PREV LSN | XACT ID | TYPE  | PAGE ID | UNDONEXTLSN
20  | 17       | T2      | end   | -       | -
21  | 6        | T3      | abort | -       | -
22  | 21       | T3      | CLR   | P3      | NULL
23  | 22       | T3      | end   | -       | -
Explanation: According to the Dirty Page Table from Point 3.2, redo starts at LSN = 3. According to the Transactions Table, undo must proceed down to the smallest LSN of the transactions that are still active, so undo ends at LSN = 5.
Using the result from Point 3.1, the undo of T2 has already finished except for an end log record, so I end T2 first. For T3, I abort it, then undo the change it wrote to P3, and finally end it. We do not undo CLRs, because we never undo an undo operation. The additional log records are therefore as shown above.
3.4
3.4.1 a
When a transaction commits, the log tail is forced to stable storage. So in the scenario of the question, the action with LSN = 10, the commit of T4, triggers the write to stable storage.
This is because commit is a special action beyond writing ordinary log records: the definition of a committed transaction is effectively "a transaction all of whose log records, including a commit record, have been written to stable storage". Since our system is no-force, if we did not force the log at commit, there would be no way to ensure that the changes of a committed transaction survive crashes.
3.4.2 b
Redo would not be written to stable storage.
As we know, the Redo phase begins with the log record that has the smallest recLSN of all pages in the dirty page table constructed by the Analysis pass. Starting from that log record, Redo scans forward until the end of the log. For each redoable log record (update or CLR) encountered, Redo reapplies the action unless it can determine that the effect of the action is already on disk, for example because the affected page is no longer in the dirty page table, or because the page was only dirtied after the LSN being checked.
This repeating-history paradigm distinguishes ARIES from other proposed WAL-based recovery algorithms and brings the database to the same state it was in at the time of the crash.
Because of these properties of Redo, in the Redo phase we only need to redo according to the existing log, so we do not need to write any redo log to stable storage. Whenever a crash occurs, we simply redo every update chosen by the method above. Under the no-force strategy, Redo thereby restores all pages that had not been flushed to the state they should have had before the crash.
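The redo decision above can be made concrete. Below is a minimal sketch assuming the three standard skip conditions from the textbook (page not in the dirty page table; recLSN greater than the record's LSN; on-disk pageLSN at least the record's LSN); the class and method names are illustrative, not part of my hand-in.

import java.util.Map;

public class RedoDecision {

    /** dirtyPageTable holds the recLSN entries built by the Analysis pass. */
    static boolean mustRedo(long lsn, int pid, Map<Integer, Long> dirtyPageTable,
                            long pageLsnOnDisk) {
        Long recLsn = dirtyPageTable.get(pid);
        if (recLsn == null) {
            return false; // page was not dirty at the crash; change already flushed
        }
        if (recLsn > lsn) {
            return false; // page was dirtied only after this update; already flushed
        }
        if (pageLsnOnDisk >= lsn) {
            return false; // the on-disk page already reflects this update
        }
        return true; // repeat history: reapply the logged action
    }
}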
4 Programming Task - Question 1
High-Level Design Decisions, Modularity, Fault-Tolerance
Overall code organization:
I implement Cart as CertainCart and Supermarket as CertainSupermarket in com.acertainsupermarket.business. I also modularize them for RPC by creating two proxies, CloudHTTPProxy and EdgeHTTPProxy, in com.acertainsupermarket.client. Their corresponding servers and message handlers are located in com.acertainsupermarket.server.
In CertainSupermarket I define two maps: supermarketLock and itemMap. The former holds the locks, which I introduce in detail in Question 3. The latter holds all the Items and their information: the item IDs are the keys of the itemMap and the items themselves (MutableItem) are the values. Because the interface handed out to us declares no API for initializing the items in the Supermarket, and the question says that all item information is fixed and known a priori in our exam task, I decided to define some default items in com.acertainSupermarket.utils.SupermarketConstants and add them to the itemMap in the constructor.
Similarly, in CertainCart I define two maps, cartLock and cartMap. The cartMap maps Integers to lists of MutableCartItem: the Integers are the cart IDs, and each list of MutableCartItem holds the items already put in that cart. I define the fixed cart IDs in SupermarketConstants and add them in the constructor, each mapped to an empty list at the beginning. CertainCart also holds an instance of Supermarket named beloningSupermarket, which stands for the Cloud Server (Supermarket) this Edge Server (Cart) belongs to. The beloningSupermarket is used when the Cart needs to invoke functions of the Supermarket: in add and checkout.
I build EdgeHTTPProxy and CloudHTTPProxy and their corresponding servers to modularize the Cart and the Supermarket. To implement the RPC between the clients and the Cart and Supermarket, I create an instance of CertainCart in EdgeHTTPMessageHandler and an instance of CertainSupermarket in CloudHTTPMessageHandler. All calls from clients are handled by invoking the corresponding functions of these instances behind each proxy.
To implement the RPC between Cart and Supermarket, I add an additional kind of message with a tag called SETBELONG. This kind of message invokes initializeCertainCart of CertainCart, which assigns the beloningSupermarket mentioned above. I first tried to transfer the Supermarket in a set inside a SupermarketRequest but failed: the console complained "Cannot marshal the XStream instance in action", so it seems an instance of CloudHTTPProxy cannot be serialized by XStream. In the end I chose to transfer the address of the Cloud Server to the Edge Server as a String, and to call initializeCertainCart with the Cloud Server that address corresponds to. The Edge Server can then invoke functions of that Cloud Server, implementing the RPC between Cart and Supermarket.
Like our assignments, my system is test-driven: I create three test files corresponding to three kinds of clients, the Mobile Client, the Store Client, and the Admin Client.
RPC semantics:
At-most-once semantics is implemented both between the clients and the Cart and Supermarket, and between the Cart and the Supermarket server. There are three semantics to choose from: exactly-once, at-most-once and at-least-once, and in this case the architecture must ensure there is no retry when invoking a function. The difference between at-least-once and at-most-once lies in their failure modes: if at-least-once were applied here, a function might be invoked more than once. The reason to choose at-most-once over exactly-once is that at-most-once is the safer option in a design where operations or the system may crash, and it performs better in this case. (Partly quoted from my report for Assignment 1, Point 3.4.)
Failure:
My failure containment guarantees a fail-soft design for the Cart instances with respect to the Supermarket instance by having the Cart catch all the possible exceptions, described below, that the Supermarket may throw. My Cart only invokes functions of the Supermarket in the add and checkout functions, so if the Supermarket crashes, the remaining functions of the Cart are unaffected and still work.
InexistentItemException and SupermarketException are contained at the Cart instance. Two functions in CertainCart invoke functions of CertainSupermarket: add and checkout. add calls getItems when adding an item of a kind not yet in the cart; this may cause the Supermarket to throw an InexistentItemException if no such item ID is stored in the Supermarket, in which case the add operation fails. My CertainCart catches this exception and reports it to the client. checkout calls updateStocks of the Supermarket; if there is not enough stock, the Supermarket throws a SupermarketException, which CertainCart catches and reports to the client.
All-or-nothing atomicity:
• I achieve all-or-nothing atomicity at the Cart service by splitting operations into a check part and an apply part, for the cases where exceptions are propagated from the Supermarket service. For example, in add, my CertainCart invokes getItems of the Supermarket twice: the first time to check, and the second time when actually adding the item to the cart. This certainly affects performance, so this design is a trade-off between all-or-nothing atomicity and performance.
• I achieve all-or-nothing atomicity at the Supermarket service in the same way, by splitting operations into check and apply. Because the APIs getItems and updateStocks take multiple inputs, I use two loops: one checks every element of the input List or Set, and the other applies the operation. (A sketch of this pattern follows.)
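Below is a minimal sketch of the check-then-apply split, using simplified stand-ins for the exam's ItemDelta and exception types; the names and bodies here are assumptions made for the example, not my actual hand-in code.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CheckThenApplySketch {

    static class ItemDelta { // simplified stand-in for the exam's ItemDelta
        final int itemId;
        final int quantity;
        ItemDelta(int itemId, int quantity) { this.itemId = itemId; this.quantity = quantity; }
    }

    static class StockException extends Exception {
        StockException(String msg) { super(msg); }
    }

    private final Map<Integer, Integer> stock = new HashMap<>();

    public void addItem(int itemId, int quantity) {
        stock.put(itemId, quantity);
    }

    public synchronized void updateStocks(List<ItemDelta> deltas) throws StockException {
        // Loop 1: check every delta; no state is touched, so a failure
        // here leaves the supermarket unchanged (the "nothing" outcome).
        for (ItemDelta d : deltas) {
            Integer current = stock.get(d.itemId);
            if (current == null) {
                throw new StockException("no such item: " + d.itemId);
            }
            if (current + d.quantity < 0) {
                throw new StockException("insufficient stock for item " + d.itemId);
            }
        }
        // Loop 2: apply every delta; all checks passed, so every update
        // succeeds (the "all" outcome).
        for (ItemDelta d : deltas) {
            stock.merge(d.itemId, d.quantity, Integer::sum);
        }
    }
}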
5 Programming Task - Question 2
Before-or-After Atomicity of Cart
Method
I use a two-phase locking strategy to ensure serializability at the Cart instance. I create a map called cartLock, mapping each CartId to its corresponding ReadWriteLock. (A sketch follows.)
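A minimal sketch of this per-cart locking, assuming java.util.concurrent locks; the map name cartLock matches my hand-in, while the method bodies are simplified placeholders (my hand-in pre-populates the map with the fixed cart IDs rather than creating locks lazily).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PerCartLocking {

    private final Map<Integer, ReadWriteLock> cartLock = new ConcurrentHashMap<>();

    private ReadWriteLock lockFor(int cartId) {
        return cartLock.computeIfAbsent(cartId, id -> new ReentrantReadWriteLock());
    }

    /** Mutating operations (add, remove, checkout) take the write lock. */
    public void add(int cartId, int itemId, int quantity) {
        ReadWriteLock lock = lockFor(cartId);
        lock.writeLock().lock();
        try {
            // ... validate the item against the supermarket, then mutate the cart ...
        } finally {
            lock.writeLock().unlock(); // released only after the work is done (strict 2PL)
        }
    }

    /** Read-only operations (getCartItems) take the read lock. */
    public void getCartItems(int cartId) {
        ReadWriteLock lock = lockFor(cartId);
        lock.readLock().lock();
        try {
            // ... copy the cart contents into ImmutableCartItem instances ...
        } finally {
            lock.readLock().unlock();
        }
    }
}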
correctness
My method is logically equivalent to Conservative Strict Two-Phase Locking. Because every API requires a certain CartId as an argument, my method locks the corresponding lock of that cart before doing the work, and unlocks it after the work finishes. This satisfies the definition of Conservative Strict Two-Phase Locking: I lock all the data corresponding to my operation (the cart) together, and unlock it (in fact only one data item) together after the operation.
reads on predicates vs. multigranularity locking
add, remove and checkout take the write lock, while getCartItems takes the read lock, since the first three change the information of the items stored in the cart and the last one only reads without making any change.
I do not think we need to consider multi-granularity locking here: every Cart API touches exactly one cart and does not affect any other cart, so single-granularity locking is already enough.
Performance
As I use a map and lock only the corresponding cart, the performance is much better than locking the whole data set.
I also believe locking is a better choice than an optimistic approach. Since our Cart service serves multiple clients, there is a high probability that some clients try to access the same cart. As I set the return type of getCartItems to ImmutableCartItem, the customer cannot observe any in-flight changes to the cart items, which satisfies the task requirements; together with the locking strategy, this ensures the reliability of my system. Doing validation in such a situation, or using other optimistic approaches, would be a large cost for our server and might affect the clients involved. In contrast, locking is much easier to implement, avoids conflicts up front, and saves the time and system cost of validation and correction.
6 Programming Task - Question 3
Before-or-After Atomicity of Supermarket
Method
I use a two-phase locking strategy to ensure serializability at the Supermarket instance. I create a map called supermarketLock, mapping each itemId to its corresponding ReadWriteLock. (A sketch follows.)
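One way to realize this multi-item locking is sketched below. Acquiring the per-item locks in sorted ID order is a standard way to avoid deadlock between concurrent multi-item calls; that ordering is an addition for the sketch and is not spelled out above. The map name supermarketLock matches my hand-in; the rest of the names are illustrative.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PerItemLocking {

    private final Map<Integer, ReadWriteLock> supermarketLock = new ConcurrentHashMap<>();

    private ReadWriteLock lockFor(int itemId) {
        return supermarketLock.computeIfAbsent(itemId, id -> new ReentrantReadWriteLock());
    }

    public void updateStocks(Set<Integer> itemIds) {
        // Lock in a fixed (sorted) order so concurrent calls cannot deadlock.
        List<Integer> ordered = new ArrayList<>(new TreeSet<>(itemIds));
        List<ReadWriteLock> held = new ArrayList<>();
        for (int id : ordered) {
            ReadWriteLock lock = lockFor(id);
            lock.writeLock().lock();
            held.add(lock);
        }
        try {
            // ... check-then-apply the stock changes ...
        } finally {
            // Release in reverse order after all the work is done (strict 2PL).
            Collections.reverse(held);
            for (ReadWriteLock lock : held) {
                lock.writeLock().unlock();
            }
        }
    }
}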
correctness
My method is logically equivalent to Conservative Strict Two-Phase Locking. Because every API takes a list or a set of itemIds as an argument, my method locks the corresponding locks of those items before doing the work, and unlocks them after the work finishes. This satisfies the definition of Conservative Strict Two-Phase Locking: I lock all the data (the items) corresponding to my operation together, and unlock them together after the operation.
reads on predicates vs. multigranularity locking
updateStocks takes write locks and getItems takes read locks, since the former changes the stocks of items stored in the supermarket while the latter only reads without making any change.
I think my method already amounts to multi-granularity locking: both Supermarket APIs take multiple itemIds, and using the map to lock exactly the corresponding items gives flexible-granularity locking, where the granularity is decided by the input.
Performance
As with the Cart, since I use a map and lock only the corresponding items, the performance is much better than locking the whole data set.
I believe locking is a better choice than an optimistic approach. Since our Supermarket service serves multiple clients and Cart servers, there is a high probability that some clients try to access the same items. I set the return type of getItems to ImmutableItem, so the customer cannot observe any in-flight changes to an item, which satisfies the task requirements. I noticed there is a property called lastRestockingTimestamp in ImmutableItem; I set it with System.currentTimeMillis() as a timestamp. This property is not used in my system since I chose the locking strategy, but it points at another way to achieve serializability: using timestamps to validate the order of operations. Doing such validation would in fact be a large cost for our server and might affect the clients involved. In contrast, locking is much easier to implement, avoids conflicts up front, and saves the time and system cost of validation and correction.
7 Programming Task - Question 4
Testing
7.1 Basic introduction to the tests
I create three test files standing for the three kinds of clients: AdminClientTest, MobileClientTest and StoreClientTest. I use a boolean variable localTest to configure whether the tests run locally or over RPC.
In @BeforeClass I call the initializeCertainCart function of the cart to set its beloningSupermarket to the supermarket used in the tests.
Because the interface handed out to us has no reset functions, I reset state between tests by using remove on the cart and updateStocks on the supermarket, changing them back to the default state.
7.2 Before-or-after atomicity tests
I test before-or-after atomicity for the Cloud Server in AdminClientTest with two tests, testDirtyWrite() and testIsolution(). To test before-or-after atomicity on the Edge Server, I write a test testBeforeAndAfter() in StoreClientTest.
In testDirtyWrite() I create two threads, addthread and minusthread. The addthread calls updateStocks 10,000 times, each time adding one to the stock of ITEM1. The minusthread, on the contrary, calls updateStocks 10,000 times, each time subtracting one from the stock of ITEM1. Enough stock is added to the Supermarket at the beginning. These two threads run concurrently in this test, and I check whether the stock of ITEM1 after both threads finish equals its value at the beginning. If a dirty write occurred during these 20,000 concurrent calls to updateStocks, the final value would not equal the initial value, so this test can detect whether dirty writes happen in my system. (A sketch of the test's shape follows.)
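The shape of testDirtyWrite(), made self-contained for the example: the synchronized counter below stands in for updateStocks/getItems on ITEM1, while the thread structure and iteration counts mirror the real test.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DirtyWriteTestSketch {

    static class StockStandIn {
        private int quantity = 10_000; // "enough stock added at the beginning"
        synchronized void update(int delta) { quantity += delta; }
        synchronized int quantity() { return quantity; }
    }

    @Test
    public void testDirtyWrite() throws InterruptedException {
        StockStandIn stock = new StockStandIn();
        int before = stock.quantity();

        Thread addThread = new Thread(() -> {
            for (int i = 0; i < 10_000; i++) stock.update(+1);
        });
        Thread minusThread = new Thread(() -> {
            for (int i = 0; i < 10_000; i++) stock.update(-1);
        });

        addThread.start();
        minusThread.start();
        addThread.join();
        minusThread.join();

        // With before-or-after atomicity the interleaved +1/-1 calls must
        // cancel out; a lost (dirty) write would break this equality.
        assertEquals(before, stock.quantity());
    }
}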
In testIsolution(), two clients running in different threads, changethread and peerthread, continuously invoke operations against the Supermarket interface. The changethread repeats the same pair of operations 10,000 times: it invokes updateStocks to subtract one from the stock of ITEM1, then invokes updateStocks to add one back. The peerthread continuously calls getItems on ITEM1 10,000 times to get its quantity, and asserts that each snapshot of the quantity equals either the initial stock or the initial stock minus one, as if ITEM1 had just been taken or just been replenished. If before-or-after atomicity (isolation) were not implemented in my system, peerthread would observe other, abnormal values, so this test can detect dirty reads as well as dirty writes.
testBeforeAndAfter() is quite similar to testIsolution(), but the action is changed to repeatedly adding and removing ITEM1 from Cart1, 1,000 times, and I create another thread that checks it always sees a correct snapshot via getCartItems(). So this test can detect dirty reads and dirty writes on the Edge Server.
7.3 RPC and fail-soft tests
The RPC and fail-soft tests are in MobileClientTest. In testRPC() I check that a checkout on the Edge Server automatically causes the stock of the corresponding item to change on the Cloud Server. In testFailSoft() I stop the CloudHTTPProxy and then call getCartItems on the Edge Server, making sure the Edge Server still works, which shows my system is fail-soft.
8 Programming Task - Question 5
Experiments
workload design
I set runFrequentGetItemsInteraction to invoke getItems and runFrequentUpdateStocksInteraction to invoke updateStocks, and mix them at 50% each (percentCloudtInteraction = 50f in WorkloadConfiguration) in my workload. runFrequentGetItemsInteraction randomly picks an ItemID from the set of defaults I defined in SupermarketConstants and calls getItems with it. runFrequentUpdateStocksInteraction calls SupermarketGenerator.nextListOfItemDeltas to generate a list of 50 ItemDeltas and then uses them to invoke updateStocks on the Cloud Server.
The throughput is computed by dividing the number of successful interactions by the latency:

$\text{throughput} = \dfrac{\text{successful interactions}}{\text{latency}}$

Both values are obtained from WorkerRunResult.
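A minimal sketch of this computation; the local WorkerRunResult stand-in keeps the example self-contained, and its two fields are assumptions about the assignment-3 class, not its actual API.

import java.util.List;

public class ThroughputSketch {

    static class WorkerRunResult { // stand-in for the assignment class
        final int successfulInteractions;
        final long elapsedTimeInNanoSecs;
        WorkerRunResult(int successfulInteractions, long elapsedTimeInNanoSecs) {
            this.successfulInteractions = successfulInteractions;
            this.elapsedTimeInNanoSecs = elapsedTimeInNanoSecs;
        }
    }

    /** Aggregate throughput = sum over workers of successes / seconds. */
    static double aggregateThroughput(List<WorkerRunResult> results) {
        double throughput = 0;
        for (WorkerRunResult r : results) {
            double seconds = r.elapsedTimeInNanoSecs / 1e9;
            throughput += r.successfulInteractions / seconds;
        }
        return throughput;
    }
}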
experimental setup
My laptop is a Dell Inspiron with 8 GB RAM and an Intel i7 CPU, running Windows 10, and I use Eclipse for this experiment. The power configuration of my computer is Balanced mode (the middle level between High Performance and Power Saver). The worker threads first perform 100 warm-up runs and then 500 actual measurement runs. After the measurement runs, each worker returns a result consisting of the total number of interactions, the number of successful interactions, the respective numbers of customer interactions, and the time taken by that worker. I set the maximum number of concurrent threads to 20. The throughput value is calculated by dividing the number of successful interactions by the time taken.
Result figure
[Figure: throughput against the number of concurrent worker threads, plotted with jfreechart-1.0.19.]
discussion of results
In fact, the experiment does not give results with a consistent trend every time I run it; the result shown above is one of the runs that best matches my expectations. I expected the throughput to rise as the number of concurrent threads increases, and eventually to level off after some point: concurrency should give better performance than serial execution, but the improvement is limited, because the speed of our CPU is limited and it cannot run too many threads effectively at once.
9 References
• Point 1.2: our lecture slides, lecture-12, page 31.
• Point 3.4.1: our textbook, Advanced Computer System (ACS) DIKU Course Compendium Block 2, 2015/16, page 110, Chapter 18.4.
• Point 3.4.2: our textbook, Advanced Computer System (ACS) DIKU Course Compendium Block 2, 2015/16, page 114, Chapter 18.6.2.
• Point 7.2: idea from the former assignment handout, Assignment 2, page 4, Test 1 and Test 2.
• Point 8: code from the former assignment handout, Assignment 3, file com.acertainbookstore.client.workloads.