Lecture 21
Review -- what we have learned about shared-memory programming
Chapter 2: basic concepts
Chapter 3: locks and barriers (busy waiting) -- parallel computing
Chapter 4: semaphores -- multithreaded computing
Chapter 5: monitors -- multithreaded computing
Chapter 6: implementations
practice: C plus Pthreads (or other libraries)
SR and semaphores
Java and synchronized methods
Preview -- Distributed Programming (introduction to Part 2)
concurrent program: processes + communication + synchronization
distributed program: processes can be distributed across machines =>
they cannot use shared variables (usually; distributed shared memory is the exception)
processes do share communication channels
they access channels by message passing (Chapter 7)
RPC or rendezvous (Chapter 8)
languages: SR (all three), Java (RPC), Ada (rendezvous)
libraries: sockets, MPI, PVM
we'll cover mechanisms and basic examples (Chapters 7 and 8)
then general programming paradigms (Chapter 9)
Message Passing (Section 7.1)
generalizes semaphores and their P() and V() operations
P1 ---> channel ---> P2
   send          receive
channel -- unbounded queue of messages
chan name(id1: type1; ...; idN: typeN)   [like op declarations in SR]
each "id: type" pair declares one field of the message
(with libraries messages are just streams of bytes, possibly
with self-describing tags to indicate types of fields)
message passing primitives
send name(expr1, ..., exprN)
types and number of fields must match
effect: evaluate the expressions and produce a message M
atomically append M to the end of the named channel
send is nonblocking (asynchronous)
receive name(var1, ..., varN)
again types and number of fields must match
effect: wait for a message on the named channel
atomically remove first message and put the fields
of the message into the variables
receive is blocking (synchronous)
what is atomic? append in send and remove in receive
what is not? global time ordering (examples below)
say how send and receive generalize V() and P(): a channel of empty messages
acts as a semaphore -- send is V(), receive is P(), and the number of
queued messages is the semaphore's value
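[aside, not from the text: a Java sketch of these semantics -- an unbounded
java.util.concurrent.LinkedBlockingQueue makes send (add) nonblocking and
receive (take) blocking; this hypothetical Channel class is reused in the
later sketches]

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical Channel class modeling "chan name(type)".
    // The queue is unbounded, so send never blocks; receive blocks when empty.
    class Channel<T> {
        private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

        // send: atomically append the message to the end of the channel
        public void send(T msg) {
            queue.add(msg);
        }

        // receive: wait for a message, then atomically remove the first one
        public T receive() throws InterruptedException {
            return queue.take();
        }
    }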
examples
(1) chan ch(int)
process A:           process B:
  send ch(1)           receive ch(x)
  send ch(2)           receive ch(y)
x will contain 1 and y will contain 2
order of messages from SAME source is the order of the sends
(2) chan ch1(int), ch2(int)
process A:           process B:
  send ch1(1)          receive ch1(x)
  send ch2(2)          receive ch1(y)
process C:           process D:
  send ch1(3)          receive ch2(u)
  send ch2(4)          receive ch2(v)
what is received now? x will get 1 or 3 and y will get the other
u will get 2 or 4 and v will get the other
order of messages from DIFFERENT sources is nondeterministic
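[aside: example (2) coded with the hypothetical Channel sketch above; B's and
D's receives are folded into main for brevity, and either interleaving of A's
and C's sends can occur]

    // Example (2): two senders per channel; receive order is nondeterministic.
    public class Example2 {
        public static void main(String[] args) throws InterruptedException {
            Channel<Integer> ch1 = new Channel<>();
            Channel<Integer> ch2 = new Channel<>();

            new Thread(() -> { ch1.send(1); ch2.send(2); }).start();  // process A
            new Thread(() -> { ch1.send(3); ch2.send(4); }).start();  // process C

            int x = ch1.receive(), y = ch1.receive();  // process B's receives
            int u = ch2.receive(), v = ch2.receive();  // process D's receives

            // x,y is 1,3 or 3,1; u,v is 2,4 or 4,2 -- depends on the run
            System.out.println("x=" + x + " y=" + y + " u=" + u + " v=" + v);
        }
    }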
Process Interaction Patterns
filters -- one way
client/server -- two way as master/slave
interacting peers -- two way as equals
Filter Example -- producer/consumer pipeline
sed --> eqn --> groff [the pipeline used to typeset the text]
shared: buffers between pairs of processes
programmed as monitors with deposit and fetch
distributed: buffers are channels (and hence are "free")
programmed using send (deposit) and receive (fetch)
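[aside: a minimal sketch of the distributed version with the Channel class
from above; stage names are illustrative, not the real sed/eqn/groff programs]

    // Two-buffer pipeline: producer --> filter --> consumer, one thread each.
    public class Pipeline {
        public static void main(String[] args) {
            Channel<String> buf1 = new Channel<>();
            Channel<String> buf2 = new Channel<>();

            new Thread(() -> {                 // producer: deposit = send
                for (String s : new String[] { "one", "two", "three" })
                    buf1.send(s);
            }).start();

            new Thread(() -> {                 // filter: fetch, transform, deposit
                try { while (true) buf2.send(buf1.receive().toUpperCase()); }
                catch (InterruptedException e) { }
            }).start();

            new Thread(() -> {                 // consumer: fetch = receive
                try { while (true) System.out.println(buf2.receive()); }
                catch (InterruptedException e) { }
            }).start();
            // the filter and consumer loop forever -- fine for a sketch
        }
    }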
Implementation Sketch
with shared memory
    kernel space:          channel
    user space:      send          receive
with distributed memory
    kernel 1: copy of message  --->  kernel 2: channel
    user 1:   send                   user 2:   receive
mention network interface daemons and copies of messages
details on these implementations are in Chapter 10
Clients and Servers
client  --- (request) --->  server
        <--- (reply) ------
two-way interaction pattern, from client to server then back
(a) with procedures
    client does  call(args)      server is  procedure(formals)
                                              body
                                            end
(b) with message passing
chan request(...), reply(...)
"caller" "server"
send request(args) while(true) { # standard server loop
... receive request(vars)
receive reply(vars) body
send reply(results)
}
(c) with message passing and multiple clients
suppose there are multiple clients (and one server)
what has to change in (b)?
can use the one request channel, but need separate reply channels. why?
(a receive removes the message, so with one shared reply channel a client
could grab a reply that was meant for a different client)
show how this is done using an array of reply channels, one per client
note that we also need an extra field in requests for the client id
[some message passing primitives enable a server to determine the
client id; it is still a part of the message, but is then implicit]
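[aside: a minimal Java sketch of (c), again using the hypothetical Channel
class -- one shared request channel, an array of reply channels indexed by
client id, and an explicit client-id field in each request; the record
syntax needs Java 16+]

    // (c) one server, N clients: shared request channel, per-client replies.
    public class ClientServer {
        // each request carries the client's id so the server knows where to reply
        record Request(int clientId, int arg) {}

        public static void main(String[] args) {
            final int N = 3;
            Channel<Request> request = new Channel<>();
            @SuppressWarnings("unchecked")
            Channel<Integer>[] reply = new Channel[N];  // one reply channel per client
            for (int i = 0; i < N; i++) reply[i] = new Channel<>();

            new Thread(() -> {                          // server
                try {
                    while (true) {                      // standard server loop
                        Request r = request.receive();
                        int result = r.arg() * 2;       // stand-in for "body"
                        reply[r.clientId()].send(result);  // only the sender sees it
                    }
                } catch (InterruptedException e) { }
            }).start();

            for (int i = 0; i < N; i++) {               // clients
                final int id = i;
                new Thread(() -> {
                    try {
                        request.send(new Request(id, id + 10));
                        System.out.println("client " + id + " got " + reply[id].receive());
                    } catch (InterruptedException e) { }
                }).start();
            }
        }
    }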