
How does inter-process communication take place inside an operating system? -- Learning from OS



To achieve concurrency there is a need for cooperation among processes, and cooperation among processes may be achieved by communication between them. This communication may be of the following two kinds. Because cooperating processes share resources (memory, files), the OS needs to implement synchronisation and communication mechanisms to prevent race conditions.

Asynchronous vs. synchronous interaction:
===========================================================

By synchronous, we mean that two or more concurrent threads of control must meet at a single point in time. This generally means that one thread of control must wait for another to respond to a request. The simplest and most common form of synchronous interaction occurs when concurrent activity A requires information from concurrent activity B in order to proceed with A’s own work. Ordinary procedure calls are a prime example of a synchronous interaction: when one procedure calls another, the caller instantaneously transfers control to the called procedure and effectively “waits” for control to be transferred back to it. In the concurrent world, however, additional apparatus is needed to synchronize otherwise independent threads of control.

Asynchronous interactions do not require a rendezvous in time, but still require some additional apparatus to support the communication between two threads of control.

Implementation of concurrency control mechanisms:
===========================================================

Unix SVR4 provides several mechanisms that processes can use for synchronization or IPC. Synchronization can be achieved by sharing data, implemented using concepts such as semaphores, conditional critical regions and monitors. Synchronization can also be achieved by message passing (MP), which includes concepts such as explicit communication, mailboxes, etc.

1. Pipes
===========================

a. A pipe is a circular buffer of fixed size (typical size: 4096 bytes), connecting two processes. Traditionally, the pipe is implemented as an ordinary file. One process writes to the buffer, the other reads from it.
b. The OS provides mutual exclusion and synchronization so if the buffer is full, the writing process must wait; if the buffer is empty, the reading process must wait.
c. Traditionally, pipes were one-way, so for two-way communication you must have two pipes between the processes; also, pipes could only connect related processes (siblings, parent/child). Named pipes can overcome the latter restriction.
d. FIFOs are named pipes that can connect unrelated processes; they are found in some modern versions of UNIX. These pipes are also bi-directional.
At the command line, the "|" operator is used to pipe output from one program to another program as input (for example, ls | wc -l or cat mithdeep.txt | sort).
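
To make this concrete, here is a minimal sketch (not a program from the article) showing a parent process writing into an anonymous pipe created with pipe() and a child process reading from it after fork(); the message text and buffer size are illustrative choices, and error handling is kept to a minimum.

/* Parent writes a message into a pipe; the child reads it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                          /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid == 0) {                     /* child: reader */
        close(fd[1]);                   /* close unused write end */
        char buf[128];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        close(fd[0]);
    } else {                            /* parent: writer */
        close(fd[0]);                   /* close unused read end */
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);                   /* reader sees end-of-file */
        wait(NULL);                     /* reap the child */
    }
    return 0;
}

If the pipe's buffer fills, the parent's write blocks; if the pipe is empty, the child's read blocks, which is exactly the synchronization described in point b above.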

2. Messages
=============================

Message queues are an example of a generic mailbox.

a. In UNIX, message queues let processes send formatted data streams to other processes - there need be no relation between the processes.
b. A message queue is represented as a linked list. Senders attach messages to the end, receivers detach messages from the head. A specific message can only be received once.

The related system calls are the following:
===============================================

i. msgget, which returns (and possibly creates) a message descriptor. The message descriptor identifies a message queue that will be used for future messages. Messages are stored in a linked queue; one queue per descriptor.

ii. msgsnd, used to send a message. The send call must specify which queue the message goes to, a pointer to a structure which contains the message, and other information.

iii. msgrcv, used to receive a message. The call must specify the queue from which the message is to be received, a pointer to the structure where the message will be stored, and other information; a short usage sketch of these calls appears after this list.

c. There is an upper limit to the number of messages a message queue can hold and the default method for handling an attempt to send to a full queue is to block the sender. Trying to read from an empty queue will block the receiver.
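
As a rough illustration of how msgget, msgsnd and msgrcv fit together, the sketch below sends one message through a System V queue and reads it back within a single process; the key value, message type, structure name and text are made-up illustrative choices, and error handling is minimal.

/* Minimal sketch of the System V message-queue calls described above. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct message {
    long mtype;              /* message type, must be > 0 */
    char mtext[64];          /* message body */
};

int main(void)
{
    /* msgget: create (or open) the queue identified by the key */
    int qid = msgget((key_t)1234, IPC_CREAT | 0666);
    if (qid == -1) { perror("msgget"); exit(EXIT_FAILURE); }

    /* msgsnd: attach a message to the tail of the queue */
    struct message out = { .mtype = 1 };
    strcpy(out.mtext, "hello via message queue");
    if (msgsnd(qid, &out, sizeof(out.mtext), 0) == -1)
        perror("msgsnd");

    /* msgrcv: detach the first message of type 1 from the head */
    struct message in;
    if (msgrcv(qid, &in, sizeof(in.mtext), 1, 0) != -1)
        printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);   /* remove the queue when finished */
    return 0;
}

In a real application the sender and receiver would be separate processes that agree on the same key; because each message can be received only once, the queue naturally distributes work among receivers.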

3. Shared Memory
===================================

Shared memory is an example of achieving concurrency through the sharing of data.

a. Processes can share the same segment of memory directly when it is mapped into the address spaces of each sharing process. Communication is fast since no data movement is required.

b. Individual processes are responsible for providing synchronization and mutual exclusion. In general, this is done by using semaphores, or some other synchronization primitive provided by the operating system.

c. It is less structured and more flexible than either pipes or messages.
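
A minimal sketch of the shared-memory calls (shmget, shmat, shmdt, shmctl) is shown below; it creates, attaches, uses and removes a segment within a single process, with the key and segment size chosen arbitrarily for illustration. In a real application, two cooperating processes would attach the same key and, as noted in point b, guard access with a semaphore.

/* Create a shared segment, attach it, write and read, then remove it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* shmget: create a 4096-byte segment identified by an illustrative key */
    int shmid = shmget((key_t)5678, 4096, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); exit(EXIT_FAILURE); }

    /* shmat: map the segment into this process's address space */
    char *mem = shmat(shmid, NULL, 0);
    if (mem == (char *)-1) { perror("shmat"); exit(EXIT_FAILURE); }

    strcpy(mem, "data placed in shared memory");   /* writer side */
    printf("read back: %s\n", mem);                /* reader side */

    shmdt(mem);                      /* detach the segment */
    shmctl(shmid, IPC_RMID, NULL);   /* mark it for removal */
    return 0;
}

No data is copied between processes once the segment is mapped, which is why this is the fastest of the three mechanisms.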

Locking mechanisms (database approach)
========================================================

Locking synchronizes users' access to the database to ensure consistent data and correct query results.

Pessimistic Locking:
====================

Pessimistic locking is the technique by which the data to be updated is locked in advance. If anyone else attempts to acquire the same data during the process, they will be forced to wait until the first transaction has completed.
This approach is called pessimistic because it assumes that another transaction might change the data between the read and the update. To prevent that change from happening, and the data inconsistency that would result, the transaction issues a locking statement that locks the data so no other transaction can change it. Pessimistic locking leads to two problems: “lockout” (prolonged locking) and “deadlock”.
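
As a conceptual illustration only, the sketch below shows the pessimistic pattern with a pthread mutex standing in for a database row lock (the names record_lock, balance and update_pessimistically are hypothetical): acquire the lock before reading, hold it through the update, and force any other thread of control to wait.

/* Conceptual sketch: lock in advance, then read and update. */
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t record_lock = PTHREAD_MUTEX_INITIALIZER;
static int balance = 100;                /* the "record" being updated */

void update_pessimistically(int delta)
{
    pthread_mutex_lock(&record_lock);    /* lock the data in advance ...   */
    int current = balance;               /* ... then read ...              */
    balance = current + delta;           /* ... and update                 */
    pthread_mutex_unlock(&record_lock);  /* release; waiters may proceed   */
}

int main(void)
{
    update_pessimistically(-25);
    printf("balance = %d\n", balance);
    return 0;
}

Holding the lock for the whole read-update window is what makes lockout and deadlock possible when many transactions compete for the same data.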

Optimistic locking:
==================

Optimistic locking offers a well-designed solution to the problems outlined above. Optimistic locking does not lock records when they are read, and proceeds on the assumption that the data being updated has not changed since the read. The Oracle database uses optimistic locking by default. In practice there are a number of different ways of achieving this, but the most common is the use of a modification time-stamp. The drawbacks of this approach are that optimistic locking slows down updates and that the user is not notified of a conflict until the update is attempted.
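
The idea can be sketched conceptually as follows; this is a single-threaded, in-memory illustration with hypothetical names (struct record, optimistic_update) that uses a version counter in place of a modification time-stamp, not how a database engine actually implements it.

/* Conceptual sketch of optimistic locking with a modification counter. */
#include <stdio.h>
#include <string.h>

struct record {
    int  version;        /* incremented on every successful update */
    char value[32];
};

/* Succeed only if nobody changed the record since it was read;
 * otherwise report a conflict so the caller can re-read and retry. */
int optimistic_update(struct record *rec, int version_read, const char *new_value)
{
    if (rec->version != version_read)
        return -1;                       /* another transaction got there first */
    strcpy(rec->value, new_value);
    rec->version++;                      /* publish the new version */
    return 0;
}

int main(void)
{
    struct record r = { .version = 1, .value = "initial" };

    int seen = r.version;                /* "read" phase: remember the version */
    if (optimistic_update(&r, seen, "updated") == 0)
        printf("update applied, version=%d\n", r.version);
    else
        printf("conflict: re-read and retry\n");
    return 0;
}

On a conflict the caller must re-read the record and retry, which is why frequent conflicts make updates slower under optimistic locking.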

Dynamic locking:
===================

Dynamic locking is a newer kind of locking strategy. Rather than requiring the database administrator to analyse and anticipate which level of locking is best, dynamic locking uses the built-in intelligence of the database engine to optimise the granularity of locks (row, page, group of pages, or table) depending on the needs of the applications accessing a SQL Server database. Users of the database benefit from increased performance, although the engine may ignore other (user-created) locks.






This work is licensed under a Creative Commons Attribution 3.0 Unported License, Author, 2009.

