Computer Science 111, Spring 2010

Lecture 9: Synchronization Primitives and Deadlock

by Greg Clark

Implementing readc and writec with locks

readc and writec read and write single characters through a shared pipe; before returning to them, we first look at read-write locks.

"Readers" are defined as threads with read-only access to shared data; "writers" modify the data. A read-write lock allows many readers to hold it at once, but a writer needs exclusive access.


struct rlock {
    lock_t l;               // spinlock protecting the readers count
    unsigned int readers;   // number of readers currently holding the lock
};

void lock_for_reading (struct rlock *p) {
    lock(& p->l);
    p->readers++;   // overflow check needed but not implemented here
    unlock(& p->l);
}

If 'l' were not used, two readers incrementing p->readers at the same time would race: each could read the old value and write back old value + 1, so the counter would record only one new reader when there are actually two.
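To see the race concretely: p->readers++ is really a load, an add, and a store, so the two readers' steps can interleave as below (the registers rA and rB are purely illustrative):

// Suppose p->readers == 5 and readers A and B both increment without holding p->l:
//   A: rA = p->readers;        // rA == 5
//   B: rB = p->readers;        // rB == 5
//   A: p->readers = rA + 1;    // p->readers == 6
//   B: p->readers = rB + 1;    // p->readers == 6, but it should be 7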


void lock_for_writing (struct rlock *p) {
    for (;;) {
        lock(& p->l);
        if (!p->readers)
            return;     // return while still holding p->l: the writer keeps the
                        // spinlock, which excludes readers and other writers
        unlock(& p->l);
    }
}

But this implementation of lock_for_writing "spins": it keeps burning CPU cycles while waiting for the readers to finish.


void unlock_for_reading (struct rlock *p) {
    lock(& p->l);
    p->readers--;
    unlock(& p->l);
}

void unlock_for_writing (struct rlock *p) {
    unlock(& p->l);     // the writer has held p->l since lock_for_writing returned
}
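As a usage sketch (the shared counter 'hits' and the two functions below are hypothetical examples, not from the lecture):

struct rlock hits_lock;
long hits;                              // 'hits' is just an example of shared data

long read_hits (void) {
    lock_for_reading(& hits_lock);      // many readers may be in here at once
    long n = hits;
    unlock_for_reading(& hits_lock);
    return n;
}

void add_hit (void) {
    lock_for_writing(& hits_lock);      // returns holding hits_lock.l, so no
    hits++;                             // readers or other writers can interfere
    unlock_for_writing(& hits_lock);
}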

Blocking Mutexes

To solve the problem of wasted CPU cycles with spinlocks, we introduce blocking mutexes.

Blocking mutex: when you can't lock, you give up the CPU to some other thread.


typedef struct {
    lock_t l;               // spinlock protecting this structure
    bool locked;            // is the mutex currently held?
    proc_t *blocked_list;   // processes waiting for the mutex, linked via "next" pointers
} bmutex_t;

void acquire (bmutex_t *b) {
    lock(& b->l);
    while (b->locked) {
        add self to b->blocked_list;
        unlock(& b->l);
        schedule();     // give up the CPU until woken by release
        lock(& b->l);   // we released b->l before schedule(), so it must be
                        // reacquired before re-testing b->locked
    }
    b->locked = true;
    unlock(& b->l);
}

A more pessimistic implementation (it assumes the caller will end up waiting on the blocked list, and corrects that if the mutex turns out to be free); in the code below, p is the current process:


void acquire (bmutex_t *b) {
    for (;;) {
        lock(& b->l);
        p->blocked = true;          // pessimistically assume we will wait
        add p to b->blocked_list;
        if (!b->locked)
            break;                  // mutex is free; exit still holding b->l
        unlock(& b->l);
        schedule();
    }
    b->locked = true;
    p->blocked = false;             // correct the pessimistic assumption
    remove p from b->blocked_list;
    unlock(& b->l);
}

void release (bmutex_t *b) {
    lock(& b->l);
    for (each process q in b->blocked_list)
        q->blocked = false;         // wake every waiter; they race to acquire
    b->locked = false;
    unlock(& b->l);
}

Semaphores: like blocking mutexes, but use an integer to represent the number of resources available. A blocking mutex can be thought of as a binary semaphore: there is only one resource available.

Semaphores use two new functions instead of acquire and release: P (from the Dutch "proberen", like acquire, but it decrements the resource count and blocks while the count is zero) and V (from "verhogen", like release, but it increments the count).
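A minimal sketch of a counting semaphore in the style of the spinlock code above (the semaphore_t type name is illustrative, and this is not the lecture's implementation; for brevity it just reschedules instead of maintaining a blocked list):

typedef struct {
    lock_t l;       // spinlock protecting count
    int count;      // number of resources still available
} semaphore_t;      // illustrative type, not from the lecture

void P (semaphore_t *s) {   // like acquire: take one resource, block if none left
    for (;;) {
        lock(& s->l);
        if (s->count > 0)
            break;          // a resource is free; we still hold s->l
        unlock(& s->l);
        schedule();         // give up the CPU and try again later
    }
    s->count--;
    unlock(& s->l);
}

void V (semaphore_t *s) {   // like release: give back one resource
    lock(& s->l);
    s->count++;
    unlock(& s->l);
}

Back to readc: here is an implementation that protects the pipe with a blocking mutex.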


struct pipe {
    bmutex_t b;
    char buf[N];
    size_t r, w;    // total characters read and written so far
};

char readc(struct pipe *p) {
    for (;;) {
        acquire(& p->b);
        if (p->w != p->r)
            break;          // pipe is nonempty; exit still holding p->b
        release(& p->b);
    }
    char c = p->buf[p->r++ % N];    // circular buffer index
    release(& p->b);
    return c;
}

But this still wastes CPU: the loop keeps acquiring and releasing the mutex while polling until p->w != p->r, i.e. until the pipe becomes nonempty.

Condition Variables

A condition variable stands for a condition the caller cares about (e.g. "the pipe is not empty") and supports three operations:

wait (condvar_t *c, bmutex_t *b) -- precondition: the caller holds b. Atomically releases b and blocks until another thread notifies c, then reacquires b before returning.

notify (condvar_t *c) -- wakes up one thread waiting on c.

broadcast (condvar_t *c) -- wakes up all threads waiting on c.


struct pipe {
    bmutex_t b;
    char buf[N];
    size_t r, w;
    condvar_t noempty;      // notified when the pipe becomes nonempty
    condvar_t nofull;       // notified when the pipe becomes nonfull
};

char readc (struct pipe *p) {
    acquire (& p->b);
    while (p->w == p->r)                // pipe empty: sleep until a writer notifies us
        wait (& p->noempty, & p->b);    // releases b while blocked, reacquires it after
    char c = p->buf[p->r++ % N];        // circular buffer index
    notify (& p->nofull);               // reading made room, so wake a blocked writer
    release(& p->b);
    return c;
}
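writec is not shown above, but a matching sketch (assuming the same struct pipe, with w and r as running counts so that w - r == N means the buffer is full) would be:

void writec (struct pipe *p, char c) {
    acquire (& p->b);
    while (p->w - p->r == N)            // pipe full: sleep until a reader makes room
        wait (& p->nofull, & p->b);
    p->buf[p->w++ % N] = c;             // circular buffer index
    notify (& p->noempty);              // the pipe is now nonempty; wake a blocked reader
    release(& p->b);
}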

Deadlock

Consider the situation where some command(s) are piped into cat, whose output is piped into the next command. ....| cat |....

We can add a new system call, copy, that moves bytes directly from one pipe to another, avoiding an extra buffer copy and giving better performance.


void copy (struct pipe *in, struct pipe *out, int nbytes) {
    acquire(& in->b);
    acquire(& out->b);
    // ... copy up to nbytes bytes from in's buffer to out's buffer ...
    release(& in->b);
    release(& out->b);
}

But now we have to consider the following situation:

Process 1: copy(p, q, 1000)
Process 2: copy(q, p, 1000)

If both run at the same time, Process 1 can acquire p->b while Process 2 acquires q->b; each then blocks waiting for the mutex the other holds, and neither can ever proceed. This is deadlock.
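One standard way to break this particular deadlock (a sketch; not necessarily one of the lecture's proposed solutions) is lock ordering: always acquire the two mutexes in a fixed global order, e.g. by the pipes' addresses, so two concurrent copies can never wait on each other in a cycle:

void copy (struct pipe *in, struct pipe *out, int nbytes) {
    // Always lock the lower-addressed pipe first, regardless of copy direction.
    bmutex_t *first  = in < out ? & in->b  : & out->b;
    bmutex_t *second = in < out ? & out->b : & in->b;
    acquire(first);
    acquire(second);
    // ... copy up to nbytes bytes from in's buffer to out's buffer ...
    release(second);
    release(first);
}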

Deadlock is a race condition that requires four conditions to hold at once:

1. Mutual exclusion: a resource can be held by only one process at a time.
2. Hold and wait: a process holds one resource while waiting for another.
3. No preemption: a resource cannot be forcibly taken away from its holder.
4. Circular wait: there is a cycle of processes, each waiting for a resource the next one holds.

We only have to remove one of these to prevent deadlock. Two proposed solutions: