web developer & system programmer

coder . cl

ramblings and thoughts on programming...


what is thread-safe?

published: 26-11-2011 / updated: 26-11-2011
posted in: c, development, java, programming, tips
by Daniel Molina Wegener

Thread-safe implies that a given resource will not be used at the same time by two or more threads or processes. By resource, we must understand a memory block or a similar kind of shared state. If you create a list that is shared between threads to store data, you will need a way to write to that list — appending nodes or removing nodes — without two threads writing it simultaneously. This implies the usage of thread-safety techniques that will bring you sequential access to the list, allowing sequential writes instead of concurrent writes that will probably corrupt the list or crash your application.

There are various kinds of techniques that can be used to create thread-safe algorithms and data structures. They fall into two main classes, called blocking algorithms and non-blocking algorithms. In a blocking algorithm, locks are used to encapsulate the fragments of code where you must not do concurrent writes, avoiding race conditions; such a fragment is called a critical section. In non-blocking algorithms you do not use locks; instead you use atomic operations to switch between the shared resources, for example using a technique called compare-and-swap, or CAS.

A race condition is an event that occurs when two or more threads access the same resource at the same time, leading to data corruption or system faults. The most classic locking constructs in Java and C are the synchronized keyword in Java and the pthread_mutex_lock() call in C. In Java we protect our critical section using synchronized as follows.

class SingletonSampleImpl {

    private static Object singletonMutex = new Object();
    private static SingletonSampleImpl instance = null;

    /* a private constructor prevents other classes from
       instantiating the singleton directly */
    private SingletonSampleImpl() {
    }

    public static SingletonSampleImpl getInstance() {
        synchronized (singletonMutex) {
            /* when we instantiate the singleton
               we have a critical section to protect
               from race conditions */
            if (instance == null) {
                instance = new SingletonSampleImpl();
            }
        }
        return instance;
    }
}

The singletonMutex object acts as a mutex, where a mutex — a term that comes from mutual exclusion — holds a flag that indicates to the other threads whether they may jump into the critical section and execute the code. So a mutex is essentially a flag that indicates to the program whether the resource is busy or not; a semaphore generalizes the same idea with a counter, allowing a limited number of threads in at once. Using locks, or blocking algorithms, is expensive on most systems, depending on how they are used. For example, it is very expensive to use a blocking algorithm to protect a socket, where each thread must wait for its chance to write, on top of the inherent network wait time.

On the non-blocking side, we have three types of progress guarantees: wait-free, lock-free and obstruction-free. Wait-free is the strongest, but the hardest to implement: every thread completes its operation in a bounded number of steps, never waiting on another thread for reading or writing. Lock-free usually relies on CAS, using a compare-and-swap operation to protect the critical update without locking, retrying when another thread interferes. Obstruction-free is the weakest guarantee: it uses no locks either, but a thread is only guaranteed to make progress when it runs in isolation, so under contention threads can repeatedly undo each other's work — a situation known as livelock.

The atomic operations used in the non-blocking techniques operate at a very low level of the processor, because they must execute without interference from any other thread. For example, to work with atomic operations the GCC compiler provides some special built-in extensions.

