The FreeRADIUS server $Id: 15bac2a4c627c01d1aa2047687b3418955ac7f00 $
atomic_queue.c File Reference

Thread-safe queues.

#include <stdint.h>
#include <stdalign.h>
#include <inttypes.h>
#include <freeradius-devel/autoconf.h>
#include <freeradius-devel/io/atomic_queue.h>
#include <freeradius-devel/util/talloc.h>
#include <freeradius-devel/util/math.h>


Data Structures

struct  fr_atomic_queue_entry_t
 Entry in the queue.
 
struct  fr_atomic_queue_s
 Structure to hold the atomic queue.
 

Macros

#define acquire(_var)   atomic_load_explicit(&_var, memory_order_acquire)
 
#define atomic_int64_t   _Atomic(int64_t)
 
#define atomic_uint32_t   _Atomic(uint32_t)
 
#define atomic_uint64_t   _Atomic(uint64_t)
 
#define CACHE_LINE_SIZE   64
 
#define cas_decr(_store, _var)   atomic_compare_exchange_strong_explicit(&_store, &_var, _var - 1, memory_order_release, memory_order_relaxed)
 
#define cas_incr(_store, _var)   atomic_compare_exchange_strong_explicit(&_store, &_var, _var + 1, memory_order_release, memory_order_relaxed)
 
#define load(_var)   atomic_load_explicit(&_var, memory_order_relaxed)
 
#define store(_store, _var)   atomic_store_explicit(&_store, _var, memory_order_release)
 

Functions

fr_atomic_queue_t * fr_atomic_queue_alloc (TALLOC_CTX *ctx, size_t size)
 Create fixed-size atomic queue.
 
void fr_atomic_queue_debug (FILE *fp, fr_atomic_queue_t *aq)
 Dump an atomic queue.
 
void fr_atomic_queue_free (fr_atomic_queue_t **aq)
 Free an atomic queue if it's not freed by ctx.
 
bool fr_atomic_queue_pop (fr_atomic_queue_t *aq, void **p_data)
 Pop a pointer from the atomic queue.
 
bool fr_atomic_queue_push (fr_atomic_queue_t *aq, void *data)
 Push a pointer into the atomic queue.
 
size_t fr_atomic_queue_size (fr_atomic_queue_t *aq)
 

Detailed Description

Thread-safe queues.

Id: 7a4d7837f69247a69624f8b33786b9b1587175c1

This is an implementation of a bounded MPMC ring buffer with per-slot sequence numbers, described by Dmitry Vyukov.

Definition in file atomic_queue.c.
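
The description above is enough to sketch the technique in isolation. The following is a minimal, hypothetical C11 sketch of a Vyukov-style bounded MPMC queue: the types mpmc_t and slot_t and the power-of-two size assumption are illustrative, and the memory orderings follow Vyukov's published design rather than being taken from this file.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical types, loosely mirroring fr_atomic_queue_entry_t / fr_atomic_queue_s. */
typedef struct {
	_Atomic(int64_t)  seq;     /* slot state: who may touch the slot next */
	void             *data;
} slot_t;

typedef struct {
	size_t            size;    /* number of slots, assumed to be a power of two */
	slot_t           *slot;    /* slot[i].seq must be initialised to i before use */
	_Atomic(int64_t)  head;    /* next position a producer will claim */
	_Atomic(int64_t)  tail;    /* next position a consumer will claim */
} mpmc_t;

static bool mpmc_push(mpmc_t *q, void *data)
{
	for (;;) {
		int64_t  pos = atomic_load_explicit(&q->head, memory_order_relaxed);
		slot_t  *s   = &q->slot[pos & (int64_t)(q->size - 1)];
		int64_t  seq = atomic_load_explicit(&s->seq, memory_order_acquire);
		int64_t  dif = seq - pos;

		if (dif == 0) {         /* slot is free for this position */
			if (atomic_compare_exchange_weak_explicit(&q->head, &pos, pos + 1,
								  memory_order_relaxed, memory_order_relaxed)) {
				s->data = data;
				atomic_store_explicit(&s->seq, pos + 1, memory_order_release);
				return true;
			}
		} else if (dif < 0) {   /* slot still holds unconsumed data: queue is full */
			return false;
		}
		/* otherwise another producer claimed this position first: retry */
	}
}

static bool mpmc_pop(mpmc_t *q, void **p_data)
{
	for (;;) {
		int64_t  pos = atomic_load_explicit(&q->tail, memory_order_relaxed);
		slot_t  *s   = &q->slot[pos & (int64_t)(q->size - 1)];
		int64_t  seq = atomic_load_explicit(&s->seq, memory_order_acquire);
		int64_t  dif = seq - (pos + 1);

		if (dif == 0) {         /* slot holds data for this position */
			if (atomic_compare_exchange_weak_explicit(&q->tail, &pos, pos + 1,
								  memory_order_relaxed, memory_order_relaxed)) {
				*p_data = s->data;
				atomic_store_explicit(&s->seq, pos + (int64_t)q->size, memory_order_release);
				return true;
			}
		} else if (dif < 0) {   /* producer has not published this slot yet: queue is empty */
			return false;
		}
		/* otherwise another consumer claimed this position first: retry */
	}
}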


Data Structure Documentation

◆ fr_atomic_queue_entry_t

struct fr_atomic_queue_entry_t

Entry in the queue.

Note
This structure is cache line aligned for modern AMD/Intel CPUs. This is to avoid contention when the producer and consumer are executing on different CPU cores.

Definition at line 62 of file atomic_queue.c.

Data Fields
void * data
atomic_int64_t seq   Must be seq then data to ensure seq is 64-bit aligned for 32-bit address spaces.
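
As a minimal illustration of the note above, a C11 alignas declaration achieves the layout described; the example_ names are hypothetical and the real definition is the one at line 62 of atomic_queue.c.

#include <stdalign.h>
#include <stdint.h>

#define EXAMPLE_CACHE_LINE_SIZE 64

/* seq is declared before data so it stays 64-bit aligned even in 32-bit
 * address spaces; aligning the whole entry to a cache line keeps adjacent
 * entries from sharing a line, avoiding false sharing between the producer
 * and consumer cores. */
typedef struct {
	alignas(EXAMPLE_CACHE_LINE_SIZE) _Atomic(int64_t) seq;
	void *data;
} example_entry_t;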

◆ fr_atomic_queue_s

struct fr_atomic_queue_s

Structure to hold the atomic queue.

Note
DO NOT reorder these fields without understanding how alignas works and without maintaining the separation. The head and tail must be in different cache lines to reduce contention between producers and consumers. Cold data (size, chunk) can share a line, but must be kept separate from head, tail, and entry.

Definition at line 76 of file atomic_queue.c.

Data Fields
void * chunk   The start of the talloc chunk to pass to free. We need to play tricks to get aligned memory with talloc.
fr_atomic_queue_entry_t entry[]   The entry array, also aligned to ensure it's not in the same cache line as tail and size.
atomic_int64_t head   Position of the producer. Cache-line aligned so that it sits in a different cache line to tail, reducing memory contention.
size_t size   The length of the queue. This is static. Also needs to be cache aligned, otherwise it can end up directly after tail in memory and share a cache line.
atomic_int64_t tail   Position of the consumer. Cache-line aligned so that it sits in a different cache line to head, reducing memory contention. Reads may still need to occur from size whilst the producer is writing to tail.
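
A hedged sketch of the separation described in the note, reusing the hypothetical example_entry_t from the earlier sketch: the padding produced by alignas is what keeps the hot and cold fields apart. This is illustrative only, not the exact definition at line 76.

#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

#define EXAMPLE_CACHE_LINE_SIZE 64

typedef struct {
	/* Hot producer state: one cache line to itself. */
	alignas(EXAMPLE_CACHE_LINE_SIZE) _Atomic(int64_t) head;

	/* Hot consumer state: a different cache line from head. */
	alignas(EXAMPLE_CACHE_LINE_SIZE) _Atomic(int64_t) tail;

	/* Cold data may share a line, but not with head or tail. */
	alignas(EXAMPLE_CACHE_LINE_SIZE) size_t size;
	void *chunk;

	/* The entry array: because example_entry_t is itself cache-line aligned,
	 * the array starts on a fresh line, away from tail and size. */
	example_entry_t entry[];
} example_queue_t;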

Macro Definition Documentation

◆ acquire

#define acquire (   _var)    atomic_load_explicit(&_var, memory_order_acquire)

Definition at line 51 of file atomic_queue.c.

◆ atomic_int64_t

#define atomic_int64_t   _Atomic(int64_t)

Definition at line 44 of file atomic_queue.c.

◆ atomic_uint32_t

#define atomic_uint32_t   _Atomic(uint32_t)

Definition at line 45 of file atomic_queue.c.

◆ atomic_uint64_t

#define atomic_uint64_t   _Atomic(uint64_t)

Definition at line 46 of file atomic_queue.c.

◆ CACHE_LINE_SIZE

#define CACHE_LINE_SIZE   64

Definition at line 54 of file atomic_queue.c.

◆ cas_decr

#define cas_decr (   _store,
  _var 
)    atomic_compare_exchange_strong_explicit(&_store, &_var, _var - 1, memory_order_release, memory_order_relaxed)

Definition at line 49 of file atomic_queue.c.

◆ cas_incr

#define cas_incr (   _store,
  _var 
)    atomic_compare_exchange_strong_explicit(&_store, &_var, _var + 1, memory_order_release, memory_order_relaxed)

Definition at line 48 of file atomic_queue.c.

◆ load

#define load (   _var)    atomic_load_explicit(&_var, memory_order_relaxed)

Definition at line 50 of file atomic_queue.c.

◆ store

#define store (   _store,
  _var 
)    atomic_store_explicit(&_store, _var, memory_order_release)

Definition at line 52 of file atomic_queue.c.
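
These wrappers are thin aliases over C11 atomics. The fragment below is purely illustrative and only compiles where the macros are visible (inside atomic_queue.c); counter and snapshot are hypothetical names.

#include <stdint.h>

static _Atomic(int64_t) counter;

static void example_macro_use(void)
{
	int64_t snapshot;

	snapshot = load(counter);     /* atomic_load_explicit(&counter, memory_order_relaxed) */
	snapshot = acquire(counter);  /* the same load, but with memory_order_acquire */

	if (cas_incr(counter, snapshot)) {
		/* counter advanced from snapshot to snapshot + 1 with release semantics */
	}

	store(counter, snapshot);     /* atomic_store_explicit(&counter, snapshot, memory_order_release) */
}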

Function Documentation

◆ fr_atomic_queue_alloc()

fr_atomic_queue_t * fr_atomic_queue_alloc ( TALLOC_CTX *  ctx,
size_t  size 
)

Create fixed-size atomic queue.

Note
The queue must be freed either by freeing the ctx, or by calling the fr_atomic_queue_free function.
Parameters
[in] ctx    The talloc ctx to allocate the queue in.
[in] size   The number of entries in the queue.
Returns
  • NULL on error.
  • fr_atomic_queue_t *, a pointer to the allocated and initialized queue.

Definition at line 114 of file atomic_queue.c.

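A short usage sketch tying the functions on this page together; the 1024-entry size and the example_usage name are arbitrary, and error handling is reduced to the minimum.

#include <freeradius-devel/io/atomic_queue.h>
#include <freeradius-devel/util/talloc.h>

static void example_usage(TALLOC_CTX *ctx)
{
	fr_atomic_queue_t *aq;
	void *item;
	int value = 42;

	aq = fr_atomic_queue_alloc(ctx, 1024);   /* 1024 entries */
	if (!aq) return;

	if (!fr_atomic_queue_push(aq, &value)) {
		/* queue full: the caller decides whether to retry, back off, or drop */
	}

	if (fr_atomic_queue_pop(aq, &item)) {
		/* item == &value here */
	}

	fr_atomic_queue_free(&aq);               /* or simply free ctx instead */
}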

◆ fr_atomic_queue_debug()

void fr_atomic_queue_debug ( FILE *  fp,
fr_atomic_queue_t *  aq 
)

Dump an atomic queue.

Absolutely NOT thread-safe.

Parameters
[in] aq   The atomic queue to debug.
[in] fp   Where the debugging information will be printed.

Definition at line 408 of file atomic_queue.c.


◆ fr_atomic_queue_free()

void fr_atomic_queue_free ( fr_atomic_queue_t **  aq)

Free an atomic queue if it's not freed by ctx.

This function is needed because the atomic queue memory must be cache line aligned.

Definition at line 171 of file atomic_queue.c.


◆ fr_atomic_queue_pop()

bool fr_atomic_queue_pop ( fr_atomic_queue_t *  aq,
void **  p_data 
)

Pop a pointer from the atomic queue.

Parameters
[in]  aq       The atomic queue to retrieve data from.
[out] p_data   Where to write the data.
Returns
  • true on successful pop
  • false on queue empty

Definition at line 310 of file atomic_queue.c.

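Because the function returns false once the queue is empty, a consumer can drain whatever is currently queued with a simple loop; example_drain and handle_item are hypothetical names, and the header is the one listed at the top of this page.

#include <freeradius-devel/io/atomic_queue.h>

static void example_drain(fr_atomic_queue_t *aq, void (*handle_item)(void *))
{
	void *data;

	/* false means empty, so the loop exits once everything queued so far is consumed */
	while (fr_atomic_queue_pop(aq, &data)) handle_item(data);
}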

◆ fr_atomic_queue_push()

bool fr_atomic_queue_push ( fr_atomic_queue_t *  aq,
void *  data 
)

Push a pointer into the atomic queue.

Parameters
[in] aq     The atomic queue to add data to.
[in] data   The data to push.
Returns
  • true on successful push
  • false on queue full

Definition at line 187 of file atomic_queue.c.

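Since the queue is bounded, a producer should treat a false return as backpressure rather than a hard error. A minimal, hypothetical sketch (example_enqueue and the dropped counter are illustrative):

#include <freeradius-devel/io/atomic_queue.h>
#include <stdbool.h>

static bool example_enqueue(fr_atomic_queue_t *aq, void *msg, unsigned int *dropped)
{
	if (fr_atomic_queue_push(aq, msg)) return true;

	(*dropped)++;   /* queue full: record the drop (or retry later / signal backpressure) */
	return false;
}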

◆ fr_atomic_queue_size()

size_t fr_atomic_queue_size ( fr_atomic_queue_t *  aq)

Definition at line 372 of file atomic_queue.c.
