I’ve recently designed the IPC Messaging facility for caffeine. I’ve integrated the data packer module with the state machine module, plus the core IPC Messaging routines. The process itself is quite simple: you load a state machine, or build one at runtime, to process IPC messages, and you define an IPC messaging service, which holds the information your applications need to work within those facilities. I’m thinking of using a similar approach to build networking support for caffeine. Since the IPC Messaging facility is intended to work on top of the Process Pool facility, you need to define the proper service structure statically in your applications, and instead of random IPC keys, you must use a static one for each messaging service.
Each service is created with the caf_msg_svc_t structure. It holds a message seed of type caf_msg_t; the seed contains the IPC key and related IPC data, and it is used as a template for every other message sent through this facility. Each session has a u_long identifier, which is tracked by the master process in the session_inc member, and both snd_inc and rcv_inc keep count of the sent and received messages respectively. The machine member holds the state machine pointer, and its kind is defined by the type structure member. Any error produced while managing the messaging facility is held in errno_v, which lets you know which error was produced internally by those routines, so you can check it against your operating system's error codes. Each session is stored in the sessions deque member.
The idea is to process each session (as a list, that is) through the lstdl_map function, which uses the machine structure member: each session item is passed to the packet parser through the caf_packet_parse_machine() caffeine function, and the parsed packet is then handed to the next processing functions in the state machine. The only state machine that caffeine cannot convert from a normal state machine into a packet-processing one is the static machine; the other two, the pluggable and the dynamic ones, can both be converted. The difference between the state machines is that the static one is built on top of arrays, which means each of its states must be defined statically at compile time; the pluggable one is created at runtime and uses pointers instead of arrays; and the dynamic one uses a deque as its state storage.
Each session has a pointer to its parent service, assigned when the session is created. This allows the state machines and service processing routines to set up and identify which packets can be received through the bound IPC keys. The IPC key is bound to the session_id member, and the structure itself tracks the sent and received packet counts through the snd_cnt and rcv_cnt members respectively. The snd_id and rcv_id members hold the sender and receiver identifiers, which are used as endpoint identifiers. The service keeps receiving messages and identifies its senders through the snd_id identifier in its sessions.
On each created session, you must set up the client pid_t and the server pid_t for later use. The message size is also set up when the machine is used through the parsing facility. Each time a session is in use, it is locked through the lock member (yeah, I know, I'm using locking on it, but I'm thinking of replacing it with CAS operations), and the service structure member holds data about the message template and useful information about the service. Since IPC communications are intended to share small amounts of data, you cannot set up messages that are too wide.
On the other side, the implementation is not completely finished, but the initial algorithm is in place, and I will use it as the starting algorithm for the networking facility implementation. As further work, I will implement the ASN.1, XDR and SDXF binary formats to allow standardized data format processing for networking protocols, which will allow more interoperability with other systems. I know there is a lot of work ahead, but my work on caffeine is going well. Also, I'm not using text-based data formats such as JSON, YAML or XML, since those formats generate a huge overhead when processed, and it is really hard to implement thread safe text parsers, and all my work on caffeine is, so far, thread safe. Since its conception, caffeine has been built with multicore in mind…
The processing model, the use of state machines, is quite old. We can see them in device drivers, memory management, VFS implementations and many more examples. You only need to worry about designing the proper packets for communicating between your processes, writing them through the caffeine facilities and plugging those components together. The steps for building an IPC messaging implementation through caffeine are as follows:
- Define a packet parser for your implementation.
- Create the service in the master process.
- On fork, copy the service into each session, keeping the original in the master process.
- Begin processing the service sessions.
Since it is planned as a thread safe implementation (though I still must look at which "_r" suffixed POSIX and C99 functions to use), you can keep a thread working on the message processing. A similar model, but using lock-free and related techniques, is going to be implemented in the protocol stack for caffeine. Other features will be added in further work on it, always with multicore, distribution, parallelism, lock-free algorithms and other nice computing things in mind.
Honestly, I'm quite anxious to implement the other features and enhance the current ones in caffeine. The main problem is finding the time to develop it, and since I have already researched plenty of good features for it, that only makes me more anxious…