Testing inter-process semaphore code


Testing inter-process semaphore code

Stéphane Letz
Hi Fons,

I'm currently testing (on OSX) the inter-process semaphore code
example (sematest) you published some time ago.

It appears that the method you are using (allocating a shared
memory segment and using the "sem_init" API to allocate the
semaphore) does not work, because the "sem_init" function is not
implemented on OSX.

But switching to the "sem_open" API, which allows one to create a named
semaphore, seems to work perfectly, with performance similar to the
platform-dependent (and complex...) code that I currently use in
jackdmp (that is, low-level Mach semaphores that have to be shared
between processes).
(I have not yet tested this new code on Linux...)

Thus I guess the *same* code could then be used for jackdmp on OSX
and Linux for inter-client synchronization purposes. Is there any
special reason, that I don't see, to use the sem_init/shared-memory-segment
method in preference to the easier-to-use "sem_open" version?

Thanks

Stephane


-------------------------------------------------------
SF.Net email is Sponsored by the Better Software Conference & EXPO
September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA
Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf
_______________________________________________
Jackit-devel mailing list
[hidden email]
https://lists.sourceforge.net/lists/listinfo/jackit-devel

Re: Testing inter-process semaphore code

Jussi Laako
On Thu, 2005-08-11 at 18:13 +0200, Stéphane Letz wrote:

> But switching to the "sem_open" API that allows to create a named [...]
>
> Thus I guess the *same* code could be then used for jackdmp on OSX
> and Linux for inter-client synchronization purpose. [...]

sem_open() and shm_open() work OK at least on recent (2.6.x) Linux
kernels and recent (2.3.x) glibc with NPTL. LinuxThreads implementations
had some serious problems with (or non-existent support for) POSIX
conformance.

IMO, sem_open()/shm_open() is the cleanest way to do things. SysV API is
ugly.


--
Jussi Laako <[hidden email]>




Re: Testing inter-process semaphore code

Jack O'Quin
Jussi Laako <[hidden email]> writes:

> sem_open() and shm_open() work OK at least on recent (2.6.x) Linux
> kernels and recent (2.3.x) glibc with NPTL. LinuxThreads implementations
> had some serious problems (or non-existing support) with POSIX
> conformance.
>
> IMO, sem_open()/shm_open() is the cleanest way to do things. SysV API is
> ugly.

All true.  But we still need to support Linux 2.4 systems, and POSIX
shm does not work reliably there.
--
  joq



Re: Testing inter-process semaphore code

Fons Adriaensen
In reply to this post by Stéphane Letz
Hi Stephane,

> It appears that the method you are using, that is allocating a shared  
> memory segment and using the "sem_init" API to allocate the  
> semaphore, does not work because "sem_init" function is not  
> implemented on OSX.

sem_init() does not allocate a sema, it only initialises it. In the
same way, sem_destroy() does not deallocate, it only 'destroys' the sema,
whatever that means. AFAIK, all sem_init() does is initialise the data
in the sem struct pointed to, and there are no side effects: the system
does not maintain a 'list of all semas' or anything similar.

> But switching to the "sem_open" API that allows to create a named  
> semaphore seems to work perfectly, with performances that are similar  
> to the platform dependent (and complex..) code that I am currently  
> using in jackdmp version (that is mach low-level semaphore that have  
> to be shared between processes)
> (I did not yet tested this new code on Linux...)
>
> Thus I guess the *same* code could be then used for jackdmp on OSX  
> and Linux for inter-client synchronization purpose. It there any  
> special reason I don't see to use the sem_init/shared memory segment  
> method in favor of the more easy to use sem_open" version?

Yes there is, and I think it is quite fundamental. I'm sure the performance
of named semas will be the same; there is no reason why it should be
inferior. But using named semas does not solve the fundamental problem
with the pipes used in the current JACK implementation. The problem is not
that analysing a new process graph is complex (that could be delegated to a
lower-priority task) but that the RT client threads have to perform
non-RT-safe operations each time the process graph, and consequently the
'trigger chain', changes.

I'll try to explain this for the simple single-processor case, and let you
extrapolate it to the SMP case, as I do not know exactly how you implement
the required logic there.

Each client 'i' (and also JACK's engine thread) needs two semaphores: one it
waits on, W[i], and one it signals when it is ready, S[i]. Of course
W and S are the same set, so in total we have N+1 semas for N clients.
If the semas exist in memory that is shared by jackd and all clients, we
could put W[i] and a pointer to S[i] in a per-client shared memory
region. Whenever a new process graph has been analysed and there are no
outstanding (dis)connection requests, we only have to change the S[i]
pointer in each client's region in order to implement the new graph.

Now imagine we are using named semas. Then each client thread would possibly
have to close its current S[i] and open a new one. This is exactly the
same problem we have now with the pipes: the RT threads have to perform
a non-RT-safe system call to the file system where the named objects
live. This makes a transparent process graph change (i.e. without
interrupting the processing) impossible.

I assume that for the SMP case, where trigger conditions will be more
complex, you also have a solution that does not require switching
back to a 'master' thread. If that is the case, the same problem will
exist.

For similar reasons I think that the connect/disconnect API should
be asynchronous: a client posts a request but that is not necessarily
executed when the call returns. Making the client wait either requires
a second thread in each client, or will disrupt the RT processing.
Also nothing is gained by making this synchronous, as any client
that is watching connections still needs to be prepared to handle
asynchronous operations from other clients. Making an exception for
the caller does not simplify the problem, it just complicates it.


--
Fons







Re: Testing inter-process semaphore code

Stéphane Letz

On 11 Aug 05, at 22:06, Fons Adriaensen wrote:

> Hi Stephane,
>
> [...]
>
> I assume that for the SMP case, where trigger conditions will be more
> complex, you also have a solution that does not require switching
> back to a 'master' thread. If that is the case, the same problem will
> exist.

In my SMP implementation, a two-thread model is used on the client
side: one RT thread that is never stopped, and one non-RT thread for all
notification handling. When a new client appears, an "AddClient"
event with the name of the new client is sent to all running
clients, which will then "connect" to the new client's semaphore using its
name. The fact that this is not RT is not a problem (it is actually
probably non-RT also in my current low-level Mach-semaphore-based
implementation...)


>
> For similar reasons I think that the connect/disconnect API should
> be asynchronous: a client posts a request but that is not necessarily
> executed when the call returns. Making the client wait either requires
> a second thread in each client, or will disrupt the RT processing.
> Also nothing is gained by making this synchronous, as any client
> that is watching connections still needs to be prepared to handle
> asynchronous operations from other clients. Making an exception for
> the caller does not simplify the problem, it just complicates it.

In my implementation, a connection call is "asynchronous" by nature,
since the new graph state becomes effective at the *next* audio cycle,
where the RT thread atomically switches to the new graph state
(remember my LAC talk about the lock-free stuff...)

But... after doing this first version, I finally changed it to make
the connection operation appear "synchronous", by having the server
wait for the next graph state to be effective before continuing.

Imagine a connection-manager client that uses a GraphReorder callback
to update its connection-state GUI. We have, for example:

Client
======
Connect (A, B)

Server
======
Connect (A, B)
Notify GraphReorder


Client
======
GraphReorder callback : read connection state


If the server's Connect (A, B) operation is asynchronous, then the
GraphReorder notification will be sent before the new state is
effective for the RT thread, and the client's GraphReorder callback may
see the old state instead of the new one...

Stephane




