PulseAudio and Jack : some comments


PulseAudio and Jack : some comments

Stéphane Letz
Hi Lennart,

I've just read your "PulseAudio and Jack" comparison. You say: "can we marry the two approaches? Yes, we probably can, MacOS has a unified approach for both uses."

The point (that was somewhat already raised by Paul in the http://0pointer.de/blog/projects/pulse-glitch-free.html post) is that OSX does something similar to the glitch-free model in user space and kernel mode, but without using a daemon. My understanding is that part is done in kernel mode (basically the "mixing" stuff that uses floating-point computation... considered evil on Linux, yes?) and part is done in user space, probably the "compute the next wake-up date" part and so on. The first nice result is that any CoreAudio application can use its own buffer size *and* the CoreAudio layer takes care of sharing the card access and doing the mixing. The second nice thing is that the JACK server can run on top of that, running the application graph, doing its own mixing, then outputting the result to the CoreAudio layer. The third nice result is that obviously the whole JACK application graph *and* regular (non-JACKified) CoreAudio applications can run side by side. OSX does not have a way to "rewind" the audio pipeline as you want to have in PA for the "2 second buffer" application case.

So basically the OSX model is layered a different way, and then by construction, low-latency and higher-latency applications (not the 2 second case, I agree...) can (possibly) run together. This is the way I would see a possible "marry the two approaches" future. But if by design PA has to stay a daemon, and if the "pulse-glitch-free" part of it cannot be moved into a lower layer (like ALSA), then yes, "we should put the focus on cooperation instead of amalgamation" is the only reasonable solution.

Stéphane
_______________________________________________
Jack-Devel mailing list
[hidden email]
http://lists.jackaudio.org/listinfo.cgi/jack-devel-jackaudio.org

Re: PulseAudio and Jack : some comments

salsaman-3
On Fri, May 7, 2010 14:16, Stéphane Letz wrote:
> Hi Lennart,
>
> I've just read your "PulseAudio and Jack" comparison. You say: "can we
> marry the two approaches? Yes, we probably can, MacOS has a unified
> approach for both uses."
>

Where can I read this?
> The point (that was somewhat already raised by Paul in
> http://0pointer.de/blog/projects/pulse-glitch-free.html post) is that OSX
> does something similar to glitch-free model in user-space and kernel mode,


Interesting. I just read that, it says:

We provide "zero-latency". Each client can rewrite its playback buffer at
any time, and this is forwarded to the hardware, even if this means that
the sample currently being played needs to be rewritten. This means much
quicker reaction to user input, a more responsive user experience.


How does a client do this (in jack and in pulse)? It would be very nice
to do this after a seek/reposition in the client code.



> but not using a daemon. My understanding is that part is done in kernel
> mode (basically the "mixing" stuff that uses floating point computation...
> considered evil on Linux yes?) and part is done in user-space, probably


Putting audio mixing in the kernel *is* evil... kernels should be hardware
independent. It seems as ridiculous to me as the Windows approach of
putting video drivers in kernel space... yes, you might get a slight speed-up,
but a misbehaving client can bring the entire kernel down. Besides
that, what if you want some non-standard mixing, e.g. a mixer that does
an FFT at the same time, or applies some psycho-acoustic model? It wouldn't
really help you there.

I much prefer the approach of liboil... provide hardware-optimised versions
of inner loops, and keep such things in userspace.


Salsaman
http://lives.sourceforge.net



Re: PulseAudio and Jack : some comments

Stéphane Letz
>>
>
> Where can I read this ?
>
>
>

http://0pointer.de/blog/projects/when-pa-and-when-not.html

>
>
>> The point (that was somewhat already raised by Paul in
>> http://0pointer.de/blog/projects/pulse-glitch-free.html post) is that OSX
>> does something similar to glitch-free model in user-space and kernel mode,
>
>
> Interesting. I just read that, it says:
>
> We provide "zero-latency". Each client can rewrite its playback buffer at
> any time, and this is forwarded to the hardware, even if this means that
> the sample currently being played needs to be rewritten. This means much
> quicker reaction to user input, a more responsive user experience.
>
>
> How does a client do this (in jack and in pulse) ? It would be very nice
> to do this after a seek/reposition in the client code.

JACK will not do that. I don't know exactly how PA does it.

Stéphane

Re: PulseAudio and Jack : some comments

Lennart Poettering-16
In reply to this post by Stéphane Letz
On Fri, 07.05.10 14:16, Stéphane Letz ([hidden email]) wrote:

> Hi Lennart,

Heya,

> I've just read your "PulseAudio and Jack" comparison. You say: "can
> we marry the two approaches? Yes, we probably can, MacOS has a unified
> approach for both uses."
>
> The point (that was somewhat already raised by Paul in
> http://0pointer.de/blog/projects/pulse-glitch-free.html post) is that
> OSX does something similar to glitch-free model in user-space and
> kernel mode, but not using a daemon. My understanding is that part is
> done in kernel mode (basically the "mixing" stuff that uses floating
> point computation... considered evil on Linux yes?) and part is done
> in user-space, probably the "compute the next wake-up date" part and
> so on. The first nice result is that any CoreAudio application can use
> its own buffer size *and* the CoreAudio layer takes care of sharing the
> card access and doing the mixing. The second nice thing is that the JACK server can
> run on top of that, running the applications graph, doing its own
> mixing, then outputting the result to CoreAudio layer. The third nice
> result is that obviously the whole JACK applications graph *and*
> regular (non JACKIFIED) CoreAudio applications can run side by
> side. OSX does not have a way to "rewind" the audio pipeline as you
> want to have in PA for those " 2 second buffer" applications case.

Yes, things are layered differently on MacOS.

Well, the one thing I am always wondering about is whether having the
virtualized time is really always a good idea for Jack, since sound card
IRQs happen to be accurate and the virtualized time is not (and probably
never will be as accurate, simply because we cannot read accurate timing
information from many cards we have to deal with). So one could turn
all of this around and say that Jack is well advised to use the sound
card IRQs, and should not try to play games with virtualized time. But
that's something you need to figure out. (Note that this was the
biggest thing we burned our fingers with in PA: the instability of the
timers caused us to run into drop-outs we had calculated would never
happen to us.)
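For readers unfamiliar with the term, the "virtualized time" at issue here boils down to extrapolating the playback position between sound-card interrupts from the last known (timestamp, frame-count) pair and the nominal sample rate; the estimate is only as good as the card's timing info, which is exactly the instability Lennart mentions. A minimal sketch (hypothetical helper, not PA's actual implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of "virtualized time": between sound-card interrupts the
 * current playback position is extrapolated from the last known
 * (timestamp, frame-count) pair and the nominal sample rate.
 * Accuracy depends entirely on how truthful the card's timing
 * information is. Hypothetical helper, not PA's actual code. */
typedef struct {
    uint64_t last_frames;  /* frames played as of the last update  */
    uint64_t last_time_us; /* monotonic time of that update, in us */
    uint32_t rate;         /* nominal sample rate, frames/second   */
} clock_model;

uint64_t estimate_frames(const clock_model *c, uint64_t now_us)
{
    uint64_t elapsed_us = now_us - c->last_time_us;
    return c->last_frames + elapsed_us * c->rate / 1000000u;
}
```

If the card reports a slightly wrong rate or a jittery timestamp, the extrapolated position drifts, and a wake-up scheduled from it can miss the real deadline.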

> So basically the OSX model is layered a different way, and then by
> construction, low latency and higher latencies applications (not the 2
> second case I agree...) can (possibly) run together. This is the way I
> would see a possible "marry the two approaches" possible future. But
> if by design of PA has to stay a daemon and if the "pulse-glitch-free"
> part of it cannot be moved in a lower layer (like ALSA), then yes the
> "we should put the focus on cooperation instead of amalgamation:" is
> the only reasonable solution.

I am not really sure that I agree that we should move to a more
MacOS-like design in this respect. There is nothing wrong with the current
ALSA design of leaving mixing and all the complex parts of audio
scheduling to userspace. The only drawback of the current design is
indeed that it does not allow two sound servers to run in parallel. But
I see no burning reason even to allow that, since we can simply
replace the sound server cooperatively on the low-level device, like I
suggested.

And finally, knowing how complex all that can become if you need to deal
with 2 s buffers, with resampling/reformatting, and with a zero-copy
design, moving that even partially into the kernel is a huge amount of
work. A design that doesn't lose any of these three features and lives
in the kernel would be a very complex design.

To summarize my opinion: I would not be against doing this, and would be
happy to move PA in this direction, but I see neither the immediate
pressure to do it, nor do I see anyone stepping up to do the massive
kernel work necessary here. Taking a smaller bite out of the apple,
i.e. just moving the timer interpolation into the kernel, appears more
realistic to me, simply because the RT/profiling people might do this
for us ;-)

Lennart

--
Lennart Poettering                        Red Hat, Inc.
lennart [at] poettering [dot] net
http://0pointer.net/lennart/           GnuPG 0x1A015CC4

Re: PulseAudio and Jack : some comments

Lennart Poettering-16
In reply to this post by salsaman-3
On Fri, 07.05.10 14:58, [hidden email] ([hidden email]) wrote:

> How does a client do this (in jack and in pulse) ? It would be very nice
> to do this after a seek/reposition in the client code.

On every write() call you pass an index. If you pass the current
hardware playback index as the index then you can achieve "zero latency",
i.e. change the very next sample that is being played back. (Don't
assume this is really zero latency, though, since it is not realistic to
always stay one sample in front of the playback index, which is why I
always put it in quotation marks.)
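The index-on-write mechanism can be illustrated with a toy model of a rewindable playback buffer (deliberately not the PulseAudio API; a real client would pass the offset to pa_stream_write() with an appropriate seek mode):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy model: the "hardware" advances a read index, and the client may
 * overwrite any sample that has not been consumed yet -- including the
 * very next one, which is the "zero-latency" rewrite described above.
 * This is NOT the PulseAudio API, just an illustration of the idea. */
typedef struct {
    int16_t samples[256];
    size_t  read_idx;   /* next sample the "hardware" will fetch */
} play_buffer;

/* Client-side write at an absolute index; fails (returns -1) if the
 * position has already been consumed or is out of range. */
int pb_write_at(play_buffer *b, size_t pos, const int16_t *data, size_t n)
{
    if (pos < b->read_idx ||
        pos + n > sizeof b->samples / sizeof b->samples[0])
        return -1;  /* too late, or out of range */
    memcpy(&b->samples[pos], data, n * sizeof *data);
    return 0;
}

/* "Hardware" consumes one sample. */
int16_t pb_read_next(play_buffer *b)
{
    return b->samples[b->read_idx++];
}
```

The caveat in the mail maps directly onto this model: writing exactly at the read index works in the toy, but a real client cannot reliably stay one sample ahead of the hardware.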

> Putting audio mixing in the kernel *is* evil... kernels should be hardware
> independent. It seems as ridiculous to me as the Windows approach of
> putting video drivers in kernel space... yes, you might get a slight speed-up,
> but a misbehaving client can bring the entire kernel down. Besides
> that, what if you want some non-standard mixing, e.g. a mixer that does
> an FFT at the same time, or applies some psycho-acoustic model? It wouldn't
> really help you there.

Well, we actually have been moving big parts of the X drivers into the
kernel. While that usually covers parts like graphics scheduling and
memory management, and not the actual rendering operations, it is
admittedly a bit different from the request to put audio mixing in the
kernel.

But yepp, the point you raise about non-standard mixing is a valid
one. An often-requested feature for PA is that we apply DRC on the
mixing result to avoid clipping. It is not realistic to do all of that in
the kernel.
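As an illustration of what such userspace mixing plus dynamic range control could look like, here is a minimal sketch with a cubic soft clipper standing in for real DRC (not PA's actual mixing code):

```c
#include <assert.h>
#include <stddef.h>

/* Mix N float streams sample by sample, then tame the sum with a cubic
 * soft clipper so the result stays in [-1, 1] without hard clipping.
 * A sketch of userspace mixing + DRC, not PulseAudio's actual code. */
static float soft_clip(float x)
{
    if (x >  1.5f) return  1.0f;
    if (x < -1.5f) return -1.0f;
    return x - (4.0f / 27.0f) * x * x * x;  /* smooth up to +/-1.5 */
}

void mix_streams(const float *const *in, size_t nstreams,
                 float *out, size_t nframes)
{
    for (size_t f = 0; f < nframes; f++) {
        float sum = 0.0f;
        for (size_t s = 0; s < nstreams; s++)
            sum += in[s][f];        /* naive sum of all clients */
        out[f] = soft_clip(sum);    /* DRC stage on the mix bus */
    }
}
```

This kind of per-sample, policy-laden float processing is exactly what would be awkward to express in kernel code, which is the point being made above.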

> I much prefer the approach of liboil...provide hardware optimised versions
> of inner loops, and keep such thing in userspace.

liboil is dead.

Lennart

--
Lennart Poettering                        Red Hat, Inc.
lennart [at] poettering [dot] net
http://0pointer.net/lennart/           GnuPG 0x1A015CC4

Re: PulseAudio and Jack : some comments

salsaman-3
On Fri, May 7, 2010 15:33, Lennart Poettering wrote:

> On Fri, 07.05.10 14:58, [hidden email] ([hidden email]) wrote:
>
>> How does a client do this (in jack and in pulse) ? It would be very nice
>> to do this after a seek/reposition in the client code.
>
> On every write() call you pass an index. If you pass the current
> hardware playback index as index then you can achieve "zero-latency",
> i.e. change the very next sample that is being played back. (Don't
> assume this is really zero latency though, since it is not realistic to
> always stay one sample in front of the playback index. Which is why I
> always put this into "quotation marks").
>

OK, but I meant from the point of view of a PulseAudio client application.


>> Putting audio mixing in the kernel *is* evil... kernels should be hardware
>> independent. It seems as ridiculous to me as the Windows approach of
>> putting video drivers in kernel space... yes, you might get a slight speed-up,
>> but a misbehaving client can bring the entire kernel down. Besides
>> that, what if you want some non-standard mixing, e.g. a mixer that does
>> an FFT at the same time, or applies some psycho-acoustic model? It wouldn't
>> really help you there.
>
> Well, we actually have been moving big parts of the X drivers into the
> kernel. While that usually covers parts like graphics scheduling and
> memory management and not actually any operations it is admittedly a
> bit different than the request for putting audio mixing in the
> kernel though.
>
> But yepp, the point you raise about non-standard mixing is a valid
> one. An often-requested feature for PA is that we apply DRC on the
> mixing result to avoid clipping. It is not realistic to do all of that in
> the kernel.
>
>> I much prefer the approach of liboil...provide hardware optimised
>> versions
>> of inner loops, and keep such thing in userspace.
>
> liboil is dead.
>

Why do you say that? A new version was released not so long ago, and
there is a new project to create a scripting language using liboil (I
forget the name now).

Salsaman.
http://lives.sourceforge.net


Re: PulseAudio and Jack : some comments

Stéphane Letz
In reply to this post by Lennart Poettering-16

Le 7 mai 2010 à 15:33, Lennart Poettering a écrit :

> On Fri, 07.05.10 14:58, [hidden email] ([hidden email]) wrote:
>
>> How does a client do this (in jack and in pulse) ? It would be very nice
>> to do this after a seek/reposition in the client code.
>
> On every write() call you pass an index. If you pass the current
> hardware playback index as index then you can achieve "zero-latency",
> i.e. change the very next sample that is being played back. (Don't
> assume this is really zero latency though, since it is not realistic to
> always stay one sample in front of the playback index. Which is why I
> always put this into "quotation marks").
>
>> Putting audio mixing in the kernel *is* evil... kernels should be hardware
>> independent. It seems as ridiculous to me as the Windows approach of
>> putting video drivers in kernel space... yes, you might get a slight speed-up,
>> but a misbehaving client can bring the entire kernel down. Besides
>> that, what if you want some non-standard mixing, e.g. a mixer that does
>> an FFT at the same time, or applies some psycho-acoustic model? It wouldn't
>> really help you there.
>
> Well, we actually have been moving big parts of the X drivers into the
> kernel. While that usually covers parts like graphics scheduling and
> memory management and not actually any operations it is admittedly a
> bit different than the request for putting audio mixing in the
> kernel though.
>
> But yepp, the point you raise about non-standard mixing is a valid
> one. An often-requested feature for PA is that we apply DRC on the
> mixing result to avoid clipping. It is not realistic to do all of that in
> the kernel.
>
>> I much prefer the approach of liboil...provide hardware optimised versions
>> of inner loops, and keep such thing in userspace.
>
> liboil is dead.
>
> Lennart


http://developer.apple.com/mac/library/documentation/DeviceDrivers/Conceptual/WritingAudioDrivers/ImplementDriver/ImplementDriver.html#//apple_ref/doc/uid/TP30000732-DontLinkElementID_14

Checking the OSX audio driver documentation again, they definitely do "some" processing in kernel mode (see the "clipOutputSamples" description, for instance...). Whether this is good or not for Linux is another story. I was just pointing out that changing the audio stack layering a bit would possibly help.

Stéphane

Re: PulseAudio and Jack : some comments

Adrian Knoth
In reply to this post by Lennart Poettering-16
On Fri, May 07, 2010 at 03:23:50PM +0200, Lennart Poettering wrote:

> Heya,

Hi!

[PA and jack]


JFTR, I once hacked a prototype of the "jack on top of PA" idea. It was
just a quick&dirty thing intended for not-so-low-latency mixing with
ardour (in post production, latency normally isn't important).

   http://adi.loris.tv/jack-on-pulse.c


The approach was to run jackd -d dummy and then start jack-on-pulse to
provide a jack client that outputs to PA.

The original idea was jackd -d pulseaudio, but I was too busy to
implement it as a jackd backend. ;)

I'm not sure if it's a good idea to have two time domains (dummy jack's
notion of time and PA's notion of time). I don't know how low PA's
latency settings can go, or whether such an approach would make sense at all.

Besides latencies, jack-on-pulse would probably want to read/write in
the card's native sample format and avoid any further conversion to
allow for unaltered signals in a studio setup.


Just my 0.02EUR

--
mail: [hidden email]   http://adi.thur.de        PGP/GPG: key via keyserver

Re: PulseAudio and Jack : some comments

salsaman-3
On Fri, May 7, 2010 16:08, Adrian Knoth wrote:

> On Fri, May 07, 2010 at 03:23:50PM +0200, Lennart Poettering wrote:
>
>> Heya,
>
> Hi!
>
> [PA and jack]
>
>
> JFTR, I once hacked a prototype of the "jack on top of PA" idea. It was
> just a quick&dirty thing intended for not-so-low-latency mixing with
> ardour (in post production, latency normally isn't important).
>
>    http://adi.loris.tv/jack-on-pulse.c
>
>
> The approach was to run jackd -d dummy and then start jack-on-pulse to
> provide a jack client that outputs to PA.
>
> The original idea was jackd -d pulseaudio, but I was too busy to
> implement it as a jackd backend. ;)
>
> I'm not sure if it's a good idea to have two time domains (dummy jack's
> notion of time and PA's notion of time). I don't know how low PA's
> latency settings can be and if such an approach would make sense at all.
>
> Besides latencies, jack-on-pulse would probably want to read/write in
> the card's native sample format and avoid any further conversion to
> allow for unaltered signals in a studio setup.
>
>
> Just my 0.02EUR
>


Seems very cool... but I am just wondering, out of interest: wouldn't it
make more sense to do it the other way around, i.e. run pulse on top of
jack? It would seem like a better idea to keep the latency low as near to
the soundcard as possible, and then implement higher-latency layers on top
of this.

Salsaman.
http://lives.sourceforge.net


Re: PulseAudio and Jack : some comments

Adrian Knoth
On Fri, May 07, 2010 at 04:37:26PM +0200, [hidden email] wrote:

[jack on pulse]
> Seems very cool...but I am just wondering....out of interest, wouldn't it
> make more sense to do it the other way around, i.e. run pulse on top of
> jack?

We already have this: module-jack-{sink,source} (part of your PulseAudio
installation).
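For reference, loading these modules is a couple of lines in PA's startup script (a sketch; module arguments and availability vary between PulseAudio versions):

```
# /etc/pulse/default.pa -- route PulseAudio through a running JACK server
load-module module-jack-sink
load-module module-jack-source
```

With these loaded, regular PA clients appear to JACK as an ordinary client, which is the "pulse on top of jack" arrangement asked about above.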


Cheerio

--
mail: [hidden email]   http://adi.thur.de        PGP/GPG: key via keyserver

Selbst ißt der Mann!

Re: PulseAudio and Jack : some comments

Ed Wildgoose-2
In reply to this post by Stéphane Letz
Possibly a dumb question, but do modern virtualised memory architectures
offer the possibility of having a kind of virtual DMA buffer?

The ideal interface for an audio device seems to be a DMA buffer
where the client is basically free to write to any sample whenever it
feels like it, and the audio card simply reads the DMA buffer as it needs
it. This way the client can fill up a huge buffer ahead of time, but
still have the option to change the very next sample before it's played.
The difficulty seems to be how to mix multiple client DMA buffers down to the
audio card's DMA buffer... So the question is really whether there is any
magic provided by modern architectures such that when you write to some
virtualised location, the write can be captured and copied down to the
audio card's DMA buffer?

Alternatively, I guess it would be possible to provide a set of
ring-buffer operations that modify the audio card's ring buffer with implied
mixdown/resampling, etc.
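Such an API could look roughly like this (a hypothetical sketch, not an existing ALSA or PA interface): each client keeps its own ring buffer, and the mixdown happens lazily when the card-facing side pulls a frame, so clients can still rewrite their not-yet-read samples up to that moment.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of "ring-buffer operations with implied mixdown": each client
 * owns a small ring buffer, and the mix into the card's stream happens
 * as late as possible, when the card-facing side pulls the next frame.
 * Saturating 16-bit addition stands in for a real mixer. Hypothetical
 * code, not an existing ALSA/PA interface. */
#define RB_LEN 64

typedef struct {
    int16_t data[RB_LEN];
    size_t  read_idx;
} client_rb;

static int16_t sat_add16(int32_t a, int32_t b)
{
    int32_t s = a + b;
    if (s >  32767) return  32767;
    if (s < -32768) return -32768;
    return (int16_t)s;
}

/* Pull one mixed frame: the mix happens only now, so any client may
 * still have rewritten its not-yet-read samples before this call. */
int16_t pull_mixed_frame(client_rb *clients, size_t nclients)
{
    int32_t mix = 0;
    for (size_t i = 0; i < nclients; i++) {
        client_rb *c = &clients[i];
        mix = sat_add16(mix, c->data[c->read_idx]);
        c->read_idx = (c->read_idx + 1) % RB_LEN;
    }
    return (int16_t)mix;
}
```

The deferred mix is the whole point: nothing is committed to the card until the last pull, which is the software analogue of the "change the very next sample" property of a DMA buffer.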

What proportion of audiocards/drivers actually expose a DMA interface
though?

Ed W

Re: PulseAudio and Jack : some comments

Paul Davis


On Fri, May 7, 2010 at 12:57 PM, Ed W <[hidden email]> wrote:
> Possibly a dumb question, but do modern virtualised memory architectures offer the possibility to have kind of virtual DMA buffers?

all modern PCI based audio interfaces work this way.

what they do not share in common is a guarantee over how they *use* the buffer that they share with the host CPU(s).
 
> The ideal interface for an audio interface seems to be a DMA buffer where the client is basically free to write to any sample whenever they feel like and the audio card simply reads the DMA buffer as it needs it. This way the client can fill up a huge buffer ahead of time, but still have the option to change the very next sample before it's played.

it doesn't need to be a DMA-based buffer to do this. it can be in user-space and be shipped across "at the last possible moment". obviously, when "the last possible moment" occurs differs if it's a DMA buffer.

> The difficulty seems to be how to mix multiple DMA buffers down to the audiocard DMA buffer... So the question is really whether there is any magic provided by modern architectures such that when you write to some virtualised location the write can be captured and copied down to the audiocard DMA buffer?

no. but this is essentially what dmix does in ALSA. it's really very clever, just a bit too clever for its own good, alas.

> What proportion of audiocards/drivers actually expose a DMA interface though?

they don't "expose" a DMA interface. they use DMA to access memory owned by the host.


Re: PulseAudio and Jack : some comments

Ed Wildgoose-2
On 07/05/2010 18:11, Paul Davis wrote:
 >
 > it doesn't need to be a DMA-based buffer to do this. it can be in
 > user-space and be shipped across "at the last possible moment".
 > obviously, when "the last possible moment" occurs differs if it's a
 > DMA buffer.

Sure - this was the crux of the question really.  *If* the hardware
could help you monitor changes to the "virtual" DMA buffer(s) that the
clients access, then the sound server could mix the client ring buffers
down to the audio card's DMA buffer as late as possible.

This would appear to require hardware support though, i.e. some kind of
notification when a chunk of memory is altered? Does such support exist
in general?

Alternatively, wrapping the ring-buffer access in some kind of API sort of
works?

My thought was more that if each client has a very general interface
(like a known-size ring buffer) then mixing down for the audio card can be
done largely on demand and quite late in the process? I think the
interesting part of a DMA-style interface is that you can *change* the
data already supplied previously - for some applications this is very useful.

However, alone it doesn't appear to solve all the problems tackled by
Jack. You would still appear to be left with a need for high-resolution
timers that can wake up the client at regular intervals.

Ed W


Re: PulseAudio and Jack : some comments

Lennart Poettering-16
In reply to this post by salsaman-3
On Fri, 07.05.10 15:56, [hidden email] ([hidden email]) wrote:

> On Fri, May 7, 2010 15:33, Lennart Poettering wrote:
> > On Fri, 07.05.10 14:58, [hidden email] ([hidden email]) wrote:
> >
> >> How does a client do this (in jack and in pulse) ? It would be very nice
> >> to do this after a seek/reposition in the client code.
> >
> > On every write() call you pass an index. If you pass the current
> > hardware playback index as index then you can achieve "zero-latency",
> > i.e. change the very next sample that is being played back. (Don't
> > assume this is really zero latency though, since it is not realistic to
> > always stay one sample in front of the playback index. Which is why I
> > always put this into "quotation marks").
> >
>
> OK, but I meant from the point of view of a pulse audio client
> application.

Yes. That was what I was talking about.

> > liboil is dead.
> >
>
> Why do you say that ? A new version was released not so long ago, and
> there is a new project to create a scripting language using liboil (I
> forget the name now).

I said that because it is true. David Schleef is now working on orc as a
replacement for liboil.

http://www.schleef.org/blog/2009/05/31/orc-040/

Lennart

--
Lennart Poettering                        Red Hat, Inc.
lennart [at] poettering [dot] net
http://0pointer.net/lennart/           GnuPG 0x1A015CC4

Re: PulseAudio and Jack : some comments

Lennart Poettering-16
In reply to this post by Adrian Knoth
On Fri, 07.05.10 16:08, Adrian Knoth ([hidden email]) wrote:

>
> On Fri, May 07, 2010 at 03:23:50PM +0200, Lennart Poettering wrote:
>
> > Heya,
>
> Hi!
>
> [PA and jack]
>
>
> JFTR, I once hacked a prototype of the "jack on top of PA" idea. It was
> just a quick&dirty thing intended for not-so-low-latency mixing with
> ardour (in post production, latency normally isn't important).
>
>    http://adi.loris.tv/jack-on-pulse.c

Well, I don't think that running the low-latency sound server on top of
the low-latency-is-not-the-only-thing-that-matters sound server is
really a good idea. The other way round makes more sense.

Lennart

--
Lennart Poettering                        Red Hat, Inc.
lennart [at] poettering [dot] net
http://0pointer.net/lennart/           GnuPG 0x1A015CC4

Re: PulseAudio and Jack : some comments

Lennart Poettering-16
In reply to this post by salsaman-3
On Fri, 07.05.10 16:37, [hidden email] ([hidden email]) wrote:

> Seems very cool...but I am just wondering....out of interest, wouldn't it
> make more sense to do it the other way around, i.e. run pulse on top of
> jack ? It would seem like a better idea to keep the latency low as near to
> the soundcard as possible, and then implement higher latency layers on top
> of this.

Yes, and that is what I have repeatedly been suggesting with my
cooperation-not-amalgamation post.

Lennart

--
Lennart Poettering                        Red Hat, Inc.
lennart [at] poettering [dot] net
http://0pointer.net/lennart/           GnuPG 0x1A015CC4

Re: PulseAudio and Jack : some comments

Fernando Lopez-Lezcano
On Fri, 2010-05-07 at 20:52 +0200, Lennart Poettering wrote:

> On Fri, 07.05.10 16:37, [hidden email] ([hidden email]) wrote:
>
> > Seems very cool...but I am just wondering....out of interest, wouldn't it
> > make more sense to do it the other way around, i.e. run pulse on top of
> > jack ? It would seem like a better idea to keep the latency low as near to
> > the soundcard as possible, and then implement higher latency layers on top
> > of this.
>
> Yes, and that is what I have repeatedly suggesting with my
> cooperation-not-amalgamation post.

It does work, at least in tests I did a while back. When I wrote a perl
wrapper for jack for FC11 (because the PA version at that time was not
working perfectly in terms of cooperating with jack) I added code to
switch the current PA streams over to jack automatically, and it seemed to
work fine: playback of PA streams would interrupt for just a moment and
then keep playing using jack.

-- Fernando

