On Wed, May 14, 2014 at 8:46 PM, Jeff Janes wrote:
>> +1. I can't think of many things we might do that would be more
>> important.
>
> Can anyone guess how likely this approach is to make it into 9.5? I've been
> pondering some incremental improvements over what we have now, but if this
> revol
On Wed, May 14, 2014 at 05:46:49PM -0700, Jeff Janes wrote:
> On Monday, January 27, 2014, Robert Haas wrote:
>
> > On Mon, Jan 27, 2014 at 4:16 PM, Simon Riggs wrote:
> > > On 26 January 2014 12:58, Andres Freund wrote:
> > >> On 2014-01-25 20:26:16 -0800, Peter Geoghegan
On Monday, January 27, 2014, Robert Haas wrote:
> On Mon, Jan 27, 2014 at 4:16 PM, Simon Riggs wrote:
> > On 26 January 2014 12:58, Andres Freund wrote:
> >> On 2014-01-25 20:26:16 -0800, Peter Geoghegan wrote:
> >>> Shouldn't this patch be in the January commitfest?
> >>
> >> I t
On Mon, Jan 27, 2014 at 4:16 PM, Simon Riggs wrote:
> On 26 January 2014 12:58, Andres Freund wrote:
>> On 2014-01-25 20:26:16 -0800, Peter Geoghegan wrote:
>>> Shouldn't this patch be in the January commitfest?
>>
>> I think we previously concluded that there wasn't much chance to get
>> this in
On 26 January 2014 12:58, Andres Freund wrote:
> On 2014-01-25 20:26:16 -0800, Peter Geoghegan wrote:
>> Shouldn't this patch be in the January commitfest?
>
> I think we previously concluded that there wasn't much chance to get
> this into 9.4 and there's significant work to be done on the patch
On 2014-01-25 20:26:16 -0800, Peter Geoghegan wrote:
> Shouldn't this patch be in the January commitfest?
I think we previously concluded that there wasn't much chance to get
this into 9.4 and there's significant work to be done on the patch
before new reviews are required, so not submitting it im
Shouldn't this patch be in the January commitfest?
--
Peter Geoghegan
On Wed, 2013-09-25 at 12:31 +0300, Heikki Linnakangas wrote:
> On 19.09.2013 16:24, Andres Freund wrote:
> ...
> > There's probably more.
I think _bt_check_unique is also a problem.
> Hmm, some of those are trivial, but others, like rewrite_heap_tuple(), are
> currently only passed the HeapTuple, with
On 2013-09-25 12:31:20 +0300, Heikki Linnakangas wrote:
> Hmm, some of those are trivial, but others, like rewrite_heap_tuple(), are
> currently only passed the HeapTuple, with no reference to the page where the
> tuple came from. Instead of changing code to always pass that along with a
> tuple, I think
On Tue, Oct 1, 2013 at 2:13 PM, Andres Freund wrote:
> Agreed. The "wait free LW_SHARED" thing[1] I posted recently had a simple
>
> #define pg_atomic_read(atomic) (*(volatile uint32 *)&(atomic))
>
> That should be sufficient and easily greppable, right?
Looks good enough for me. I would consider
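To make the quoted macro concrete, here is a minimal standalone sketch of why
the volatile cast matters; the harness, including the name shared_flag, is
illustrative and not from any patch, and uint32_t stands in for Postgres'
uint32 so the sketch compiles on its own:

#include <stdint.h>
#include <stdio.h>

/* The macro under discussion: force a fresh load on every use. */
#define pg_atomic_read(atomic) (*(volatile uint32_t *) &(atomic))

static uint32_t shared_flag;    /* imagine another backend sets this */

static void
wait_for_flag(void)
{
    /*
     * Without the volatile cast, the compiler may read shared_flag once,
     * cache it in a register, and spin forever on the stale copy.
     */
    while (pg_atomic_read(shared_flag) == 0)
        ;
}

int
main(void)
{
    shared_flag = 1;            /* normally done by another process */
    wait_for_flag();            /* returns immediately in this toy setup */
    printf("flag = %u\n", (unsigned) pg_atomic_read(shared_flag));
    return 0;
}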
On 2013-10-01 04:47:42 +0300, Ants Aasma wrote:
> I still think we should have a macro for the volatile memory accesses.
> As a rule, each one of those needs a memory barrier, and if we
> consolidate them, or optimize them out, the considerations why this is
> safe should be explained in a comment.
Just found this in my drafts folder. Sorry for the late response.
On Fri, Sep 20, 2013 at 3:32 PM, Robert Haas wrote:
> I am entirely unconvinced that we need this. Some people use acquire
> + release fences, some people use read + write fences, and there are
> all combinations (read-acquire, re
On 25.09.2013 15:48, Peter Eisentraut wrote:
On 9/25/13 5:31 AM, Heikki Linnakangas wrote:
Attached is a new version, which adds that field to HeapTupleData. Most
of the issues you listed above have been fixed, plus a bunch of other
bugs I found myself. The bug that Jeff ran into with his cou
On 9/25/13 5:31 AM, Heikki Linnakangas wrote:
> Attached is a new version, which adds that field to HeapTupleData. Most
> of the issues you listed above have been fixed, plus a bunch of other
> bugs I found myself. The bug that Jeff ran into with his count.pl script
> has also been fixed.
This
On 18.09.2013 22:55, Jeff Janes wrote:
On Mon, Sep 16, 2013 at 6:59 AM, Heikki Linnakangas wrote:
Here's a rebased version of the patch, including the above-mentioned
fixes. Nothing else new.
I've applied this to 0892ecbc015930d, the last commit to which it applies
cleanly.
When I test this by
Just some notes, before I forget them again.
On 2013-09-23 11:50:05 -0400, Robert Haas wrote:
> On Fri, Sep 20, 2013 at 11:11 AM, Andres Freund wrote:
> > On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
> >> I think we should go through the various implementations and make sure
> >> they ar
On 2013-09-23 11:50:05 -0400, Robert Haas wrote:
> On Fri, Sep 20, 2013 at 11:11 AM, Andres Freund wrote:
> > On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
> >> I think we should go through the various implementations and make sure
> >> they are actual compiler barriers and then change the
On Fri, Sep 20, 2013 at 11:11 AM, Andres Freund wrote:
> On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
>> I think we should go through the various implementations and make sure
>> they are actual compiler barriers and then change the documented policy.
>
> From a quick look
> * S_UNLOCK for P
Hi,
IMO it's a bug if S_UNLOCK is not a compiler barrier.
Moreover for volatile remember:
https://www.securecoding.cert.org/confluence/display/seccode/DCL17-C.+Beware+of+miscompiled+volatile-qualified+variables
Who is double checking compiler output? :)
regards
Didier
On Fri, Sep 20, 2013
Hi
On Fri, Sep 20, 2013 at 5:11 PM, Andres Freund wrote:
> On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
> > I think we should go through the various implementations and make sure
> > they are actual compiler barriers and then change the documented policy.
>
> From a quick look
> * S_UNLOCK
On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
> I think we should go through the various implementations and make sure
> they are actual compiler barriers and then change the documented policy.
From a quick look
* S_UNLOCK for PPC isn't a compiler barrier
* S_UNLOCK for MIPS isn't a compiler
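For readers following along: the complaint is that a store through a
volatile-qualified lock variable does not stop the compiler from moving
accesses to *other* variables across the unlock. A GCC-style sketch of the
fix being proposed (names are illustrative, not the actual s_lock.h
definitions):

typedef volatile int slock_t;   /* sketch; the real slock_t is per-platform */

/* Empty asm with a "memory" clobber: a pure compiler barrier. */
#define pg_compiler_barrier()  __asm__ __volatile__("" ::: "memory")

static inline void
s_unlock_sketch(slock_t *lock)
{
    /*
     * Keep all protected-region stores above the releasing store; without
     * this, the compiler could sink them past the unlock.
     */
    pg_compiler_barrier();
    *lock = 0;
}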
Andres Freund wrote:
> Hi,
>
> On 2013-09-19 14:42:19 +0300, Heikki Linnakangas wrote:
> > On 18.09.2013 16:22, Andres Freund wrote:
> > >* Why can we do a GetOldestXmin(allDbs = false) in
> > > BeginXidLSNRangeSwitch()?
> >
> > Looks like a bug. I think I got the arguments backwards, was su
On 2013-09-20 08:54:26 -0400, Robert Haas wrote:
> On Fri, Sep 20, 2013 at 8:40 AM, Andres Freund wrote:
> > On 2013-09-20 08:32:29 -0400, Robert Haas wrote:
> >> Personally, I think the biggest change that would help here is to
> >> mandate that spinlock operations serve as compiler fences. That
On Fri, Sep 20, 2013 at 8:40 AM, Andres Freund wrote:
> On 2013-09-20 08:32:29 -0400, Robert Haas wrote:
>> Personally, I think the biggest change that would help here is to
>> mandate that spinlock operations serve as compiler fences. That would
>> eliminate the need for scads of volatile refere
Hi,
I agree with most of what you said - I think that's a little bit too much
change for too little benefit.
On 2013-09-20 08:32:29 -0400, Robert Haas wrote:
> Personally, I think the biggest change that would help here is to
> mandate that spinlock operations serve as compiler fences. That would
On Thu, Sep 19, 2013 at 6:27 PM, Ants Aasma wrote:
> I'm tackling similar issues in my patch. What I'm thinking is that we should
> change our existing supposedly atomic accesses to be more explicit and make
> the API compatible with C11 atomics[1]. For now I think the changes should be
> somethin
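A sketch of what a C11-compatible wrapper layer could look like, purely to
illustrate the suggestion; these names are hypothetical and not the atomics
API that was ultimately committed:

#include <stdatomic.h>
#include <stdint.h>

typedef atomic_uint_fast32_t pg_atomic32;

/* Plain load with no ordering guarantees beyond atomicity. */
static inline uint_fast32_t
pg_atomic_read_relaxed(pg_atomic32 *v)
{
    return atomic_load_explicit(v, memory_order_relaxed);
}

/* Store that publishes all prior writes to a subsequent acquire-reader. */
static inline void
pg_atomic_write_release(pg_atomic32 *v, uint_fast32_t val)
{
    atomic_store_explicit(v, val, memory_order_release);
}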
On Thu, Sep 19, 2013 at 2:42 PM, Heikki Linnakangas wrote:
>> * switchFinishXmin and nextSwitchXid should probably be either volatile
>>   or have a compiler barrier between accessing shared memory and
>>   checking them. The compiler very well could optimize them away and
>>   access shmem all
On 2013-09-19 14:40:35 +0200, Andres Freund wrote:
> > >* I think heap_lock_tuple() needs to unset all-visible, otherwise we
> > > won't vacuum that page again which can lead to problems since we
> > > don't do full-table vacuums again?
> >
> > It's OK if the page is never vacuumed again, the
Hi,
On 2013-09-19 14:42:19 +0300, Heikki Linnakangas wrote:
> On 18.09.2013 16:22, Andres Freund wrote:
> >* Why can we do a GetOldestXmin(allDbs = false) in
> > BeginXidLSNRangeSwitch()?
>
> Looks like a bug. I think I got the arguments backwards, was supposed to be
> allDbs = true and ignoreV
On 18.09.2013 16:22, Andres Freund wrote:
On 2013-09-16 16:59:28 +0300, Heikki Linnakangas wrote:
Here's a rebased version of the patch, including the above-mentioned fixes.
Nothing else new.
* We need some higher-level description of the algorithm somewhere in the
source. I don't think I've
On Wed, Sep 18, 2013 at 12:55 PM, Jeff Janes wrote:
> On Mon, Sep 16, 2013 at 6:59 AM, Heikki Linnakangas
> <hlinnakan...@vmware.com> wrote:
>
>>
>> Here's a rebased version of the patch, including the above-mentioned
>> fixes. Nothing else new.
>
>
> I've applied
On Mon, Sep 16, 2013 at 6:59 AM, Heikki Linnakangas wrote:
>
> Here's a rebased version of the patch, including the above-mentioned
> fixes. Nothing else new.
I've applied this to 0892ecbc015930d, the last commit to which it applies
cleanly.
When I test this by repeatedly incrementing a counte
On 2013-09-16 16:59:28 +0300, Heikki Linnakangas wrote:
> Here's a rebased version of the patch, including the above-mentioned fixes.
> Nothing else new.
* We need some higher-level description of the algorithm somewhere in the
source. I don't think I've understood the concept from the patch alon
Heikki Linnakangas wrote:
> Here's a rebased version of the patch, including the above-mentioned
> fixes. Nothing else new.
Nice work. I apologize for the conflicts I created yesterday.
I would suggest renaming varsup_internal.h to varsup_xlog.h.
You added a FIXME comment to HeapTupleHeader
On 2013-09-17 09:41:47 -0400, Peter Eisentraut wrote:
> On 9/16/13 9:59 AM, Heikki Linnakangas wrote:
> > Here's a rebased version of the patch, including the above-mentioned
> > fixes. Nothing else new.
>
> It still fails to apply. You might need to rebase it again.
FWIW, I don't think it's too
On 9/16/13 9:59 AM, Heikki Linnakangas wrote:
> Here's a rebased version of the patch, including the above-mentioned
> fixes. Nothing else new.
It still fails to apply. You might need to rebase it again.
On 27.08.2013 19:37, Heikki Linnakangas wrote:
On 27.08.2013 18:56, Heikki Linnakangas wrote:
Here's an updated patch.
Ah, forgot one thing:
Here's a little extension I've been using to test this. It contains two
functions: one to simply consume N xids, making it faster to hit the
creation of
On Mon, Sep 2, 2013 at 3:16 PM, Jeff Davis wrote:
> On Fri, 2013-08-30 at 20:34 +0200, Andres Freund wrote:
>> I have a quick question: The reason I'd asked about the status of the
>> patch was that I was thinking about the state of the "forensic freezing"
>> patch. After a quick look at your prop
On Fri, 2013-08-30 at 20:34 +0200, Andres Freund wrote:
> I have a quick question: The reason I'd asked about the status of the
> patch was that I was thinking about the state of the "forensic freezing"
> patch. After a quick look at your proposal, we still need to freeze in
> some situations (old
Hi Heikki,
On 2013-08-27 18:56:15 +0300, Heikki Linnakangas wrote:
> Here's an updated patch. The race conditions I mentioned above have been
> fixed.
Thanks for posting the new version!
I have a quick question: The reason I'd asked about the status of the
patch was that I was thinking about the
On 27.08.2013 18:56, Heikki Linnakangas wrote:
Here's an updated patch.
Ah, forgot one thing:
Here's a little extension I've been using to test this. It contains two
functions: one to simply consume N xids, making it faster to hit the
creation of new XID ranges and wraparound. The other,
pr
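The extension itself wasn't captured in this digest, but a consume-N-xids
helper along the lines described would look roughly like the C sketch below.
This is a hypothetical reconstruction, not Heikki's code; GetNewTransactionId()
is backend-internal, so anything like this has to be built as a server
extension (e.g. with PGXS):

#include "postgres.h"
#include "fmgr.h"
#include "access/transam.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(consume_xids);

/* Burn through N transaction IDs to reach XID-range creation quickly. */
Datum
consume_xids(PG_FUNCTION_ARGS)
{
    int64   nxids = PG_GETARG_INT64(0);
    int64   i;

    for (i = 0; i < nxids; i++)
        (void) GetNewTransactionId(false);  /* allocate and discard an XID */

    PG_RETURN_VOID();
}

Paired with CREATE FUNCTION consume_xids(bigint) RETURNS void AS
'MODULE_PATHNAME' LANGUAGE C, a SELECT consume_xids(100000000) then
fast-forwards the cluster toward wraparound.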
On 10.06.2013 21:58, Heikki Linnakangas wrote:
On 01.06.2013 23:21, Robert Haas wrote:
On Sat, Jun 1, 2013 at 2:48 PM, Heikki Linnakangas wrote:
We define a new page-level bit, something like PD_RECENTLY_FROZEN.
When this bit is set, it means there are no unfrozen tuples on the
page with XIDs
On Mon, Jun 10, 2013 at 4:48 PM, Simon Riggs wrote:
> Well done, looks like good progress.
+1.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 10 June 2013 19:58, Heikki Linnakangas wrote:
> On 01.06.2013 23:21, Robert Haas wrote:
>>
>> On Sat, Jun 1, 2013 at 2:48 PM, Heikki Linnakangas wrote:
We define a new page-level bit, something like PD_RECENTLY_FROZEN.
When this bit is set, it means there are no unfrozen tup
On 01.06.2013 23:21, Robert Haas wrote:
On Sat, Jun 1, 2013 at 2:48 PM, Heikki Linnakangas wrote:
We define a new page-level bit, something like PD_RECENTLY_FROZEN.
When this bit is set, it means there are no unfrozen tuples on the
page with XIDs that predate the current half-epoch. Whenever
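For concreteness, the mechanics being debated amount to a page-header flag
like the following sketch. The bit value and helper names are illustrative
only; in bufpage.h the low pd_flags bits are already taken by
PD_HAS_FREE_LINES, PD_PAGE_FULL and PD_ALL_VISIBLE:

#include <stdbool.h>
#include <stdint.h>

#define PD_RECENTLY_FROZEN  0x0008  /* hypothetical next free pd_flags bit */

typedef struct PageHeaderSketch
{
    uint16_t    pd_flags;           /* stands in for PageHeaderData.pd_flags */
    /* ... rest of the page header ... */
} PageHeaderSketch;

/* Set once we know no tuple on the page predates the current half-epoch. */
static inline void
PageSetRecentlyFrozen(PageHeaderSketch *page)
{
    page->pd_flags |= PD_RECENTLY_FROZEN;
}

/* Lets vacuum skip freezing work for the page entirely. */
static inline bool
PageIsRecentlyFrozen(const PageHeaderSketch *page)
{
    return (page->pd_flags & PD_RECENTLY_FROZEN) != 0;
}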
On 06/07/2013 09:16 PM, Andres Freund wrote:
> On 2013-06-07 20:10:55 +0100, Simon Riggs wrote:
>> On 7 June 2013 19:56, Heikki Linnakangas wrote:
>>> On 07.06.2013 21:33, Simon Riggs wrote:
Now that I consider Greg's line of thought, the idea we focused on
here was about avoiding freezi
On 06/07/2013 08:56 PM, Heikki Linnakangas wrote:
> On 07.06.2013 21:33, Simon Riggs wrote:
>> Now that I consider Greg's line of thought, the idea we focused on
>> here was about avoiding freezing. But Greg makes me think that we may
>> also wish to look at allowing queries to run longer than one
On 07.06.2013 22:15, Robert Haas wrote:
On Fri, Jun 7, 2013 at 3:10 PM, Simon Riggs wrote:
The long running query problem hasn't ever been looked at, it seems,
until here and now.
For what it's worth (and that may not be much), I think most people
will die a horrible death due to bloat after
On 7 June 2013 20:16, Andres Freund wrote:
> On 2013-06-07 20:10:55 +0100, Simon Riggs wrote:
>> On 7 June 2013 19:56, Heikki Linnakangas wrote:
>> > On 07.06.2013 21:33, Simon Riggs wrote:
>> >>
>> >> Now that I consider Greg's line of thought, the idea we focused on
>> >> here was about avoidin
On 2013-06-07 20:10:55 +0100, Simon Riggs wrote:
> On 7 June 2013 19:56, Heikki Linnakangas wrote:
> > On 07.06.2013 21:33, Simon Riggs wrote:
> >>
> >> Now that I consider Greg's line of thought, the idea we focused on
> >> here was about avoiding freezing. But Greg makes me think that we may
> >
On Fri, Jun 7, 2013 at 3:10 PM, Simon Riggs wrote:
> The long running query problem hasn't ever been looked at, it seems,
> until here and now.
For what it's worth (and that may not be much), I think most people
will die a horrible death due to bloat after holding a transaction
open for a tiny fr
On 7 June 2013 19:56, Heikki Linnakangas wrote:
> On 07.06.2013 21:33, Simon Riggs wrote:
>>
>> Now that I consider Greg's line of thought, the idea we focused on
>> here was about avoiding freezing. But Greg makes me think that we may
>> also wish to look at allowing queries to run longer than on
On 07.06.2013 21:33, Simon Riggs wrote:
Now that I consider Greg's line of thought, the idea we focused on
here was about avoiding freezing. But Greg makes me think that we may
also wish to look at allowing queries to run longer than one epoch as
well, if the epoch wrap time is likely to come dow
On 7 June 2013 19:08, Heikki Linnakangas wrote:
> On 07.06.2013 20:54, Robert Haas wrote:
>>
>> On Thu, Jun 6, 2013 at 6:28 PM, Greg Stark wrote:
>>>
>>> On Thu, Jun 6, 2013 at 1:39 PM, Heikki Linnakangas wrote:
That will keep OldestXmin from advancing. Which will keep vacuum from
On 07.06.2013 20:54, Robert Haas wrote:
On Thu, Jun 6, 2013 at 6:28 PM, Greg Stark wrote:
On Thu, Jun 6, 2013 at 1:39 PM, Heikki Linnakangas wrote:
That will keep OldestXmin from advancing. Which will keep vacuum from
advancing relfrozenxid/datfrozenxid. Which will first trigger the warnings
On Thu, Jun 6, 2013 at 6:28 PM, Greg Stark wrote:
> On Thu, Jun 6, 2013 at 1:39 PM, Heikki Linnakangas wrote:
>> That will keep OldestXmin from advancing. Which will keep vacuum from
>> advancing relfrozenxid/datfrozenxid. Which will first trigger the warnings
>> about wrap-around, then stops n
On Thu, Jun 6, 2013 at 1:39 PM, Heikki Linnakangas wrote:
> That will keep OldestXmin from advancing. Which will keep vacuum from
> advancing relfrozenxid/datfrozenxid. Which will first trigger the warnings
> about wrap-around, then stops new XIDs from being generated, and finally a
> forced shutd
On 06.06.2013 15:16, Greg Stark wrote:
On Fri, May 31, 2013 at 3:04 AM, Robert Haas wrote:
Even at a more modest 10,000 tps, with default
settings, you'll do anti-wraparound vacuums of the entire cluster
about every 8 hours. That's not fun.
I've forgotten now. What happens if you have a long
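As a back-of-the-envelope check on that figure (assuming XIDs are consumed
at the stated rate and full-table freezing is driven by
autovacuum_freeze_max_age, whose default is 200 million):

    200,000,000 XIDs / 10,000 XIDs per second = 20,000 s, about 5.5 hours

so a full-cluster freeze pass every handful of hours is the right order of
magnitude; the exact interval depends on which freeze GUCs you have raised.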
On Fri, May 31, 2013 at 3:04 AM, Robert Haas wrote:
> Even at a more modest 10,000 tps, with default
> settings, you'll do anti-wraparound vacuums of the entire cluster
> about every 8 hours. That's not fun.
I've forgotten now. What happens if you have a long-lived transaction
still alive from >
On 1 June 2013 21:26, Robert Haas wrote:
> On Sat, Jun 1, 2013 at 3:22 PM, Simon Riggs wrote:
>> If we set a bit, surely we need to write the page. Isn't that what we
>> were trying to avoid?
>
> No, the bit only gets set in situations when we were going to dirty
> the page for some other reason
On Sat, Jun 1, 2013 at 3:22 PM, Simon Riggs wrote:
> If we set a bit, surely we need to write the page. Isn't that what we
> were trying to avoid?
No, the bit only gets set in situations when we were going to dirty
the page for some other reason anyway. Specifically, if a page
modification disco
On Sat, Jun 1, 2013 at 2:48 PM, Heikki Linnakangas wrote:
>> We define a new page-level bit, something like PD_RECENTLY_FROZEN.
>> When this bit is set, it means there are no unfrozen tuples on the
>> page with XIDs that predate the current half-epoch. Whenever we know
>> this to be true, we set
On 1 June 2013 19:48, Heikki Linnakangas wrote:
> On 31.05.2013 06:02, Robert Haas wrote:
>>
>> On Thu, May 30, 2013 at 2:39 PM, Robert Haas wrote:
>>>
>>> Random thought: Could you compute the reference XID based on the page
>>> LSN? That would eliminate the storage overhead.
>>
>>
>> After m
On 31.05.2013 06:02, Robert Haas wrote:
On Thu, May 30, 2013 at 2:39 PM, Robert Haas wrote:
Random thought: Could you compute the reference XID based on the page
LSN? That would eliminate the storage overhead.
After mulling this over a bit, I think this is definitely possible.
We begin a new
On 30 May 2013 19:39, Robert Haas wrote:
> On Thu, May 30, 2013 at 9:33 AM, Heikki Linnakangas wrote:
>> The reason we have to freeze is that otherwise our 32-bit XIDs wrap around
>> and become ambiguous. The obvious solution is to extend XIDs to 64 bits, but
>> that would waste a lot of space. Th
On 30 May 2013 14:33, Heikki Linnakangas wrote:
> Since we're bashing around ideas around freezing, let me write down the idea
> I've been pondering and discussing with various people for years. I don't
> think I invented this myself, apologies to whoever did for not giving
> credit.
>
> The reaso
On Fri, May 31, 2013 at 1:26 PM, Bruce Momjian wrote:
> On Thu, May 30, 2013 at 10:04:23PM -0400, Robert Haas wrote:
>> > Hm. Why? If freezing gets notably cheaper I don't really see much need
>> > for keeping that much clog around? If we still run into anti-wraparound
>> > areas, there has to be
On Thu, May 30, 2013 at 10:04:23PM -0400, Robert Haas wrote:
> > Hm. Why? If freezing gets notably cheaper I don't really see much need
> > for keeping that much clog around? If we still run into anti-wraparound
> > areas, there has to be some major operational problem.
>
> That is true, but we ha
On 31.05.2013 00:06, Bruce Momjian wrote:
On Thu, May 30, 2013 at 04:33:50PM +0300, Heikki Linnakangas wrote:
This would also be the first step in allowing the clog to grow
larger than 2 billion transactions, eliminating the need for
anti-wraparound freezing altogether. You'd still want to trunc
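For scale: the clog stores 2 status bits per transaction, so keeping the full
2^31 XIDs of history around would take

    2^31 transactions * 2 bits = 2^32 bits = 512 MB

of clog, which is why you'd still want to truncate it eventually even with
wraparound out of the picture.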
On Thu, May 30, 2013 at 2:39 PM, Robert Haas wrote:
> Random thought: Could you compute the reference XID based on the page
> LSN? That would eliminate the storage overhead.
After mulling this over a bit, I think this is definitely possible.
We begin a new "half-epoch" every 2 billion transactio
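The bookkeeping implied here is small: remember the WAL position at which
each half-epoch began, and map a page's LSN back to the half-epoch that was
current when the page was last written. A standalone sketch, with
illustrative names and a fixed-size history in place of real catalog state:

#include <stdint.h>

typedef uint64_t XLogRecPtrSketch;      /* stands in for the real XLogRecPtr */

#define MAX_HALF_EPOCHS 64              /* sketch-sized history */

static XLogRecPtrSketch half_epoch_start_lsn[MAX_HALF_EPOCHS];
static int current_half_epoch;

/*
 * Map a page's LSN to the half-epoch that was current when the page was
 * last written: the newest half-epoch whose starting LSN is <= the page LSN.
 */
static uint32_t
half_epoch_for_lsn(XLogRecPtrSketch page_lsn)
{
    int he = current_half_epoch;

    while (he > 0 && page_lsn < half_epoch_start_lsn[he])
        he--;
    return (uint32_t) he;
}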
On Thu, May 30, 2013 at 3:22 PM, Andres Freund wrote:
> On 2013-05-30 14:39:46 -0400, Robert Haas wrote:
>> > Since we're not storing 64-bit wide XIDs on every tuple, we'd still need to
>> > replace the XIDs with FrozenXid whenever the difference between the
>> > smallest
>> > and largest XID on
On Thu, May 30, 2013 at 04:33:50PM +0300, Heikki Linnakangas wrote:
> This would also be the first step in allowing the clog to grow
> larger than 2 billion transactions, eliminating the need for
> anti-wraparound freezing altogether. You'd still want to truncate
> the clog eventually, but it would
On 2013-05-30 14:39:46 -0400, Robert Haas wrote:
> > Since we're not storing 64-bit wide XIDs on every tuple, we'd still need to
> > replace the XIDs with FrozenXid whenever the difference between the smallest
> > and largest XID on a page exceeds 2^31. But that would only happen when
> > you're up
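In other words, the invariant is that all XIDs interpretable on a page must
fit in one 2^31-wide window. A sketch of the resulting check (illustrative
names; FrozenTransactionId really is the reserved value 2):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;         /* stands in for the real typedef */
#define FrozenTransactionId ((TransactionId) 2)

/* True when the page's XID spread can no longer share one reference point. */
static bool
page_xid_range_overflows(uint64_t oldest_xid64, uint64_t newest_xid64)
{
    return newest_xid64 - oldest_xid64 >= (UINT64_C(1) << 31);
}

/* Tuples at the old end of the range get rewritten as frozen. */
static TransactionId
maybe_freeze_xid(uint64_t xid64, uint64_t newest_xid64)
{
    if (newest_xid64 - xid64 >= (UINT64_C(1) << 31))
        return FrozenTransactionId;
    return (TransactionId) xid64;       /* low 32 bits stay on the tuple */
}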
On 30.05.2013 21:46, Merlin Moncure wrote:
On Thu, May 30, 2013 at 1:39 PM, Robert Haas wrote:
On Thu, May 30, 2013 at 9:33 AM, Heikki Linnakangas wrote:
The reason we have to freeze is that otherwise our 32-bit XIDs wrap around
and become ambiguous. The obvious solution is to extend XIDs to
On Thu, May 30, 2013 at 1:39 PM, Robert Haas wrote:
> On Thu, May 30, 2013 at 9:33 AM, Heikki Linnakangas wrote:
>> The reason we have to freeze is that otherwise our 32-bit XIDs wrap around
>> and become ambiguous. The obvious solution is to extend XIDs to 64 bits, but
>> that would waste a lo
On Thu, May 30, 2013 at 9:33 AM, Heikki Linnakangas wrote:
> The reason we have to freeze is that otherwise our 32-bit XIDs wrap around
> and become ambiguous. The obvious solution is to extend XIDs to 64 bits, but
> that would waste a lot of space. The trick is to add a field to the page header
> in
Heikki,
This sounds a lot like my idea for 9.3, which didn't go anywhere.
You've worked out the issues I couldn't, I think.
> Another method is
> to store the 32-bit xid values in tuple headers as offsets from the
> per-page 64-bit value, but then you'd always need to have the 64-bit
> value at h
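A sketch of that alternative layout, for comparison with the half-epoch
scheme (field and function names are hypothetical):

#include <stdint.h>

typedef struct PageXidBaseSketch
{
    uint64_t    pd_xid_base;    /* hypothetical 64-bit base in the page header */
} PageXidBaseSketch;

/* A tuple stores only a 32-bit offset; the full XID is base + offset. */
static inline uint64_t
tuple_full_xid(const PageXidBaseSketch *page, uint32_t xid_offset)
{
    return page->pd_xid_base + xid_offset;
}

The catch noted above is that you must have pd_xid_base at hand wherever a
tuple's XID is interpreted, which is exactly the plumbing problem discussed
upthread for rewrite_heap_tuple() and friends.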
Since we're bashing around ideas around freezing, let me write down the
idea I've been pondering and discussing with various people for years. I
don't think I invented this myself, apologies to whoever did for not
giving credit.
The reason we have to freeze is that otherwise our 32-bit XIDs wr