Krit,
It sounds like you might be hitting edge-case statistics problems on recent
data. We've had issues with this ourselves. I'd recommend increasing
default_statistics_target to 400 for your entire database. But it's
possible that won't help here. Recent dat
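For reference, a sketch of that change ("mydb" is a placeholder database name, not one from the thread):

```sql
-- Raise the per-column statistics target database-wide; new sessions
-- pick up the setting, and a fresh ANALYZE rebuilds the statistics.
ALTER DATABASE mydb SET default_statistics_target = 400;
ANALYZE;
```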
These updates aren't equivalent. It's important to know this, because
you're also bloating your table with a lot of extra updated rows.
Take the first UPDATE:
> UPDATE second SET time1 = orig.time1
> FROM orig
> WHERE second.key1 = orig.key1;
If you wrote this as a SELECT, it would look
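The message is cut off here, but presumably the SELECT form of that UPDATE would look something like this (a sketch, not the original poster's query):

```sql
-- The join the UPDATE performs, expressed as a plain SELECT:
SELECT orig.time1
  FROM second
  JOIN orig ON (second.key1 = orig.key1);
```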
Hi Krit,
It looks like your actual problem is here:
> Index Scan using t1_orderid_creationtime_idx on t1
> (cost=0.43..1181104.36 rows=9879754 width=158)
> (actual time=0.021..60830.724 rows=2416614 loops=1)
This index scan estimated 9.8M rows, but only actually touched 2.4M. The
issue is that your LI
into the PG build-farm. Ugh.
--
Shaun Thomas
OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com
__
See http://www.peak6.com/email_disclaimer/ for terms and conditions related to
this email
s some kind of issue with the maintainers of
Pl/Java, and he should take it up with them.
ng. If everything
were working properly, that excess 9GB would be in your cache.
Check /proc/meminfo for a better breakdown of how the memory is being
used. This should work:
grep -A1 Active /proc/meminfo
I suspect your inactive file cache is larger than the active set,
suggesting an overly aggr
can fiddle with these until you stop getting cancellations though.
Especially if, as you say, it's a slave that can fall behind safely.
The statement will only lock the rows being updated. You should
be able to add new rows, but with the I/O consuming your RAID, you'll
probably see significant write delays that resemble lock waits.
have to drop the constraints after initializing the schemas and re-add
them at the end after the data import is complete.
TABLE ' || schemaname || '.' || quote_ident(tablename)
|| ' DISABLE TRIGGER ALL;'
FROM pg_tables
) TO '/tmp/stop_triggers.sql'
After your data is loaded, just change DISABLE to ENABLE, and run the
script again.
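The opening of that COPY statement is truncated above; a reconstruction (a sketch, using the same output path) would be:

```sql
-- Generate one ALTER TABLE ... DISABLE TRIGGER ALL statement per table
-- and write the script to a file that can be run before the data load.
COPY (
  SELECT 'ALTER TABLE ' || schemaname || '.' || quote_ident(tablename)
         || ' DISABLE TRIGGER ALL;'
    FROM pg_tables
) TO '/tmp/stop_triggers.sql';
```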
stment doesn't work.
I've also been thinking about creating partial date indexes since
that's one of the few easy buckets to shove things into, which seems
like it would cut down on the searchable index size.
Properly applying partial indexes without overdoing it is a delicate
d
me. The question is, can you reduce the number of matches
through better index correlation, or more specific where clauses? The
amount of data you're matching will necessarily require a lot of
processing time.
ue to join 88 rows is much different than what's
needed for 300 times as many.
Given that, five seconds for 88 rows is astonishingly slow, even for
hstore. I'd be willing to bet something else is going on here.
way related to zone_reclaim_mode. The memory page aging algorithm in pre
3.7 is simply broken, judging by the traffic on the Linux Kernel Mailing
List (LKML).
I hate to keep beating this drum, but anyone using 3.2 (default for a
few Linux distributions) needs to stop using 3.2; it's hideousl
ubstantial to a table, it's much faster when the table isn't
so immense.
Even considering our hardware vastly outclasses what you have, it still
pays to keep table architecture "lean and mean."
Take a look here:
http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html
> BTW if you do a hexdump of the file, it should be all zeroes.
Yep, that's what I get. :)
> It's an autovacuum worker, which is expected. Just get rid of the
> file and all should be well.
That's what I figured, but I didn't want to make assumptions. Does removing the
file require a restart?
This:
23334|429396427453c68b69.5b26|2014-07-16 09:25:45 CDT|ERROR: could not
access status of transaction 7150346
23334|429396427453c68b69.5b26|2014-07-16 09:25:45 CDT|DETAIL: Could not
open file "pg_multixact/offsets/006D": No such file or directory.
Been doing it every ten seconds s
> See here:
>
> https://wiki.postgresql.org/wiki/20140702pg_upgrade_fix
Thank $Deity! We'll schedule this ASAP. Our cluster is whining about this about
every 10 seconds, which is kind of a PITA.
Thanks again!
the discussion, and it *seems* we can delete the file and restart since
it's only an 8k file and we haven't gone far enough to wrap into a new
file. Will that actually work, though? Or is it too late, since something has
already requested that invalid transaction? What do we do?
http://pgxn.org/dist/pg_partman/doc/pg_partman.html
ER AS $$
BEGIN
IF NEW.period = 201001 THEN
INSERT INTO sn_dm_b.pm201001 VALUES (NEW.*);
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
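The function header is truncated above; a completed version of this kind of partition-redirect trigger (the function and trigger names here are placeholders, the table names are from the fragment) might look like:

```sql
CREATE OR REPLACE FUNCTION sn_dm_b.pm_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
  IF NEW.period = 201001 THEN
    INSERT INTO sn_dm_b.pm201001 VALUES (NEW.*);
  END IF;
  RETURN NULL;  -- cancel the insert into the parent table
END;
$$ LANGUAGE plpgsql;

-- Attach it to the (assumed) parent table:
CREATE TRIGGER pm_insert_redirect
BEFORE INSERT ON sn_dm_b.pm
FOR EACH ROW EXECUTE PROCEDURE sn_dm_b.pm_insert_trigger();
```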
.
But yeah, if you can get a successful pg_dump from your database, a
restore should obviously clean up all of your data and index
inconsistencies if you're willing to wait.
You're right, I actually debated including the CTE syntax,
but you did it for me. :)
Though you fell into the same trap I did. The correct CTE should be:
WITH deleted AS (
DELETE FROM ONLY my_table
WHERE date_col <= now() - INTERVAL '15 minutes'
RETURNING *
)
INSERT I
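The tail of that statement is cut off; presumably it continued along these lines (the archive table name is a placeholder):

```sql
-- Move expired rows into an archive table in a single statement:
WITH deleted AS (
    DELETE FROM ONLY my_table
     WHERE date_col <= now() - INTERVAL '15 minutes'
    RETURNING *
)
INSERT INTO my_table_archive
SELECT * FROM deleted;
```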
g the
consistency of the new primary.
COMMIT;
Or whatever. But you get the idea.
This way, you still get all the data by selecting from my_table, but the
data is partitioned in such a way that you can put the high turnover
table in another tablespace, or otherwise modify it for performance reasons.
ete:
DELETE FROM broken_table
WHERE ctid = 'whatever';
Then reindex the table so the correct values are properly covered. Doing
this for all of your corrupt tables may take a while depending on how
many there are.
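As a concrete follow-up (using the table name from the example above):

```sql
-- Rebuild every index on the repaired table so no index still points
-- at the removed tuple:
REINDEX TABLE broken_table;
```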
', 1)::INT) as price
from operator;
1GB at a time until benchmarks stop increasing with your working
data size. I suspect your results will top out before you even hit 8GB
anyway.
he triggers, but the triggers would still be
created normally as dependent objects of the extension, hence no errors.
However, it would be a lot nicer if we had some explicit syntax to avoid
this kind of situation.
On 05/02/2014 09:46 AM, Edson Richter wrote:
Yes, backup = pg_dump.
Can I use it on slave even on 9.2.7?
Yes. The reason I suggested 9.3 was assuming you were using
pg_basebackup, rsync, or a similar tool to backup the database binaries.
You should be fine with pg_dump.
You need to run this command on the master system first:
SELECT pg_start_backup('some-label');
After the backup is done, run this on the master server:
SELECT pg_stop_backup();
for stuff
I've personally tested.
Our adventure with these cards was a presentation at Postgres Open in
2011. Slides are here:
https://wiki.postgresql.org/images/c/c5/Nvram_fun_profit.pdf
lament about the situation regularly. ;)
checked peer auth, then applied the mapping to find and apply
the database username.
Not sure why I had it backwards, but thanks. :)
the best graphs on this page, depending on
what you buy.
We moved all of our performance-critical servers to NVRAM-based storage
years ago. For us, it was well worth the added expense.
backing, the state of this buffer is erased on power loss, resulting in
a corrupt database.
There are upcoming technologies which may solve this (see ReRAM) but for
now, it's a requirement for any sane system.
--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL,
authentication failed for user
"dude-guy"
2014-04-02 10:39:59 CDT|DETAIL: Connection matched pg_hba.conf line 84:
"local all all peer map=local"
What's going on here? Shouldn't it be checking for peer auth before
applying the mapping? Am I missing something?
r data, it is no
longer a database, but an application that mimics one.
You'll get a timestamp + date value with every entry. The
postgresql.conf file should also list all of the honored prefixes, or
you can use this URL for more guidance:
http://www.postgresql.org/docs/9.2/static/runtime-config-logging.html
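For example, a prefix that adds a millisecond timestamp and the backend PID to every log line (one possible setting, not the poster's actual config):

```
log_line_prefix = '%m [%p] '
```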
https://github.com/peak6/wallctl
Oops. I just renamed this to:
https://github.com/peak6/walctl
I didn't realize the repo was created named wallctl instead of walctl.
p the slaves current. Something of a poor-man's self-healing.
Anyway, give it a look. I'll be the one looking over pull requests, so
if you have some improvements, I'd be glad to see them.
Later!
y pull requests.
Enjoy!
ould happen if I left it that way.
> Perhaps this might be of use.
>
> http://www.postgresql.org/docs/current/static/pgstatstatements.html
Nope. As I said in the original message, pg_stat_statements only gives query
stats for the whole database. What I want is information about each
client. Say there's a specific conne
If the connection happens to be between transactions when
I poll pg_stat_statements, I get nothing.
I know pg_pool and pg_bouncer provide info like this, but we don't use those.
Is there anything internal to PG that can... eventually get it?
> When I start postgres using postgres -D $PGDATA, it hangs, and I see that
> postgres and all the other attendant processes are running, but I never
> get my prompt back.
Don't start PostgreSQL with the 'postgres' command. That's the name of the
actual server process. You want to use the control
sible. We use a machine mounted on a remote SAN, whose only purpose
is to store backups. We have a full month available at any time, along
with all necessary WAL archives to do PITR. That's probably overkill for
most companies, but some variant of that is a good level of protection.
em somewhere along the chain. Until that's
resolved, you will have random corrupt data on your most frequently
modified objects. Snapshot restores and WAL recovery can help repair the
issues, but it's only a matter of time before a corruption is written
right into the WAL
Until the node passes, it is to be considered invalid and
unusable. If you haven't already, I suggest something similar for your setup.
Verify a crashed node before using it again, no matter how much pressure you're
under. It can always get worse.
involved.
Look into tools like repmgr to handle managing your replicas as a cluster to
make forced invalidation and re-basing easier.
Good luck!
I'll add a routine to stall the standby backup until the
restartpoint corresponding to the pg_start_backup has been replayed. I'll see
if that helps.
Thanks!
> file you would not be able to restore the data.
I suppose I could build in a function to pause the backup until the
restartpoint replays on the replica. Then at least, the backup "starts"
on both systems with the same assumptions.
hard links to save space. I can back up an 800GB database in less than
20 minutes a night, or 45 minutes for a non-incremental backup, without
disturbing the primary node. Like I said, I can enable filesystem
snapshots to fix this, but it feels like something more obvious is going on.
Any ideas?
subsequent
triggers are not fired, and the INSERT/UPDATE/DELETE does not occur for
this row)"
You could also raise an explicit error so the user sees something. To
fake a foreign key violation, you'd do:
RAISE EXCEPTION foreign_key_violation;
So you don't need a rollback anyway.
, if you have a
workload which triggers these issues, the workaround is to schedule
ANALYZE jobs.
Thanks Kevin, that explanation totally makes sense. I can switch to a
separate VACUUM and ANALYZE step to avoid this until the fix comes through.
essage?
This all seems a little sketchy.
alyze, instead of 55 minutes while doing both.
Logs show no unexpected or unusual behavior, either. And of course, the
vacuumdb ran fine this morning. If it comes up again, I'll dig further,
but I can't find any reason why our database stopped analyzing itself
for four days.
Thanks for th
exclusion checks.
My guess, based on the fact that the planner has no concept of ranges
aside from their width, is that this is the approach it'll likely take.
You can't really teach it that your particular inheritance tree is range
constrained per level, so it has to check every rang
estimated partitions.
However you decide to do it, don't optimize prematurely. Monthly
partitions are enough for us and our 130GB table with 250M rows, and
that's for a single year.
DO $$
DECLARE
tablename VARCHAR;
schemaname VARCHAR;
BEGIN
FOR schemaname, tablename IN
SELECT s.schemaname, s.relname
FROM pg_stat_all_tables s
WHERE s.last_analyze < CURRENT_DATE
LOOP
EXECUTE 'ANALYZE ' || quote_ident(schemaname) || '.' ||
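The EXECUTE line is truncated above; a completed version of the loop (a sketch, matching the visible fragment) would be:

```sql
-- Re-ANALYZE every table whose statistics are more than a day old:
DO $$
DECLARE
  tablename VARCHAR;
  schemaname VARCHAR;
BEGIN
  FOR schemaname, tablename IN
      SELECT s.schemaname, s.relname
        FROM pg_stat_all_tables s
       WHERE s.last_analyze < CURRENT_DATE
  LOOP
    EXECUTE 'ANALYZE ' || quote_ident(schemaname) || '.' ||
            quote_ident(tablename) || ';';
  END LOOP;
END;
$$ LANGUAGE plpgsql;
```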
pg_ctl and send a reload command.
fixed our particular case was this one:
http://www.postgresql.org/message-id/50e4aab1.9040...@optionshouse.com
behind, it can
always catch up again.
Honestly in sync rep, I'm surprised the master doesn't keep segments
until a waiting slave either disconnects, or can consume WAL files being
held up by a long-running transaction. Not that it matters, since you
can fake that behavior th
system peaks at 18k TPS
and handles roughly a billion queries per day.
t a much faster result. That would also allow you to drop
history_creator_index. Since history_lookup_lookupid_index covers the
same first two columns, you shouldn't lose anything in queries that work
better with those in the front.
have udev
installed. You *could* put blockdev calls in /etc/rc.local I suppose,
but udev applies rules at device detection, which can be beneficial.
I assume both. I should ask the same for noatime advice while I'm at
it.
You can probably get away with relatime, which is the default for
a2.procpid = l2.pid)
WHERE l1.granted
AND NOT l2.granted;
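Only the tail of that query survives above; a reconstruction consistent with it (a sketch, using the pre-9.2 `procpid` column visible in the fragment) would be:

```sql
-- Pair each granted lock with an ungranted lock on the same relation,
-- showing which session is blocking which waiter:
SELECT a1.procpid AS blocking_pid, a1.current_query AS blocking_query,
       a2.procpid AS blocked_pid,  a2.current_query AS blocked_query
  FROM pg_locks l1
  JOIN pg_stat_activity a1 ON (a1.procpid = l1.pid)
  JOIN pg_locks l2 ON (l1.locktype = l2.locktype
                   AND l1.relation = l2.relation
                   AND l1.pid <> l2.pid)
  JOIN pg_stat_activity a2 ON (a2.procpid = l2.pid)
 WHERE l1.granted
   AND NOT l2.granted;
```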
That's because of the statement that you see when you
cancel. Something tells me that if you try this again, it'll be the same
foreign key check.
Look and make sure account_id in ibmgbs_values is the same exact type as
the referenced table.
listed column, and it's the same datatype (bigint). Otherwise, this is
going to take a long, long time.
being added
to the test. Regular reviews are also necessary to catch questions that
become invalid with new PG releases.
cost, but that was back on kernel 3.2.31, so maybe that'll change.
Thanks for the info!
If that doesn't help, say something. The more examples we get of edge cases
confusing the planner, the quicker they'll get addressed.
This is an awesome little test case, though. How dramatically it
explodes really is something to behold.
code = 'AMA'::text))
These are pretty drastically different. But... the first crippled my
test system and started consuming vast resources, while the second
executed in about 300ms. That's hard to believe with the row counts seen
here unless it's turning it into some invisib
always comes first, and eliminates other candidates.
.
You should still upgrade those, of course, but it shouldn't cause problems.
the problems in 8.3.
http://bonesmoses.org/2010/06/03/why-i-married-pg_migrator/
It worked way back then for the company I was working for at the time.
Could work again. :)
If you don't care how long it takes, you can replace lzop with lbzip2 or
something you can use in parallel. This will take 4-8x longer, but can
use up to 30% less bandwidth based on tests I've run.
Otherwise, I'd recommend just using pg_basebackup.
can be indexed.
Of course, you have to install it first. Take a look here:
http://www.postgresql.org/docs/8.4/static/pgstatstatements.html
good.
I have read about using pg_basebackup in an article from Shaun
Thomas' booklet from Packt Publishing (I will probably buy the
booklet). That seems to be a possible solution.
Ok, with pg_basebackup, you'll get a binary backup of the actual data
files involved in your databa
everything at once. That's not always an option for
everyone though. :)
epub, but there ya go.
Maybe it takes a while to associate them.
On 04/03/2013 12:49 PM, Igor Neyman wrote:
Are there plans for an e-book edition?
I believe there already is one. Packt sells it directly, and it's also
listed on Amazon. Way cheaper than the print version, I think. About
five bucks, as opposed to 20.
exactly Greg Smith's performance book, but I'm glad to
contribute how I can. I'm not entirely sure it's worth adding to the
book page:
http://www.postgresql.org/docs/books/
But if it is, I'll provide any help or extra information necessary. If
anyone has questions,
would still be useless for calculations in applications requiring more
significant figures, but would make more sense than the magically
morphing value it is now.
"Hey, we just shipped a DB server to Japan, and now all of the monetary
values are wrong. WTF!"
Yeah... no.
I'm extremely happy to see the recent improvements in numeric performance
that seem to be coming in 9.3. :)
ation system. In the
end, we regained about 50GB of "phantom" space after a re-mount, and
it's stayed that way since.
But that's what du --apparent-size is for. :)
You clearly have some perspective I don't. From where I'm
sitting though, I don't get the barely suppressed rage. ;)
On 03/13/2013 10:30 AM, Greg Jaskiewicz wrote:
Is that SSD mixed in with other disks?
Kinda. We chose a PCIe-based SSD (FusionIO). We have a RAID-10 for
low-transaction and archived data.
It worked for us, but it's pretty spendy.
NVRAM (SSD) for high TPS data, and creating a tablespace on
a RAID-10 for archived or low-priority data.
But we got by on those original 12 spindles for a couple years. If your
data needs are less, you can probably do OK with six. For a while,
anyway. :)
latest is 9.1.8, that might be it.
It would explain why my contrived scenario in stage couldn't replicate
it, too. I'm going to see if I can trigger this behavior in stage by
creating a bunch of dead tuples in the slony log table while it's working.
Thanks, Kevin!
what broke it or why it was acting that way when that
clearly isn't the design goal.
Ah well.
We'll pretty much have to drop Slony then. I just
want to keep it working until then. :)
Thanks for the info!
ws would be terrible with
this design.
I plan on asking the slony guys too, but I figured someone on this list
might happen to know.
Looks like I might have to look into Bucardo or Londiste.
Thanks!
. :)
Thanks, again!
to forward to AD), and that works great, but has the same
problem. If the user is presented with a password prompt more than once
in a row, something has failed.
n the admin team brought up Kerberos as a way to let the
underlying system punt through to the LDAP server, so we're
investigating that instead. If we then strongly encourage people to not
use .pgpass and just let kerberos cache their credentials, that should
take care of it. Maybe.
lem. I don't think I'd even
want a restricted set of root users able to see my LDAP password in
plain text.
Has anyone put thought into combining LDAP and .pgpass, or has it simply
been abandoned every time the issue has presented itself?
Thanks in advance!
Our older RHEL systems crashed like it was their job when we
were using our onboard Broadcom NICs. Maybe we're expecting too much.
se thought.