pg_dump is a libpq client, and thus will read the password from the
environment (the PGPASSWORD variable) or from the ~/.pgpass file. No need to
emulate any command prompt tty operations.
On Fri, Nov 17, 2017 at 4:06 PM, marcelo wrote:
> I need to "emulate" the pg_dump code because of the password prompt. Years
> ago I wrote a program (
On Wed, Sep 20, 2017 at 10:10 AM, Job wrote:
> We noticed that if we import directly into the global table it is really,
> really slow.
> Importing directly in the single partition is faster.
>
>
Do you have a rule or trigger on the main table to redirect to the
partitions? You should expect that
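A minimal sketch of such a redirect trigger, for reference (the table and
partition names here are hypothetical):

-- route inserts on the parent table to the right child partition
CREATE OR REPLACE FUNCTION route_insert() RETURNS trigger AS $$
BEGIN
  IF NEW.created_at >= DATE '2017-09-01' THEN
    INSERT INTO measurements_2017_09 VALUES (NEW.*);
  ELSE
    INSERT INTO measurements_older VALUES (NEW.*);
  END IF;
  RETURN NULL;  -- suppress the insert into the parent; the child has the row
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_route BEFORE INSERT ON measurements
  FOR EACH ROW EXECUTE PROCEDURE route_insert();

Per-row dispatch like this is the usual reason inserting through the parent
is much slower than inserting into the partition directly.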
On Mon, Sep 11, 2017 at 9:02 PM, Yogesh Sharma
wrote:
> Dear All,
>
>
> We have one Perl script that is opening a DB connection and performing
> insert DB operations. When a duplicate entry comes in a query, Postgres
> forcefully kills the process itself, and in the Postgres log: "unexpected
> EOF on client co
On Wed, Sep 6, 2017 at 1:16 AM, Vijay Chaudhery wrote:
> Ok, is it possible to release the lock? Or is it just a warning message
> only?
> thanks
>
The message starts with "ERROR" so it is an error, not a warning. The
command failed. The only way to release the lock is to have the other
connecti
On Tue, Aug 8, 2017 at 12:36 PM, Tom Lane wrote:
> Hmmm ...
>
> > SHELL=/bin/tcsh
>
> Mine's bash ... I wonder whether that could make a difference here?
> I'm pretty sure the PG makefiles aren't set up for csh syntax.
> But I can't see how that would make it work in the manual case
> and not whe
What exactly does the configure flag to enable systemd support do? It seems
to me that building software for the systemd platform is just the same as
building it for Windows vs. Unix or any other platform. One can only hope it
doesn't cause the others to wither away.
On Wed, Jul 12, 2017 at 3:20 AM,
I've not done a formal study, but I've always found that throwing hardware
at the problem does wonders. My current database I made faster by bumping
RAM until the entire working set fits in memory. The server has 256GB of
RAM, half of which is used by ZFS for its purposes, and the other half for
Po
I'm curious why you have so many partial indexes. Are you trying to make
custom indexes per query? It seems to me you might want to consider making
the indexes general, and removing the redundant ones (those that have the
same prefix list of indexed fields).
Secondly, your table is 102 GB. Clearly there's a
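To illustrate the redundant-prefix point (index and table names here are
made up):

-- idx_events_user is redundant: idx_events_user_created has the same
-- leading column, so it can serve any query the narrower index can
CREATE INDEX idx_events_user ON events (user_id);
CREATE INDEX idx_events_user_created ON events (user_id, created_at);
DROP INDEX idx_events_user;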
1) restore from backup
2) fix whatever configuration you made that lets Windows (or your hardware)
destroy your data on a crash. Is there some RAID cache that is not backed up
by a battery?
On Wed, Apr 19, 2017 at 10:18 AM, Alexandre wrote:
> Hello,
>
> The computer had an unexpected shutdown, it is
On Tue, Apr 4, 2017 at 4:15 AM, Kyotaro HORIGUCHI <
horiguchi.kyot...@lab.ntt.co.jp> wrote:
> The previous expression intended to avoid decimal arithmetic, but
> gcc optimizes the simple division better (using cmovns-add-sar)
> than the current INT64_AU32 (jmp-sar) so I changed it. This
> doesn't
On Thu, Mar 30, 2017 at 6:10 AM, Moreno Andreo
wrote:
> Since I'm on Google Cloud Platform, I thought it would be a good idea to
> see what it offers.
They currently have in beta a Postgres flavor of their cloudsql. I haven't
used it yet, but I plan to test it sometime in the next couple of mon
Since you're using bash, I will assume you are not averse to using a
slightly complicated pipeline. First, install this:
https://github.com/wireservice/csvkit
Then use that to cut out the columns you want, and pipe the result into
psql with an appropriate \copy command.
On Wed, Mar 8, 2017 at 4:1
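The receiving end of that pipeline could be a \copy reading the trimmed CSV
from stdin, something like this (table and column names are made up):

\copy people (id, name) FROM STDIN WITH (FORMAT csv, HEADER)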
On Tue, Mar 7, 2017 at 2:02 AM, Panagiotis Atmatzidis
wrote:
> I want to make a clone of database1 which belongs to user1, to database2
> which belongs to user2. Database1 has 20+ tables. I want to avoid copying
> the DATA sitting on 5 tables on database1 (many Gigs).
>
> I've read one too many p
On Wed, Feb 15, 2017 at 10:32 AM, pinker wrote:
> DELETE FROM table_name WHERE ctid = any ( array ( select tn.ctid from
> table_name tn JOIN items i on tn.itemid=i.itemid WHERE tn.clock < extract (
> epoch FROM now() - i.history * interval '10 day')::int + 6 limit 100));
>
> Could I be sure that
On Wed, Jan 18, 2017 at 7:32 AM, PAWAN SHARMA
wrote:
> Hello All,
>
> I am using Postgres 9.5 enterprise edition and Postgres 9.5 open source,
> where I want to know the solution to two problems.
>
> 1. How can we restore a single database from base backup files only? I
> don't have a pg_dump backup.
>
Re
On Tue, Jan 10, 2017 at 1:39 PM, Melvin Davidson
wrote:
> *Why not just kill the blocking process first? Then everything else will
> proceed.*
>
That's what he said to do. You can do that with a `SELECT
pg_cancel_backend($pid)` query.
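Assuming you've identified the blocker's pid in pg_stat_activity, either of
these works (12345 is a made-up pid):

SELECT pg_cancel_backend(12345);     -- cancel only its current query
SELECT pg_terminate_backend(12345);  -- kill the whole backend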
On Wed, Dec 21, 2016 at 11:31 AM, Tom Lane wrote:
> Well, we're picky to the extent that RFC 3629 tells us to be picky:
> http://www.faqs.org/rfcs/rfc3629.html
>
And I'm *GLAD* it is that way. Who wants garbage in their database? :)
On Wed, Dec 21, 2016 at 8:43 AM, Yogesh Sharma
wrote:
> I have responded on the previous mail.
>
>
Apparently you started two separate threads...
> In my production system, there are a lot of read/write operations performed
> every hour.
>
That still doesn't answer why you feel you need to run rein
On Wed, Dec 21, 2016 at 2:56 AM, Kyotaro HORIGUCHI <
horiguchi.kyot...@lab.ntt.co.jp> wrote:
> A PostgreSQL database with encoding=UTF8 just accepts the whole
> range of Unicode, regardless that a character is defined for the
> code or not.
Interesting... when I converted my application and databa
On Wed, Dec 21, 2016 at 4:02 AM, Yogesh Sharma wrote:
> Also, every hour, I am performing VACUUM and REINDEX operations on the table.
Why?
On Wed, Dec 7, 2016 at 8:33 AM, Michael Sheaver wrote:
> with this for a couple days about a year ago, the workaround I found that
> works is to first import it into a MySQL table, strip out the characters in
> MySQL, dump the data out to a CSV and finally bring the sanitized data into
> Postgres
On Wed, Nov 23, 2016 at 1:16 PM, Israel Brewster
wrote:
> I was wondering if someone could inform me about, or point me to an online
> article about best practices for backing up a postgresql database cluster?
> At the moment, I have two servers running with streaming replication for
> failover p
On Thu, Nov 17, 2016 at 8:08 PM, Rakesh Kumar
wrote:
> I noticed that as I scale from 5 to 10 to 20 to 40, the TPS starts falling
> almost linearly:
>
> with 5, TPS was doing 639
> with 10 TPS was down to 490
> with 20 TPS was down to 280
> and so on.
Are the TPS numbers per pgbench? If so, the
On Mon, Nov 7, 2016 at 8:23 AM, Howard News wrote:
> pg_restore: executing SEQUENCE SET example_seq
> pg_restore: processing data for table example_table
> pg_restore: [compress_io]
>
> ** crash **
What crashes? The pg_restore process or the backend server?
On Fri, Oct 21, 2016 at 4:53 PM, Jason Dusek wrote:
> This is really only a temporary fix, though. We can have a cron job running
> in the background running TRUNCATE ONLY ... but this seems like the kind of
> thing that auto-vacuuming should have handled for us, before the problem got
> “too larg
On Fri, Sep 30, 2016 at 5:11 AM, John R Pierce wrote:
> On 9/30/2016 2:06 AM, Rakesh Kumar wrote:
>>
>> We require complete data isolation. Absolutely nothing should be shared
>> between two tenants.
>>
>> WHy would multiple dbs be any worse than multiple schemas in performance?
>
>
> complete? u
On Fri, Sep 30, 2016 at 6:06 AM, Rakesh Kumar
wrote:
> A typical fear mongering Q from
> them "what if due to a bug in your s/w, our competitors end up looking at our
> data" or
> something like that. That's why schema level vs db level discussion.
So... if your software isolates customers based
On Mon, Sep 12, 2016 at 3:46 PM, Lee Hachadoorian
wrote:
> * Because database is updated infrequently, workforce can come
> together for LAN-based replication as needed
> * Entire database is on the order of a few GB
Just update one copy, then send pg_dump output to the others for stomping
over the ol
On Sun, Sep 4, 2016 at 4:37 PM, Patrick B wrote:
> That's great news! My only concern is about the "RSYNC" - Hope that doesn't
> take long!!!
>
> All these steps must be performed by me over the next few days/weeks - I'll
> keep you guys updated... Keen to see the new DB running in a SSD environment
On Thu, Sep 1, 2016 at 1:44 PM, Mike Sofen wrote:
> $100k / $620 = 161 months of operation before cost parity.
You ought to maybe compare what you can buy from an outfit like
Silicon Mechanics for ~$15k. I suspect you could get 16-core, 256GB
RAM, and several TB of disk. And then your parity would
On Tue, Aug 30, 2016 at 7:10 AM, Nicolas Grilly
wrote:
> Let's say we have a table containing data for 10,000 tenants and 10,000 rows
> per tenant, for a total of 100,000,000 rows. Let's say each 8 KB block
> contains ~10 rows. Let's say we want to compute the sum of an integer column
> for all ro
On Wed, Jul 27, 2016 at 3:28 AM, Anton Ananich wrote:
> In my situation this order is invalid. Obviously, year 2016 should go after
> 2014, like that:
I think you expect JSONB to sort differently than it does. I cannot
imagine what a "natural" ordering of arbitrary JSON objects is.
Binary replication requires that the versions be identical. Also, once you
ran pg_upgrade you altered one of the copies, so binary replication can no
longer work on that either.
On Sun, Jul 3, 2016 at 11:06 AM, Andy Colson wrote:
> Hi all,
>
> I have a master (web1) and two slaves (web2, webserv), one
On Thu, Jun 16, 2016 at 10:32 AM, Jean-Francois Prieur
wrote:
> 1) Is there any way to tell pg_dump the order in which the tables should
> be dumped?
> 2) Am I correct to assume that if I use the --disable-triggers option in
> pg_restore it should mitigate the problem? This only works on a data-on
Good old-fashioned "print" :)
I tend to use RAISE DEBUG and look in the logs. The most complicated
procedures we have in our system enforce state-diagram changes for a
given field that tracks an object's status, so maybe this doesn't scale well.
On Wed, May 4, 2016 at 12:43 PM, Guyren Howe
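A minimal sketch of that pattern (hypothetical status-tracking trigger; set
client_min_messages or log_min_messages to a debug level to see the output):

CREATE OR REPLACE FUNCTION check_status_change() RETURNS trigger AS $$
BEGIN
  RAISE DEBUG 'order % status: % -> %', NEW.id, OLD.status, NEW.status;
  IF OLD.status = 'shipped' AND NEW.status = 'pending' THEN
    RAISE EXCEPTION 'illegal status transition for order %', NEW.id;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;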
On Thu, Apr 21, 2016 at 9:00 AM, Alex Ignatov
wrote:
> MS Windows doesn't have ZFS support. AIX also doesn't. z/OS also. Other
> commercial Linux distros don't have ZFS support. Yes, you can compile it and
> use it in production, but...
> But PG runs on the above OSes, yet has checksums off by defau
On Wed, Apr 20, 2016 at 3:43 AM, Alex Ignatov
wrote:
> What do you think about defaulting --data-checksums in initdb?
>
I think that ZFS storing my database files already does this and can
correct for it using replicated copies, so why do I need a second layer of
checksums?
On Fri, Mar 18, 2016 at 5:46 PM, Adrian Klaver
wrote:
> They should be able to, see below. If that is not your case, then more
> information is needed.
>
You can see your own queries; however, a non-superuser will not see the query
text for other users. You will be able to see the other info, though.
I
On Wed, Mar 16, 2016 at 9:34 PM, drum.lu...@gmail.com
wrote:
> The problem is that I need to do that at the same time, because of a
> constraint:
>
>
Mark your constraint as deferrable, and then defer the constraints within
your transaction.
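Something like this, assuming a foreign-key constraint named
accounts_parent_fk (names made up; ALTER CONSTRAINT needs 9.4 or later):

ALTER TABLE accounts
  ALTER CONSTRAINT accounts_parent_fk DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS accounts_parent_fk DEFERRED;
-- the statements that are temporarily inconsistent go here
COMMIT;  -- the constraint is checked once, at commit time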
100 is an integer.
power(10,2) is double precision.
Try this one:
SELECT floor(4.725 * 100::double precision + 0.5);
On Mon, Mar 14, 2016 at 10:11 AM, Frank Millman wrote:
> Hi all
>
> I am running PostgreSQL 9.4.4 on Fedora 22.
>
> SELECT floor(4.725 * 100 + 0.5) returns 473, which is wh
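The difference is exact numeric arithmetic versus binary floating point;
compare (results as I would expect them, worth verifying):

SELECT floor(4.725 * 100 + 0.5);
-- numeric: 4.725 * 100 is exactly 472.500, so this yields 473
SELECT floor(4.725 * 100::double precision + 0.5);
-- double precision stores 4.725 as slightly less than 4.725, so the
-- sum lands just under 473 and floor() yields 472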
On Tue, Feb 2, 2016 at 4:36 PM, John McKown
wrote:
> (3) save space on the various archives.
(4) loses historical information when the linked document goes away.
On Sun, Jan 24, 2016 at 11:12 PM, drum.lu...@gmail.com
wrote:
> So, I made a SELECT to get some data, to see where the issue is:
>
> SELECT * FROM integrations.accounts WHERE qb_settings IS NULL OR
> xero_settings IS NULL OR freshbooks_settings IS NULL OR myob_settings IS
> NULL OR ppy_settings IS
On Fri, Jan 15, 2016 at 12:03 PM, FarjadFarid(ChkNet)
wrote:
> Because of the current political environment we live in, even though I am
> neither a Muslim nor a Jew (I am a Baha'i), I think we should not discuss
> religion or politics on this forum. All such topics can be discussed
> privately.
On Wed, Jan 13, 2016 at 11:51 AM, Jeff Janes wrote:
> Anyway, it looks to me like you have a checkpoint problem. The checkpoint
> overwhelms your IO system. The overwhelmed IO system then backs up into
> the bgwriter. What you see in the bgwriter is just a symptom, not the
> cause. The backgr
On Wed, Jan 13, 2016 at 3:03 AM, Andreas Joseph Krogh
wrote:
> Seems like pglogical might be better?
> http://2ndquadrant.com/en/resources/pglogical/
>
>
I would have no idea. I never used it, nor do I run RHEL (or any Linux for
that matter), which seems to be the only supported OS for that softw
On Tue, Jan 12, 2016 at 4:20 PM, Andreas Joseph Krogh
wrote:
> I'm moving a > 500GB DB to another server which is initdb'ed with a
> different locale (nb_NO.UTF-8 to get correct collation when ORDER BY).
> Is there another option than pg_dump/restore for doing this?
>
I recently used slony to mo
On Mon, Jan 11, 2016 at 6:55 PM, Steve Litt wrote:
> All because somebody just *had* to personally insult someone else,
> repeatedly, and nobody thought that was a bad thing, and when the
> recipient finally objected, the objection was chalked up to him or her
> valuing his/her victimhood.
+1
I
On Tue, Nov 3, 2015 at 10:07 AM, Sami Pietilä
wrote:
> Unfortunately I could not figure out how to select rows which, for
> example, contain the following JSON: '{"a":"world","c":{"b":"helloworld"}}',
> by searching with the string "hello".
>
Cast the field to text:
select * from t where myfield::text li
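Spelled out in full (note that a cast-to-text LIKE cannot use an ordinary
index, so this scans the table):

SELECT * FROM t WHERE myfield::text LIKE '%hello%';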
On Tue, Oct 20, 2015 at 4:04 AM, Andreas Kretschmer <
akretsch...@spamfence.net> wrote:
> If you are returning JSON, then JSON is the best choice.
>
The JSONB type represented as text (i.e., when you query it) is valid JSON.
The same is true of the JSON data type. The DB server renders the data as text when
On Thu, Oct 8, 2015 at 3:49 AM, wrote:
> Is there an advantage of hstore vs. json/jsonb?
>
Definitely an advantage over json, because json is not indexable. I'm not
seeing an advantage over jsonb, unless you want to prevent storing complex
data structures.
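The indexability point, concretely (table and column are hypothetical; the
default jsonb GIN operator class supports containment):

CREATE INDEX ON docs USING gin (payload);
SELECT * FROM docs WHERE payload @> '{"status": "active"}';  -- can use GIN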
On Mon, Sep 7, 2015 at 4:48 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> IIRC, the planner doesn't understand
>
> overlaps so having a definition of:
>
> IN (1,2,3,4,5); or nearly equivalently = ANY(ARRAY[1,2,3,4,5])
>
> and a request for:
>
> IN (1,3,5) / = ANY(ARRAY[1,3,5]) i
On Wed, Jul 8, 2015 at 10:17 PM, Francisco Reyes wrote:
> Does anyone know of a way to test the SSL connection such that it validates
> against the CA? Preferably an open source application. Connecting through
> psql works fine on SSL with what I have setup, but the application, xtuple,
> seems to st
On Wed, Jul 8, 2015 at 2:46 PM, John R Pierce wrote:
> but what security does that gain you? If someone gets your
> encrypted/hashed password, he can still log on. The pgpass file has to be
> permissions 700, so only YOU (and root) can read it.
>
Exactly this. If you want a script to authen
On Thu, Jul 2, 2015 at 10:56 AM, Joseph Kregloh
wrote:
> With my dataset I have been able to take advantage of the L2ARC. Currently
> using about 80GB on ARC and 260GB on L2ARC. With the ARC currently having
> the greater hit ratio.
>
Did you tell postgres that the effective_cache_size = Shared
On Wed, Jul 1, 2015 at 5:07 PM, Joseph Kregloh
wrote:
> We recently built a new server for our Production database. The machine is
> top of the line with 128GB of RAM, dual E5-2650. We also included NVME
> drives for ZIL and L2ARC. Currently we have 3 zpools. First one holds the
> FreeBSD install
On Fri, Apr 24, 2015 at 9:06 AM, Greg Spiegelberg
wrote:
> *LINE 3: SELECT COALESCE(ts1.user_id, ts2,user_id, ts3.user_id) AS us...*
>
You want ts2.user_id, not ts2,user_id.
On Thu, Apr 23, 2015 at 9:36 AM, Job wrote:
> We have a table, about 500 MB, that is updated and written every day.
> When the machines update, the table is truncated and then re-populated with
> pg_bulk.
> But I think we write heavily when importing new data tables...
>
so this is static data you upda
On Thu, Apr 23, 2015 at 7:07 AM, Job wrote:
> Are there any suggestions for SSD drives?
> Is putting the DB into RAM and backing it up periodically to disk a valid
> solution?
>
I have some very busy databases on SSD-only systems. I think you're using
SSDs that are not rated for server use.
You
On Fri, Mar 20, 2015 at 1:55 PM, "Leonardo M. Ramé"
wrote:
> Well, pg_reload_conf() didn't reload the timezone config from
> postgresql.conf; I had to restart the service for it to change.
>
That's par for the course for pretty much any long-lived process. The time
zone is initialized once and only o
On Fri, Mar 13, 2015 at 5:59 AM, Job wrote:
> The application I use needs lots of static DB connections: in version 9,
> how many connections can be declared in postgresql.conf?
> I searched for a max number but noticed it is not specified.
>
It depends on how much shared memory you give to
On Fri, Mar 6, 2015 at 5:59 AM, pinker wrote:
> I have deleted a large number of records from my_table, which originally
> had
> 288 MB. Then I ran VACUUM FULL to make the table size smaller. After this
> operation the size of the table remains the same, despite the fact that the
> table
>
If your re
On Thu, Feb 26, 2015 at 9:50 PM, Melvin Call
wrote:
> Montreal where the e is an accented e. The output ends up in the text file
> as
> Montr\xe9al, where the xe9 is a single character. When I try to copy that
> into
> my PostgreSQL table, I get an error "ERROR: invalid byte sequence for
> encod
Fix the character sequence to be valid UTF8 before you restore it. I'm
assuming you're restoring from a pretty old version of Postgres which did
not do strict UTF8 character validation. Are you sure the data is encoded
as UTF8 and not in some other encoding?
On Tue, Feb 24, 2015 at 2:15 AM, Mitu Verma wrote:
> What could be the possible impacts of the leap second on June 30, 2015
> (which will make that day one second longer) on a PostgreSQL database?
>
>
As an experiment, try setting the time to the leap second and see if
postgres stores it as you expect.
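If I remember right, Postgres accepts a 60th second on input and simply
normalizes it to the next minute, e.g. (with TimeZone set to UTC; worth
confirming on your version):

SELECT '2015-06-30 23:59:60+00'::timestamptz;
-- expected: 2015-07-01 00:00:00+00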
On Fri, Feb 13, 2015 at 11:29 AM, Seref Arikan <
serefari...@kurumsalteknoloji.com> wrote:
> Hi Bill,
> Could you point at some resource(s) that discuss inserting directly into
> the partition? Would it be possible to read directly from the partition as
> well?
>
> When preparing your SQL statemen
On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun wrote:
> Does anybody have experience with huge number of partitions if so where
> did you start running into trouble?
>
I use an arbitrary 100-way split for a lot of tracking info. Just modulo
100 on the ID column. I've never had any issues with that.
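A sketch of that layout with inheritance-style partitioning (names made up;
constraint exclusion only kicks in if the query repeats the modulo
predicate):

CREATE TABLE tracking_42 (
  CHECK (id % 100 = 42)
) INHERITS (tracking);
-- the application inserts into tracking_42 directly when id % 100 = 42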
I'd restructure the table to be split into perhaps 100 or so inherited
tables (or more). That many rows in a table is usually not efficient with
Postgres in my experience. My target is to keep the tables under about 100
million rows. I slice them up based on the common query patterns, usually
by s
On Tue, Dec 16, 2014 at 7:49 AM, Luca Ferrari wrote:
>
> Am I missing something?
>
FWIW I tried this a few times, on FreeBSD 9.3 and 10.0 with Pg 9.2 I
believe. I've never had it build successfully. I don't know why it is an
option on the port if it doesn't work. I never filed a bug report beca
On Sun, Dec 14, 2014 at 1:38 AM, Michael Nolan wrote:
>
> Yeah, a cron job to swap pg_hba.conf files is the best solution I've come
> up with so far. It's not one web app, it's closer to two dozen of them, on
> multiple sites.
>
Use time-based firewall rules to block them?
On Sat, Dec 13, 2014 at 1:55 PM, Edson Carlos Ericksson Richter <
edsonrich...@hotmail.com> wrote:
>
> I've been searching in web for guidelines on OS (Linux) and PostgreSQL
> (9.3.5) active monitoring best practices.
>
Recent trends are more toward monitoring response latency by first
establishin
On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane wrote:
> needed to hold relcache entries for all 23000 tables :-(. If so there
> may not be any easy way around it, except perhaps replicating subsets
> of the tables. Unless you can boost the memory available to the backend
>
I'd suggest this. Break
I don't think autovacuum can predict that you will not update your table(s)
anymore, so it has no way of knowing to run FREEZE on them.
On Thu, Dec 4, 2014 at 12:02 PM, Mike Blackwell
wrote:
> check_postgres.pl (--action=autovac_freeze) recently complained that we
> needed to run VACUUM FREEZE. Doing so
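If you know a table has gone read-only, you can freeze it yourself, and you
can check how close each table is to the wraparound threshold
(my_archived_table is made up):

SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;

VACUUM (FREEZE, VERBOSE) my_archived_table;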
On Wed, Nov 26, 2014 at 10:53 AM, Joseph Kregloh
wrote:
> One thing I am pushing to do is using SSDs for the ZIL and L2ARC. This
> would allow for a pretty nice boost in speed.
>
I use ZFS on FreeBSD for my large production server configuration as well. I
have SSDs for the L2ARC and that works *r
On Wed, Nov 26, 2014 at 1:57 PM, Dev Kumkar wrote:
> Can you let me know what exact details should be provided here?
>
You could start with the version of Postgres you're using.
On Tue, Nov 18, 2014 at 11:49 AM, Robert DiFalco
wrote:
> Either I'm not that smart or I am working on too many things at once (or
> both) but making Full Text work seems super tedious. I just have a single
> VARCHAR field for name, so the full name "William S. Burroughs" is a single
> row and co
It should take about as much time as a cp -r of the directory for that
DB would take.
On Tue, Oct 14, 2014 at 1:58 PM, Adrian Klaver
wrote:
> On 10/14/2014 10:43 AM, Matthew Chambers wrote:
>>
>> Thanks,
>>
>> How long did it take to complete?
>>
>
> I did not time that copy. I just tried with a s
On Mon, Oct 13, 2014 at 12:11 AM, Stephen Davies wrote:
> I have fixed this by manually granting access where necessary but wonder
> whether the original issue is a bug or something that I have missed in the
> migration.
pg_dump emits the necessary GRANTs for the tables.
Did you use pg_dumpall -
Using my magick powers of mind reading, I will guess you made circular
dependencies.
On Fri, Oct 3, 2014 at 12:17 PM, Tim Mickelson
wrote:
> Even if I try to delete the data entry in the table channel_mapping with
> idaut 1622, it is not possible to delete from the table
> tmp_autenticazionesocia
On Tue, Sep 23, 2014 at 2:29 AM, Abelard Hoffman
wrote:
> Typically, applications open the port as root and then change to low
> privilege for the actual service.
> See also this thread:
>
> http://stackoverflow.com/questions/413807/is-there-a-way-for-non-root-processes-to-bind-to-privileged-ports
On Mon, Sep 1, 2014 at 12:11 PM, Tom Lane wrote:
> I wonder whether this was a bad idea. I think it's unsurprising for the
> definition of "alphanumeric" to depend on locale, but I bet most people
> are not expecting \d to vary that way.
FWIW, the Perl man page on unicode (perldoc perlunicode) s
On Mon, Jun 9, 2014 at 8:55 PM, Jonathan Vanasco wrote:
> I can't figure out which one to use. This is on a steadily growing
> table of around 20MM rows that gets 20-80k new records a day, but existing
> records are rarely updated.
The question, as always, is a time-space trade-off. How
On Tue, Apr 29, 2014 at 10:22 AM, Steve Clark wrote:
> We have a small (2GB) 8.4.20 database. The prior maintainer ran a reindexdb
> twice a day (originally started with 7.3.x). Can anybody see a reason why we
> need to
> continue to do this?
I used to re-index regularly with earlier 6.x and 7.x
On Tue, Apr 8, 2014 at 11:28 AM, Hannes Erven wrote:
> On 2014-04-08 15:27, Vick Khera wrote:
> [...]
>>
>> Problem 2: you cannot have a second process skip over locked rows.
>
>
>
> In fact, you can: use "FOR UPDATE NOWAIT" and catch any errors.
>
On Thu, Apr 3, 2014 at 4:00 PM, John R Pierce wrote:
> an important thing in getting decent wear leveling life with SSDs is to keep
> them under about 70% full.
You have to do that at provisioning time in the drive. I.e., once you
layer a file system on it, the drive doesn't know what's "empty" and
On Fri, Apr 4, 2014 at 12:03 AM, François Beausoleil
wrote:
> Our workload is lots of data import, followed by many queries to summarize
> (daily and weekly reports). Our main table is a wide table that represents
> Twitter and Facebook interactions. Most of our reports work on a week's worth
>
Problem 1: how (and why) are you locking rows in table Y?
Problem 2: you cannot have a second process skip over locked rows. It
sounds to me like a job queue... what you want to do is make it work
*really* fast to mark the job as taken and commit that work so the
locks are released. Then go do the
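A sketch of that job-queue pattern using the FOR UPDATE NOWAIT idea
mentioned elsewhere in the thread (jobs and taken are hypothetical; NOWAIT
raises an error instead of waiting, which the caller must catch and retry):

BEGIN;
UPDATE jobs SET taken = true
WHERE id = (SELECT id FROM jobs WHERE NOT taken
            ORDER BY id LIMIT 1
            FOR UPDATE NOWAIT)
RETURNING id;
COMMIT;  -- commit right away so the row lock is held only briefly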
I'm looking at some performance tuning for a server running FreeBSD and the
DB on top of ZFS.
I keep reading the recommendation to turn off the primary data cache in ZFS
since postgres does its own caching. The recommendations also say to set
effective_cache_size to a significant amount of the RAM
On Wed, Jan 15, 2014 at 4:10 AM, Gábor Farkas wrote:
> or in other words, when are two varchars equal in postgres? when their
> bytes are? or some algorithm is applied?
>
On this topic, when I write my strings to the DB and search from the DB,
should I canonicalize them first as NFKC (or some oth
No. Use cron to run a query that does what you need.
On Wed, Dec 11, 2013 at 6:38 AM, Dev Kumkar wrote:
> How do I create scheduled events in Postgres, similar to what events are
> in Sybase?
> Is there any method of doing so?
>
> Also, I am looking at pgAgent, which can create jobs, but is it similar lik
On Tue, Dec 3, 2013 at 4:04 PM, john gale wrote:
> Does this suggest that VACUUM FULL needs free disk space on the order of
> the full size of the table that it's vacuuming to be able to complete? Or
> does it / can it write the filesystem files in the 1GB chunks stored in
> /base while removing
On Tue, Nov 26, 2013 at 4:48 PM, Bruce Momjian wrote:
> Right. I know of no mechanism to verify a certificate via a public CA
> through SSL. Browsers have a list of trusted certificates, but SSL
> alone doesn't, as far as I know.
>
SSL as a library/protocol has mechanisms to verify the certific
On Tue, Nov 26, 2013 at 2:48 PM, Joey Quinn wrote:
> The ranges are indeed overlapping, though the update statements were
> generated alphabetically rather than in IP order... If the command line
> will let me query the table directly without being blocked by the ongoing
> updates, then I could g
On Tue, Nov 26, 2013 at 1:31 PM, Bruce Momjian wrote:
> Well, by using a CA you are giving the CA rights to the key, while you
> fully control a self-signed key. Since you probably don't expect
> unknown individuals to be connecting to your database, a self-signed
> key is recommended.
>
You
On Tue, Nov 26, 2013 at 12:24 PM, Joey Quinn wrote:
> When I ran that command (select * from pg_stat_activity"), it returned the
> first six lines of the scripts. I'm fairly sure it has gotten a bit beyond
> that (been running over 24 hours now, and the size has increased about 300
> GB). Am I mi
On Tue, Nov 26, 2013 at 12:11 PM, Joey Quinn wrote:
> update ipv4_table set country='xx' where ipv4 between 'xxx.xxx.xxx.xxx'
> and 'xxx.xxx.xxx.xxx';
>
> There are 127k lines like that (each with a different range and the
> appropriate country code). Each is terminated with a semi-colon. Does th
Connect to the DB and run "select * from pg_stat_activity" to see what
specific query your other connection is running. Then find that in your
file to see how far it has progressed.
I hope you profiled your queries to make sure they run fast before you
started. :)
On Tue, Nov 26, 2013 at 10:28
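The query I use for that looks something like this (the state and query
columns are there in 9.2 and later):

SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;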
On Mon, Oct 14, 2013 at 6:31 AM, Florian Nigsch wrote:
> My question is then - where does this error come from? Is it because
> Postgres allocates the same name (table1_lower_idx) twice when the index
> begins building, because at that time there's no index present with that
> name? But if one in
On Tue, Oct 15, 2013 at 2:38 PM, Quang Thoi wrote:
> I want to roll back deletion if there is a reference (FK) in another table.
>
>
> Can I explicitly call a rollback inside a function?
>
You should RAISE an error. The transaction should roll back due to the
error.
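A minimal sketch (parent, child, and the column names are hypothetical):

CREATE OR REPLACE FUNCTION delete_parent(p_id int) RETURNS void AS $$
BEGIN
  IF EXISTS (SELECT 1 FROM child WHERE parent_id = p_id) THEN
    RAISE EXCEPTION 'parent % is still referenced', p_id;
  END IF;
  DELETE FROM parent WHERE id = p_id;
END;
$$ LANGUAGE plpgsql;

The RAISE EXCEPTION aborts the surrounding transaction, so the delete is
rolled back without any explicit ROLLBACK.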
http://lmgtfy.com/?q=postgres+ETL+tools
On Mon, Oct 7, 2013 at 11:02 AM, sunpeng wrote:
> Hi, Friends, are there any ETL tools (free or commercial) available for
> PostgreSQL?
> Thanks!
> peng
>
On Thu, Sep 19, 2013 at 11:31 AM, ascot.m...@gmail.com wrote:
> I use PG 9.2.4 with streaming replication. What will be the manual
> procedure to failover from Primary to Standby and Set the old Primary as a
> new standby?
>
From what I understand, you start over by setting up the old primary