On Tue, Dec 22, 2015 at 04:52:23PM -0600, oleg yusim wrote:
> Greetings,
>
> I'm looking at the following security control right now:
>
> The DBMS must prevent unauthorized and unintended information transfer via
> shared system resources.
>
> The purpose of this control is to prevent informatio
On Wed, Dec 23, 2015 at 07:07:31AM -0600, oleg yusim wrote:
> May we run into situation, when attacker dumps memory and analyses it
> for valuable content, instead of reserving it for own process, where
> it would be zeroed? My understanding, it is a possibility. Does kernel
> have any safeguard a
Hi Howard,
So long as you haven't touched anything else, simply reinstalling the
package should restore your cluster. Debian packages only do
initialization if the data directories are missing.
David
On Thu, Mar 24, 2016 at 05:29:23PM +0000, Howard News wrote:
> Hi,
>
> I uninstalled the wrong
t-relational SQL database, version
> 9.1
> ii postgresql-9.2 9.2.15-1.pgdg1 object-relational SQL database, version
> 9.2
>
> Does this mean it is removed?
>
> On 24/03/2016 17:34, David Wilson wrote:
> >Hi Howard,
> >
> >So long as you haven't touched anything e
On Mon, Apr 25, 2016 at 03:18:30PM +1000, da...@andl.org wrote:
> It seems to me that it should be possible to create a language front
> end for Andl by creating a background worker and a language handler.
I've been reading your posts over the past few days and while I find it
fun to follow, I ca
On Wed, Oct 27, 2010 at 5:02 PM, Michael Clark wrote:
>
> while ( ((consume_result = PQconsumeInput(self.db)) == 1) &&
> ((is_busy_result = PQisBusy(self.db)) == 1) )
> ;
>
>
> The problem with this code is that it's effectively useless as a test.
You're just spinning in a loop; if you
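The excerpt cuts off there, but the standard libpq alternative is to sleep on the connection's socket until the server sends something, instead of spinning. A plain-C sketch (a bare PGconn stands in for the Obj-C wrapper above; error paths trimmed):

#include <sys/select.h>
#include <libpq-fe.h>

/* Wait for an async query (sent with PQsendQuery) to finish without
 * burning CPU. Returns 0 once PQgetResult() will no longer block. */
static int wait_for_result(PGconn *conn)
{
    int sock = PQsocket(conn);
    fd_set readable;

    while (PQisBusy(conn)) {
        FD_ZERO(&readable);
        FD_SET(sock, &readable);
        /* block until the server sends data... */
        if (select(sock + 1, &readable, NULL, NULL, NULL) < 0)
            return -1;
        /* ...then absorb it; PQisBusy() turns false once a complete
         * result has been buffered */
        if (!PQconsumeInput(conn))
            return -1;
    }
    return 0;
}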
I have a reasonably large table (~75m rows,~18gb) called "vals". It
includes an integer datestamp column with approximately 4000 unique
entries across the rows; there is a normal btree index on the
datestamp column. When I attempt something like "select distinct
datestamp from vals", however, expla
The set of datestamps in the
vals table is a large (~98%) subset of the set of datestamps in the
sdays table. Of the approximately 4000 distinct datestamps, there are
80 that appear in the sdays table and not in the vals table.
On Sun, Apr 6, 2008 at 10:20 PM, Craig Ringer
<[EMAIL PROTECTED]> wrote:
> Davi
On Mon, Apr 7, 2008 at 2:05 AM, Alban Hertroys
<[EMAIL PROTECTED]> wrote:
> On Apr 7, 2008, at 1:32 AM, David Wilson wrote:
>
> >
>
> The database's estimates seem consistent with yours, so why is it doing
> this? Could you provide an EXPLAIN ANALYSE? It shows the a
On Mon, Apr 7, 2008 at 1:42 PM, Alban Hertroys
<[EMAIL PROTECTED]> wrote:
>
> Have you tried this query with enable_seqscan=off? If my guess is right
> (and the planner's, in that case) it'd be even slower.
set enable_seqscan=off;
explain select distinct datestamp from vals;
On Mon, Apr 7, 2008 at 7:57 PM, Gregory Stark <[EMAIL PROTECTED]> wrote:
>
> You could try changing it to the equivalent GROUP BY query. The planner,
> unfortunately, doesn't know they're equivalent and has two separate sets of
> plans available. In this case where there are only 4,000 distinct
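Gregory's rewrite, using the table and column names from earlier in the thread:

select datestamp from vals group by datestamp;
-- same result set as select distinct datestamp from vals, but planned
-- separately, so e.g. a HashAggregate becomes available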
On Mon, Apr 7, 2008 at 9:11 PM, Stephen Denne
<[EMAIL PROTECTED]> wrote:
>
> You may be able to make use of an index by rearranging your query to
> generate a series between your min & max values, testing whether each value
> is in your table.
>
> You've got 4252 distinct values, but what is th
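The rest of the exchange is snipped, but the shape Stephen describes would be roughly this (worthwhile when max minus min isn't wildly larger than the ~4000 distinct values):

select s.d as datestamp
from generate_series((select min(datestamp) from vals),
                     (select max(datestamp) from vals)) as s(d)
where exists (select 1 from vals where vals.datestamp = s.d);
-- each EXISTS probe is one cheap btree lookup instead of a 75m-row scan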
On Wed, Apr 9, 2008 at 3:29 AM, <[EMAIL PROTECTED]> wrote:
> But if i do "VACUUM ANALYZE" without concurrent queries, everything runs fine
> afterwards.
>
> If i run "VACUUM ANALYZE" with few concurrent queries, it slows down to a
> crawl.
>
> Could it be that something like this is happeni
On Mon, Apr 14, 2008 at 1:34 PM, Chris Browne <[EMAIL PROTECTED]> wrote:
> "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
> > Another test. In postgres I added an index to the userid column and
> > then counted distinct userids. The average run time over three
> > queries was 4666 seconds, or
(I originally missed replying to all here; sorry about the duplicate,
Vance, but figured others might be interested.)
On Wed, Apr 16, 2008 at 1:55 PM, Vance Maverick <[EMAIL PROTECTED]> wrote:
>
> Another approach would be to queue the log entries in a "staging" table,
> so that a single process
On Thu, Apr 17, 2008 at 1:55 PM, Roberts, Jon <[EMAIL PROTECTED]> wrote:
> Using the term "metadata" is misleading in your example. Metadata is
> data about data and in your example, you just have data. It is fine to
> join tables in an RDBMS.
>
I believe you missed the OP's actual goal. He's
On Mon, Apr 21, 2008 at 7:55 PM, Joris Dobbelsteen
<[EMAIL PROTECTED]> wrote:
>
> If you want to clean up the staging table I have some concerns about
> the advisory lock. I think you mean exclusive table lock.
Either works, really. An advisory lock is really just a lock over
which you have c
On Tue, Apr 22, 2008 at 9:52 AM, Joris Dobbelsteen
<[EMAIL PROTECTED]> wrote:
>
> Describe the mechanism, because I don't really believe it yet. I think you
> need to do a advisory lock around every commit of every transaction that
> writes to the log table.
Consider some number of reader process
On Tue, Apr 22, 2008 at 2:48 PM, Joris Dobbelsteen
<[EMAIL PROTECTED]> wrote:
>
> Ah, yes, all visible rows...
> My point is that, unless you use a transaction with serializable isolation,
> this all visible rows for the second statement might be different from those
> that you copied into the lo
I have a fairly simple table (a dozen real/integer columns, a few
indexes, one foreign key reference) with ~120m rows. Periodically the
table is truncated or dropped and recreated and the data is
regenerated (slightly different data, of course, or the exercise would
be rather pointless). The regene
On Tue, Apr 22, 2008 at 4:38 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> The best bet is to issue an "analyze table" (with your table name in
> there, of course) and see if that helps. Quite often the real issue
> is that pgsql is using a method to insert rows when you have 10million
> of t
On Tue, Apr 22, 2008 at 5:04 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> Normally, after the first 50,000 or so the plan won't likely change
> due to a new analyze, so you could probably just analyze after 50k or
> so and get the same performance. If the problem is a bad plan for the
> inse
On Tue, Apr 22, 2008 at 5:18 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>
> Try upping your checkpoint segments. Some folks find fairly large
> numbers like 50 to 100 to be helpful. Each segment = 16Megs, so be
> sure not to run your system out of drive space while increasing it.
>
Ahh, muc
On Tue, Apr 22, 2008 at 6:10 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "David Wilson" <[EMAIL PROTECTED]> writes:
>
> Are you loading any tables that are the targets of foreign key
> references from other tables being loaded? If so, I'd bet on
> Scott's
On Tue, Apr 22, 2008 at 7:33 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> What have you got shared_buffers set to? If it's not enough to cover
> the working set for your indexes, that might be the (other) problem.
>
shared_buffers = 1536MB
Is there a way to get the size of a specific index, on
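The question trails off there, but if it's asking how to measure a specific index on disk, the built-in size functions cover it (index name hypothetical):

select pg_size_pretty(pg_relation_size('vals_datestamp_idx'));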
Thanks for the help from everyone on this. Further investigation with
the suggested statistics and correlating that with some IO graphs
pretty much nailed the problem down to checkpoint IO holding things
up, and tuning the checkpoint segments and completion target (128 and
0.9 seemed to be the best
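In postgresql.conf terms, that tuning comes out as:

checkpoint_segments = 128             # 16MB per segment; watch disk space
checkpoint_completion_target = 0.9    # spread checkpoint writes out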
> Maybe I'm some crazy, radical DBA, but I've never had a version of
> pgsql get EOLed out from underneath me. I migrated from 7.4 to 8.1
> right around the time 8.2 came out then upgraded to 8.2 around 6
> months later.
>
> Where I work now we are looking at migrating from 8.1 to 8.2 or 8.3
select class, size from X t1 where size in (select size from X t2
where t2.class=t1.class order by size desc limit 5);
On Tue, May 6, 2008 at 1:22 PM, Kynn Jones <[EMAIL PROTECTED]> wrote:
> Suppose table X has two columns: class (TEXT) and size (INT). I want a
> listing showing the (up to) 5 la
On Wed, May 7, 2008 at 4:02 PM, John Smith <[EMAIL PROTECTED]> wrote:
> Does anyone see a problem with this approach (e.g., correctness,
> performance, etc.)? Or is there perhaps an alternative approach using
> some other postgresql mechanism that I'm not aware of?
Did you already look at and
On Wed, May 7, 2008 at 7:06 PM, Toby Chavez <[EMAIL PROTECTED]> wrote:
> My custom type needs to have one biginteger and one text value... pretty
> straight forward. I guess my first question is if there are any examples out
> there that do something similar. I have looked extensively through the
On Tue, May 13, 2008 at 5:43 PM, Nathan Thatcher <[EMAIL PROTECTED]> wrote:
> Is this the correct way to do this, or is there a better way / a way
> to get PostgreSQL to recognize an alias in the HAVING clause?
As Tom pointed out, f1's not in scope for the HAVING clause. If you're
that concerned a
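The reply above is truncated, but if retyping the expression is the worry, a subquery turns the alias into a real column (table and expression here are hypothetical):

select f1, n
from (select date_trunc('day', created) as f1, count(*) as n
      from events
      group by 1) as sub
where n > 10;
-- inside the subquery, group by 1 refers to the first output column;
-- outside it, f1 and n are ordinary columns and can be filtered freely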
On Wed, Jun 11, 2008 at 5:54 PM, David Lambert <[EMAIL PROTECTED]> wrote:
> I am trying to find a way to figure out what offset I would have to use in a
> SELECT with LIMIT and OFFSET clauses to get a grouping of records. For
> example:
>
> Consider a table full of first names. I want to be able
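The answer is snipped, but a common way to get that offset without paging through is to count the rows that sort ahead of the group (table and column names assumed):

select count(*) as group_offset
from names
where first_name < 'David';
-- group_offset is exactly the OFFSET at which the 'David' block
-- begins when ordering by first_name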
On Thu, Jun 19, 2008 at 7:54 AM, Steve Clark <[EMAIL PROTECTED]> wrote:
> I am not sure that will do what I want. As an example
> suppose I have 5 rows and the idfield is 1,2,3,4,5
> now row 1 is updated, not the idfield but another column, then row 3 is
> deleted.
> Now I would like to renumber
On Mon, Jun 8, 2009 at 11:04 AM, BlackMage wrote:
> Is there a field that can actually store time in PostgreSql. And what I mean
> by time is not like time of the day, but a specific amount of time.
The manual is your friend:
http://www.postgresql.org/docs/8.3/static/datatype-datetime.html
The interval type is what you want here.
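For instance:

select interval '2 hours 30 minutes';   -- a fixed amount of time
select now() + interval '90 seconds';   -- timestamp arithmetic
select interval '1 day' * 3;            -- intervals scale and add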
On Mon, Jun 15, 2009 at 2:23 PM, Mike Toews wrote:
> Hi,
>
> I'm having difficulty constructing a query that will find breaks where data
> change in a time-series. I've done some searching for this too, but I
> haven't found anything.
>
> Here is my example situation, consider my source table:
> da
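The reply is missing above; on 8.4 or later a window function handles this directly (column names assumed, since the example table is snipped):

select ts, val
from (select ts, val, lag(val) over (order by ts) as prev
      from series) as s
where val is distinct from prev;
-- keeps only the rows where the value changed from the previous reading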
On Wed, Jun 24, 2009 at 12:48 AM, Scott Marlowe wrote:
> Maybe. Rows that are updated often are NOT generally pgsql's strong
> suit, but IF you're running 8.3 or above, and IF you have a low enough
> fill factor that there's empty space for the updates and IF the fields
> you are updating are not
On Tue, Jul 14, 2009 at 7:47 PM, Jim Michaels wrote:
> could somebody rewrite pg_dumpall and pg_dump so that it makes editable
> dumps?
> most programmer's text editors can't handle more than 2000 characters per
> line.
> and I want to be able to edit my dumps.
You're going to have better luck fin
On Mon, Jul 27, 2009 at 7:52 PM, Janet Jacobsen wrote:
> Can you suggest other strategies?
Something that might be easier to play with is to create a (or
several, to speed up other queries) functional index on the comparison
between rbscore and the cutoff. It won't buy you anything on seq
scans,
On Mon, Jul 27, 2009 at 8:24 PM, Greg Stark wrote:
> I think it would be even more interesting to have partial indexes --
> ie specified with "WHERE rbscore < cutoff".
Yes- that's what I actually meant. Word got scrambled between brain
and fingers...
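A sketch of that partial index; the cutoff value and names are placeholders:

create index candidates_low_rbscore on candidates (rbscore)
where rbscore < 0.05;
-- only rows passing the predicate get indexed, so queries with a
-- matching (or narrower) WHERE clause hit a much smaller index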
--
- David T. Wilson
david.t.wil...@gmail.com
On Tue, Aug 11, 2009 at 3:35 AM, Juan Backson wrote:
> PGRES_COMMAND_OK
You want PGRES_TUPLES_OK for a select statement. You're not getting an
actual failure- you're checking for the wrong status.
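A sketch of the distinction, assuming an open connection ('widgets' is a placeholder table):

#include <stdio.h>
#include <libpq-fe.h>

/* Returns the row count, or -1 on failure. */
static int count_widgets(PGconn *conn)
{
    PGresult *res = PQexec(conn, "SELECT id FROM widgets");

    /* a successful SELECT reports PGRES_TUPLES_OK, even with zero rows;
       PGRES_COMMAND_OK is only for statements that return no tuples */
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    int n = PQntuples(res);
    PQclear(res);
    return n;
}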
--
- David T. Wilson
david.t.wil...@gmail.com
On Thu, Oct 1, 2009 at 10:04 PM, Chris wrote:
> Tim Uckun wrote:
>
> If this is a common query you could probably do a multi-column index on all
> 3 columns (id, company_id, source_model_name) - but if company_id and
> source_model_name have a low number of distinct values, then it's not going
>
On Thu, Oct 1, 2009 at 10:21 PM, Tim Uckun wrote:
>
> Interesting. I would have thought the order of the fields would not
> matter. I don't have to rewrite the query do I?
>
>
No. For multi-column indices, however, postgres can, starting at the
leftmost in the index, use as many columns as match
On Wed, May 26, 2010 at 4:29 PM, John Gage wrote:
> Please forgive this intrusion, and please ignore it, but how many
> applications out there have 110,000,000 row tables? I recently multiplied
> 85,000 by 1,400 and said now way Jose.
>
>
I've got a DB with two 500,000,000+ row tables. Performan
On Mon, Jun 14, 2010 at 5:24 AM, Leonardo F wrote:
> > For "inserts" I do not see the reason
> > why
> > it would be better to use index partitioning because AFAIK
> > b-tree
> > would behave exactly the same in both cases.
>
> no, when the index gets very big inserting random values gets
> very
On Mon, Jun 14, 2010 at 8:38 AM, Peter Hunsberger <peter.hunsber...@gmail.com> wrote:
>
>
> Can you define acceptable? IIRC the OP is looking for 20,000+ inserts /
> sec.
>
>
He's actually only looking for 2k inserts/sec. With a battery backed
controller I can sustain that, yes. That's also on
On Wed, Sep 15, 2010 at 1:36 PM, Carlos Mennens wrote:
> Thanks but then I am confused why I am getting the following:
>
You initially connected to the postgres database because you were logging in
as the postgres user...
>
> postgres=# \d
> No relations found.
> postgres=# \dt
> No relations f
On Wed, Sep 22, 2010 at 8:30 PM, Adrian Klaver wrote:
>
> From here;
>
> http://www.postgresql.org/docs/8.4/interactive/datatype-boolean.html
>
>
>
I believe the question relates to changing the string *output* of the
database to something other than 't' and 'f', not an issue with input
formats.
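If different output text really is the goal, the usual approach is to map it in the query itself (names hypothetical):

select case when active then 'yes' else 'no' end as active from users;
-- or, for 1/0 output:
select active::int from users;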
List-
I've got a calculation that, due to its use of exp(), produces double
precision intermediate results which I ultimately need to cast back to real
to store in a table column. However, with certain inputs, this results in an
underflow error. Is there any way outside of an exception-trapping
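The post is cut off, but one exception-free approach is to clamp the double-precision value before the cast; the threshold is approximate and the names hypothetical:

select (case when abs(x) < 1e-37 then 0 else x end)::real from calc;
-- real underflows somewhere below ~1.2e-38, so tiny magnitudes get
-- flushed to zero instead of raising an error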
My application is adding a bytea column to a table into which data is
dumped in approximately 4k row batches, one batch approximately every
10 seconds. To this point, those dumps have used copy from stdin;
however, I'm having some difficulty getting bytea encodings to work
with it. Specifically, I
On Mon, Jul 28, 2008 at 1:24 AM, Klint Gore <[EMAIL PROTECTED]> wrote:
> Try just a single \
>
> e.g.
> "ge.xls","application/vnd.ms-excel",71168,"\320\317\021\340\241[snip]
Thanks- I did try that, and it at least gave the expected output from
select, but is there a way to verify that it's actuall
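One way to verify what actually got stored, independent of display escaping (column names assumed; 'ge.xls' is from the example above):

select octet_length(content) as bytes, md5(content) as checksum
from attachments where filename = 'ge.xls';
-- compare against the size and md5 of the original file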
On Tue, Jul 29, 2008 at 5:42 PM, Bob Pawley <[EMAIL PROTECTED]> wrote:
> begin
Don't you need a ; after your begin...?
--
- David T. Wilson
[EMAIL PROTECTED]
On Fri, Aug 1, 2008 at 10:16 AM, Ivan Sergio Borgonovo
<[EMAIL PROTECTED]> wrote:
> what's the use of column in the COPY FROM syntax if I get:
>
> ERROR: extra data after last expected column
>
> I've read:
> http://bytes.com/forum/thread424089.html
>
> but then... is there any actual use?
>
I us
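The reply cuts off there, but the column list's real use is loading input that carries only a subset of the table's columns (names hypothetical):

create table items (id serial, sku text, added timestamp default now());
copy items (sku) from stdin;
ABC-123
ABC-124
\.
-- id and added come from their defaults; the error quoted above means
-- the input rows have more fields than the column list expects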
On Fri, Aug 1, 2008 at 4:07 PM, Alex Vinogradovs
<[EMAIL PROTECTED]> wrote:
> By loading in bulks, I mean I load some 40-50 thousand
> rows at once into a table that already has some millions.
> Index rebuild on that table after each 50k inserts will
> be even less efficient ;)
How many indexes do
On Fri, Aug 1, 2008 at 4:16 PM, David Wilson <[EMAIL PROTECTED]> wrote:
> On Fri, Aug 1, 2008 at 4:07 PM, Alex Vinogradovs
> <[EMAIL PROTECTED]> wrote:
>> By loading in bulks, I mean I load some 40-50 thousand
>> rows at once into a table that already has some mill
On Mon, Aug 4, 2008 at 2:08 PM, Rajarshi Guha <[EMAIL PROTECTED]> wrote:
> select count(aid) where cid = 123 and cid = 456;
>
> but I was wondering whether I could construct a single SQL statement to do
> this.
>
> Any pointers would be appreciated,
Typed into gmail, so may need some tweaking, bu
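The reply is truncated, but the usual single-statement form of "aids associated with both cids" is (table name assumed):

select count(*) from (
    select aid
    from assoc
    where cid in (123, 456)
    group by aid
    having count(distinct cid) = 2   -- present for both cids
) as both_cids;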
On Fri, Aug 15, 2008 at 11:42 PM, Amber <[EMAIL PROTECTED]> wrote:
> Dear all:
>We are currently considering using PostgreSQL to host a read only
> warehouse, we would like to get some experiences, best practices and
> performance metrics from the user community, following is the question lis
On Fri, Sep 5, 2008 at 2:52 AM, Subspace god <[EMAIL PROTECTED]> wrote:
> The following does not work in C++, executed as above, in another session
> using two subsequent calls to PQexecParams
>
> PREPARE myquery (text) AS INSERT INTO myTable (word) VALUES ($1);
> EXECUTE myquery($1::text);
You'r
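The reply is cut off, but note the quote says the EXECUTE happens "in another session": prepared statements are per-session, so both steps must run on one connection. A libpq sketch using the example's names; error checks trimmed:

#include <libpq-fe.h>

/* Both steps must run on the SAME PGconn: prepared statements are
   per-session and vanish when the connection closes. */
static void example(PGconn *conn)
{
    /* prepare once... */
    PGresult *prep = PQprepare(conn, "myquery",
                               "INSERT INTO myTable (word) VALUES ($1)",
                               1, NULL);
    PQclear(prep);

    /* ...then execute as many times as needed */
    const char *values[1] = { "hello" };
    PGresult *res = PQexecPrepared(conn, "myquery", 1, values,
                                   NULL, NULL, 0);
    PQclear(res);
}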
On Thu, Sep 11, 2008 at 3:19 PM, Markova, Nina <[EMAIL PROTECTED]> wrote:
> Is there a psql or other command that I can use to list tables and their
> rows? All I found is this:
> http://archives.postgresql.org/pgsql-hackers/2004-09/msg00876.php
select tablename,reltuples from pg_class inner join
pg_tables on relname = tablename;
On Tue, Oct 7, 2008 at 7:30 PM, Saptarshi Guha <[EMAIL PROTECTED]> wrote:
> Q: How efficient is this?
If connectionHash is unique or near to it, and everything's indexed...
it could work for reasonable K values depending on your performance
requirements.
> Every time i create a new table should
On Thu, Oct 9, 2008 at 1:48 PM, Bill Thoen <[EMAIL PROTECTED]> wrote:
> No, this won't work. Here's an example of my tables:
> Table1
> 1, 12, A
> 2, 16, B
> 8, 6, A
> 19, 9, C
>
> Table2
> 1, 13, D
> 2, 16, B
> 8, 6, B
> 12, 5, A
select * from table1
union
select table2.* from table2 left join ta
On Thu, Oct 9, 2008 at 3:31 PM, Bill Thoen <[EMAIL PROTECTED]> wrote:
> Thanks, but that didn't work. That selected only the records from table1.
That's why I warned you about it being written in gmail. :)
select * from table1
union
select table2.* from table2 left join table1 on table2.a=table1.a
where table1.a is null;
After dropping an index to do some full-table updating, I'm running
into an out of memory issue recreating one of my indices. This is on
8.3 running on linux.
The table in question has about 300m rows. The index is on a single
integer column. There are approximately 4000 unique values among the
ro
On Mon, Oct 13, 2008 at 6:44 AM, Gregory Stark <[EMAIL PROTECTED]> wrote:
> How much memory the OS allows Postgres to allocate will depend on a lot of
> external factors. At a guess you had some other services or queries running at
> the same time the first time which reduced the available memory.
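One hedged knob to check alongside that: CREATE INDEX allocates up to maintenance_work_mem for its sort, so capping it below what the OS can actually grant avoids the failure (index name assumed):

set maintenance_work_mem = '256MB';
create index vals_datestamp_idx on vals (datestamp);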
On Tue, Oct 28, 2008 at 7:10 PM, Christophe <[EMAIL PROTECTED]> wrote:
>
> On Oct 28, 2008, at 4:03 PM, Steve Atkins wrote:
>>
>> Installing from source means I can avoid the fragility of macports or
>> fink, and know that I've built it in much the same way as the postgresql or
>> solaris installat
On Tue, Nov 4, 2008 at 2:12 PM, Webb Sprague <[EMAIL PROTECTED]> wrote:
> Can't do that. (Or I wouldn't have asked the question.) Need a WWW
> interface, period.
A WWW interface doesn't preclude the suggestion of simply relying on
permissions to maintain safety and providing what amounts to a q
On Thu, Nov 13, 2008 at 2:03 PM, <[EMAIL PROTECTED]> wrote:
> I have several tables that when I run VACUUM FULL on, they are under 200k,
> but after a day of records getting added they grow to 10 to 20 megabytes.
> They get new inserts and a small number of deletes and updates.
>
> seq_scan
On Sat, Nov 22, 2008 at 4:54 PM, Ciprian Dorin Craciun
<[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 11:51 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>> On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
>> <[EMAIL PROTECTED]> wrote:
>>>
>>>Hello all!
>> SNIP
>>>So I would conc
On Sat, Jan 3, 2009 at 5:59 PM, Jolles, Peter M (GE Infra, Energy)
wrote:
> For example, on these days, 4/7/02 2:00 AM imports to 4/7/02 3:00 AM.
> 4/6/03 2:15 AM imports as 4/6/03 3:15 AM, etc. All other dates and times
> do not give any errors. I have tried to extract the date and create a
> t
On Fri, Jan 16, 2009 at 3:27 PM, Jason Long
wrote:
> I just tried it by sending text only instead of text and html. We will see
> if it goes through this time.
> Other than that do you see anything weird about my email?
Still nothing. Do you have webspace you could place it on? If not, you
coul
On Wed, Feb 18, 2009 at 9:47 AM, Joey Morris wrote:
> I expected this program to print "connecting", but in fact I get no output
> whatsoever. If I comment out the PQconnectdb and PQfinish lines, I see
> "connecting" as expected. What could be going on here?
Try adding "fflush(stderr);" after your fprintf call.
On Mon, Mar 23, 2009 at 3:07 AM, Scott Marlowe wrote:
> Are you saying pg_quer_params is MORE effective than pg_escape_string
> at deflecting SQL injection attacks?
pg_query_params() will protect non-strings. For instance, read a
number in from user input and do something of the form " and
foo=$
On Tue, Mar 31, 2009 at 9:18 PM, Aleksey Tsalolikhin
wrote:
> Could not create directory '/home/postgres/.ssh'.
> Host key verification failed.
Have you tested "ssh node2" as the postgres user with SELinux enabled?
This looks like ssh failing to access the .ssh directory where it
keeps host keys
On Wed, Apr 8, 2009 at 12:24 PM, Jennifer Trey wrote:
> I think I might have misunderstood the effective cache size. Its measured in
> 8kB blocks. So the old number 449697 equals 3.5 GB, which is quite much.
> Should I lower this? I had plans to use 2.75GB max. Can I put 2.75GB there?
> Should I
On Mon, Apr 20, 2009 at 7:39 PM, jc_mich wrote:
>
> You've understood very well my problem, but also this query works as worse
> than everything I did before, it throws as many rows as rows are contained
> my tables clients and stores. I only want to find for every client what
> store is closer to
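The reply is missing above, but the usual Postgres idiom for "one nearest store per client" is DISTINCT ON; the names and distance formula here are guesses:

select distinct on (c.client_id)
       c.client_id, s.store_id,
       power(c.x - s.x, 2) + power(c.y - s.y, 2) as dist2
from clients c cross join stores s
order by c.client_id, dist2;
-- one output row per client (its nearest store), not clients x stores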
On Mon, May 18, 2009 at 3:13 PM, Joshua Berry wrote:
> Any hints/tips? Is our original solution okay, or is there something we can
> do to improve things?
It seems as if you don't really care about the results of the query-
just whether or not it returns any rows. In that case, why not
something
On Wed, May 20, 2009 at 4:41 PM, Sergey Samokhin wrote:
> Is there a way to do #1 (it's a bit cleaner and simpler to implement)?
You could wrap the whole thing in a transaction. If you don't get to
the commit, the whole transaction should roll back.
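A minimal shape of that (names hypothetical):

begin;
insert into jobs (id, payload) values (1, 'work');
update queue set claimed = true where id = 7;
commit;  -- if anything above errors, nothing is committed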
--
- David T. Wilson
david.t.wil...@gmail.com