On Sun, Aug 6, 2017 at 2:43 PM, Karsten Hilbert wrote:
>
> Yes. Been there done that.
>
> Karsten
>
Thanks Karsten, it worked!
Daniel
> I'm updating my database from 9.4 to 9.6 (Debian Jessie to Stretch). I
> think that it is a good opportunity to turn on data checksums.
>
> I don't have experience with cluster creation or moving a DB to a new cluster.
>
> I'll use pg_upgradecluster, but I don't see any option to turn on data
>
Hi,
I'm updating my database from 9.4 to 9.6 (Debian Jessie to Stretch). I
think that it is a good opportunity to turn on data checksums.
I don't have experience with cluster creation or moving a DB to a new cluster.
I'll use pg_upgradecluster, but I don't see any option to turn on data checksums.
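For the archive, a minimal sketch of one way to do this with the Debian
wrappers (cluster names are the defaults; pg_createcluster passes everything
after -- straight to initdb, and I'm assuming the wrappers' --cluster option
for pg_dumpall/psql):

    # drop the empty 9.6 cluster the package may have created
    pg_dropcluster 9.6 main
    # recreate it with checksums enabled
    pg_createcluster 9.6 main -- --data-checksums
    pg_ctlcluster 9.6 main start
    # load everything from the old cluster
    pg_dumpall --cluster 9.4/main | psql --cluster 9.6/main

A dump/reload is needed here because pg_upgrade cannot switch checksums on.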
through physical backup.
Check your contract with your vendor; file a suit if a penalty term exists.
Steven
Original message From: chris faber Date: 2017/7/12 22:27
(GMT+08:00) To: pgsql-general@postgresql.org Subject: [GENERAL] DATA Integrity &
Recovery
Postgres 9.2
We have a POSTGRES database
chris faber wrote:
> Postgres 9.2
>
> We have a POSTGRES database that we have been backing up via Incremental
> backups.
You are talking of a physical base backup and WAL archives, right?
> We had an incident where we had to recover from backup. Our software vendor
> has completed
> a restore
On Wed, 12 Jul 2017, chris faber wrote:
I would appreciate the community's help in the following:
1. Determine if data from the incremental backups can be restored or
recovered.
2. Determine if data can be recovered from individual files backed up from
main Postgres data directory.
Chris,
Postgres 9.2
We have a POSTGRES database that we have been backing up via Incremental
backups.
We had an incident where we had to recover from backup. Our software vendor
has completed a restore and we have lost 10 days of data. There is no
explanation as to why we have sustained this loss.
9.5 on both.
But I had missed the ENABLE ALWAYS trigger setting.
Once that is set, it runs.
Thank you for your help
Armand
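For anyone finding this in the archives, the setting in question, sketched
with hypothetical table/trigger names (by default, triggers do not fire for
rows applied by a replication session):

    -- on the destination database:
    ALTER TABLE t3 ENABLE ALWAYS TRIGGER t3_sync_trg;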
On May 9, 2017, at 8:26 AM, Adrian Klaver wrote:
> On 05/08/2017 08:31 PM, Armand Pirvu (home) wrote:
>> My bad
>> In db1 I have two tables t1 and t2 (or more)
>> db2 has one table t3 fo
On 05/08/2017 08:31 PM, Armand Pirvu (home) wrote:
My bad
In db1 I have two tables t1 and t2 (or more)
db2 has one table t3 for example which can get data aggregated from one
or more tables from the above set. I can do
updates/inserts/deletes in db1.t1 and/or db1.t2 which combined may m
My bad
In db1 I have two tables t1 and t2 (or more)
db2 has one table t3 for example which can get data aggregated from one or more
tables from the above set. I can do updates/inserts/deletes in db1.t1
and/or db1.t2 which combined may mean related data in db2.t3 would need to be
inserted/
On 05/08/2017 12:46 PM, Armand Pirvu (home) wrote:
Hi
Here is a scenario I am faced with, and I am hoping to find a
pointer/tip/help.
db1 is the OLTP system
db2 is the Reporting system
The data from db1 needs to get to db2, but the databases on those two have
tables with different lay
Hi
Here is a scenario I am faced with, and I am hoping to find a
pointer/tip/help.
db1 is the OLTP system
db2 is the Reporting system
The data from db1 needs to get to db2, but the databases on those two have
tables with different layout/structure, and hence the data will need to suffer som
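One common shape for this, as a rough sketch only (hypothetical columns key,
amount, total; cross-database delivery would additionally need dblink/FDW or
a replication tool): a trigger on the OLTP table recomputes the aggregated
row in the reporting table.

    CREATE OR REPLACE FUNCTION t1_to_t3() RETURNS trigger AS $$
    DECLARE
        k int;
    BEGIN
        -- NEW is not available on DELETE, so pick the right record
        IF TG_OP = 'DELETE' THEN
            k := OLD.key;
        ELSE
            k := NEW.key;
        END IF;
        -- rebuild the aggregate for the affected key
        DELETE FROM t3 WHERE key = k;
        INSERT INTO t3 (key, total)
        SELECT key, sum(amount) FROM t1 WHERE key = k GROUP BY key;
        RETURN NULL;  -- AFTER trigger: return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t1_sync
    AFTER INSERT OR UPDATE OR DELETE ON t1
    FOR EACH ROW EXECUTE PROCEDURE t1_to_t3();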
On 02/01/2017 01:35 PM, Greg Slawek wrote:
>
> Can anyone recommend a data modeling tool (preferably Mac OSX compatible)?
>
> I would like to be sure it can export version specific SQL code (ie 9.1 vs
> 9.4)
>
I'm very happy with pgmodeler, which I picked up about 6-8 weeks ago. It's an
open
Greg Slawek wrote on 01.02.2017 at 19:35:
Can anyone recommend a data modeling tool (preferably Mac OSX
compatible)?
I would like to be sure it can export version specific SQL code (ie
9.1 vs 9.4)
I used Toad Data Modeler years ago on Windows, which was pretty
good at sticking to the di
Can anyone recommend a data modeling tool (preferably Mac OSX compatible)?
I would like to be sure it can export version specific SQL code (ie 9.1 vs
9.4)
I used Toad Data Modeler years ago on Windows, which was pretty good
at sticking to the differences in each Postgres version. I've seen N
On 06/14/2016 10:20 PM, Rupesh Choudhary wrote:
Hi Adrian,
Please find answers inline:
What is the SQL you are running?
-We have a table with about 167 columns and we are trying to run batch
updates on the table. I am attaching part of the code for your reference.
How is that being run in the
Hi Adrian,
Please find answers inline:
What is the SQL you are running?
-We have a table with about 167 columns and we are trying to run batch
updates on the table. I am attaching part of the code for your reference.
How is that being run in the code?
Yes, it runs as part of the code.
D
On 06/14/2016 09:33 AM, Rupesh Choudhary wrote:
Hi Team,
We have been trying to insert data (batch inserts) using a Java app
which uses prepared statements with parameterized queries to insert
data. We are able to insert data with a small batch size, however when we
increase the batch size to a
Hi Team,
We have been trying to insert data (batch inserts) using a Java app which
uses prepared statements with parameterized queries to insert data. We are
able to insert data with a small batch size, however when we increase the
batch size to a higher value, data ingestion fails. We are using
Kevin,
It is understood, but it doesn't change things from my perspective. It would
in any case be a user with sole access, that nobody can elevate privileges to
and that only root can su to. The name, in that case, is irrelevant, but the
answer "postgres" gave me an idea of the protection level here.
Thanks,
Oleg
On Wed,
On Wed, Jan 13, 2016 at 3:54 PM, oleg yusim wrote:
> Answer "postgres" would suffice.
But the user would not always be "postgres". To be accurate, it is
the user which owns the files for the "cluster" (database instance)
and which runs the database service. If a machine contains
multiple clust
David,
Answer "postgres" would suffice. I have fairly decent idea of what happens
below that (you are right system utility - Memory Manager is what
marshaling data back and forth to RAM, abstracting absolute addresses from
application along the way, and once in RAM, security kernel of the system
p
On Wed, Jan 13, 2016 at 2:37 PM, oleg yusim wrote:
> OK, Kevin, David,
>
> Thank you very much for the explanation. Now who is the owner of this
> process? My understanding is the data is then located physically in RAM, in the
> memory stack assigned by the OS to this process. Now the question is who owns
> t
OK, Kevin, David,
Thank you very much for the explanation. Now who is the owner of this process?
My understanding is the data is then located physically in RAM, in the memory
stack assigned by the OS to this process. Now the question is who owns the
process?
Thanks,
Oleg
On Wed, Jan 13, 2016 at 3:29 PM, Dav
On Wed, Jan 13, 2016 at 2:19 PM, Kevin Grittner wrote:
> On Wed, Jan 13, 2016 at 2:57 PM, oleg yusim wrote:
>
> > Say I got a network packet. The packet was decrypted by OpenSSL. Where
> > is this
> > data, physically, at this moment?
>
> Process-local memory for the PostgreSQL backend process a
On Wed, Jan 13, 2016 at 2:57 PM, oleg yusim wrote:
> Say I got a network packet. The packet was decrypted by OpenSSL. Where is this
> data, physically, at this moment?
Process-local memory for the PostgreSQL backend process associated
with the database connection.
> Opposite situation: we are
Kevin,
Thank you for your reply. I understand what you are saying, but I guess I
need a somewhat deeper understanding for my assessment. Let's dive in a bit:
Say I got a network packet. The packet was decrypted by OpenSSL. Where
is this data, physically, at this moment?
Opposite situation: we are
On Tue, Jan 12, 2016 at 10:00 PM, oleg yusim wrote:
> Important: let's assume data at rest is encrypted using EFS and data in
> transit is encrypted using ciphers, provided by OpenSSL.
>
> So, with that in mind, please, help me to understand movement and location
> of the data between the moment
Greetings,
I have a matching couple of security requirements, concerning preserving
data confidentiality and integrity in a PostgreSQL DB during packaging for
transmission / unpacking from transmission.
Important: let's assume data at rest is encrypted using EFS and data in
transit is encrypted u
We appear to have had some corruption on a customer's postgres cluster.
They are on 9.0.17 32bit
Windows Server 2003 - Service pack 2
Intel Xeon 2.66GHZ
4GB Memory
The RAID is set up but doesn't look good - it is just now showing a status of Degraded!!
Hey Shaun,
Thanks for the quick reply.
We have not changed the default_statistics_target, so it should remain at
its default value of 100.
I would like to share our observation on this, if you can infer anything
from it: it started using query plan 1 after our scheduled vacuum run,
and it continued to
Krit,
It sounds like you might be hitting edge-case statistics problems on recent
data. We've had issues with this before, ourselves. I'd personally recommend
increasing your default_statistics_target to 400 for your entire database in
general. But it's possible that won't help this. Recent dat
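The suggestion above, sketched with a hypothetical database name; the new
target only applies to sessions started after the change, and statistics are
rebuilt by the next analyze:

    ALTER DATABASE mydb SET default_statistics_target = 400;
    -- reconnect, then:
    ANALYZE;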
Hey Shaun,
Hope you are keeping well. Could you please help me with this uncommon
behaviour of Postgres? When executing query (1) it takes around 62
seconds. However, when disabling the index scan, the same query (2) executes
in around 2 seconds. Is there any reason why the query planner prefers
Hi Krit,
It looks like your actual problem is here:
> Index Scan using t1_orderid_creationtime_idx on t1
> (cost=0.43..1181104.36 rows=9879754 width=158)
> (actual time=0.021..60830.724 rows=2416614 loops=1
This index scan estimates 9.8M rows, and had to touch 2.4M. The issue is that
your LI
A full dump and restore would definitely help. I tend not to suggest
that often because I work with very large databases that are usually
extremely cumbersome to dump and restore.
But yeah, if you can get a successful pg_dump from your database, a
restore should obviously clean up all of you
Hi Shaun,
We reindexed all the primary and unique keys of all the tables, but we
did not reindex the tables. Do you think we should do that also?
Yes, you need to reindex. Part of the problem with this kind of table
corruption is that PostgreSQL has applied data and index page
modification
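A sketch of the reindex step itself, with hypothetical names:

    REINDEX TABLE my_table;    -- rebuilds every index on one table
    REINDEX DATABASE mydb;     -- run as superuser, connected to mydb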
Thanks Shaun.
We reindexed all the primary and unique keys of all the tables, but we
did not reindex the tables. Do you think we should do that also?
Also, do you think we should do a clean dump/restore to eliminate all
data inconsistencies?
One more query :
We managed to get the old server
On 06/26/2014 10:34 AM, Karthik Iyer wrote:
Any inputs here? Do you think a pg_dump and restore would help more?
A full dump and restore would definitely help. I tend not to suggest
that often because I work with very large databases that are usually
extremely cumbersome to dump and restore.
On 06/26/2014 09:44 AM, Karthik Iyer wrote:
We reindexed all the primary and unique keys of all the tables, but we
did not reindex the tables. Do you think we should do that also?
Yes, you need to reindex. Part of the problem with this kind of table
corruption is that PostgreSQL has applied d
On 06/25/2014 06:29 AM, Karthik Iyer wrote:
[2] We also have a daemon process which copies the latest partial WAL
log file (which postgres is currently writing to, under pg_xlog/) every
3 secs to a different location.
No. No, no, no, no no. No.
Also, no. Partial WAL files are not valid for r
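For reference, the supported way to ship WAL is archiving whole segments,
essentially the example from the docs (archive path hypothetical), not
copying partial files out of pg_xlog:

    # postgresql.conf on the master
    wal_level = hot_standby
    archive_mode = on
    archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'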
Hello,
We are facing some DB inconsistency issues while promoting the
slave to master.
[1] We have master-slave with Hot Standby Setup (log shipping).
[2] We also have a daemon process which copies the latest partial WAL
log file (which postgres is currently writing to, under pg_xlog
2014-05-24 8:54 GMT-03:00 Jack Douglas :
>
> If I’m using the Data Checksum feature (new to 9.3:
> https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.3#Data_Checksums),
> and in the event of a checksum failure on a replication master, will that
> corrupt data replicate to slaves or will
I posted this question on dba.se (http://dba.stackexchange.com/q/65821/1396)
and was advised to try asking here.
If I'm using the Data Checksum feature (new to 9.3:
https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.3#Data_Checksums),
and in the event of a checksum failure on a repl
PostgreSQL 9.2 streaming replication, which is very similar to Oracle
Data Guard.
On Fri, Dec 13, 2013 at 1:57 PM, Roy Anderson wrote:
> Good day. I am inquiring as to whether there is a free solution
> available that approximates, in Postgres, what Data Guard does for
> Oracle DB. Can any
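A minimal sketch of that setup, assuming 9.2-era parameters and hypothetical
host/user names (pg_hba.conf on the master also needs a replication entry):

    # master, postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3
    # standby, postgresql.conf
    hot_standby = on
    # standby, recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com user=replicator'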
Good day. I am inquiring as to whether there is a free solution
available that approximates, in Postgres, what Data Guard does for
Oracle DB. Can anyone advise?
Thank you,
Roy
I need to do clustering, association rules and classification... I tried
Weka and RapidMiner, but I need a "true" relational data mining tool.
On Tue, Jun 25, 2013 at 10:46 AM, kathyn wrote:
> Hello,
>
> I am researching data mining and analysis tools for Postgres (or
> even MySQL).
>
> Do you know of any tools for relational data mining?
>
What exactly are your goals here?
If there is a specific goal you are trying to achi
Hello,
I am researching data mining and analysis tools for Postgres (or
even MySQL).
Do you know of any tools for relational data mining?
Regards.
Hi Tom,
Thank you for your reply.
I checked and found some weird behavior: we have a 32K block size.
The incorrect blocks started at block 41 and span 115 blocks (41 to
155), at an exact 32K block boundary. Then at the 156th block, we have both
tuples with the correct number of attributes as well as
PG User writes:
> ... Later we found that the invalid header is actually a valid header for
> another table's data file. So somehow data of one table got inserted into
> another, and both tables have a different # of attributes. Can this be
> possible? Can any hardware issue cause this?
There are (at least)
Hi All,
We are facing a strange problem with data file corruption.
We have many Postgres databases. At some point, one simple query on one
database started crashing the backend.
The query is
select count(*), col1 from tab1 group by col1;
After using pg_filedump (http://pgfoundry.org/projects
On Mar 30, 2013, at 3:38, Modulok wrote:
> List,
>
> I have a data modeling problem. That much, I know. The question is how do I
> model this? (Below.)
>
> I'm making a database which will store pseudo-genetic data. It's a basic
> many-to-many setup::
>
>create table person(
>name
Hi,
If I have understood well,
you have two options:
1. Person_gene (person_id pk, gene_id int)
Genes (gene_id serial pk, gene_type int)
Then for each gene_type a separate table with gene_id int, which is the pk
and references genes... and other columns that describe that type...
Person_
> Everything should have an id, and combination tables like person-gene should
> have the person id + the gene id.
> You might look at this tool to make your life easier:
> https://github.com/pgmodeler/pgmodeler
>
> First, read up on this:
> http://en.wikipedia.org/wiki/Entity%E2%80%93relationship_
, March 29, 2013 11:45 PM
To: 'Modulok'; pgsql-general@postgresql.org
Subject: RE: [GENERAL] data modeling genes and alleles... help!
Everything should have an id, and combination tables like person-gene should
have the person id + the gene id.
You might look at this tool to make your l
, it will be obvious how to use the tool.
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Modulok
Sent: Friday, March 29, 2013 7:39 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] data modeling genes and alleles
List,
I have a data modeling problem. That much, I know. The question is how do I
model this? (Below.)
I'm making a database which will store pseudo-genetic data. It's a basic
many-to-many setup::
create table person(
name varchar(32) primary key
);
create table gene(
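The junction-table shape under discussion, sketched with hypothetical
columns (the per-type attribute tables from the reply above would hang off
gene):

    create table person(
        name varchar(32) primary key
    );
    create table gene(
        gene_id   serial primary key,
        gene_type varchar(32) not null   -- hypothetical attribute
    );
    create table person_gene(            -- the many-to-many link
        name    varchar(32) references person,
        gene_id int references gene,
        primary key (name, gene_id)
    );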
Greetings,
I'm running postgres-9.2.2 in a Linux-x86_64 cluster with 1 master and
several hot standby servers. Since upgrading to 9.2.2 from 9.1.x a
few months ago, I switched from generating a base backup on the
master to generating it on a dedicated slave/standby (to reduce the
load on the mast
On Mon, Dec 17, 2012 at 2:38 PM, Kirk Wythers wrote:
>
> I seem to be dealing with a data type issue when I try and import data
> into a new and empty database.
>
> Error Message: ERROR: date/time field value out of range:
> "1332471600:00:00"
> LINE 1: ...tc_avg25") values ('2012-03-22 21:00:00_
I seem to be dealing with a data type issue when I try and import data into a
new and empty database.
Error Message: ERROR: date/time field value out of range: "1332471600:00:00"
LINE 1: ...tc_avg25") values ('2012-03-22 21:00:00_B4WARM_A','133247160…
Here are the first two rows in the files I
On 12/17/2012 05:46 PM, Adrian Klaver wrote:
On 12/17/2012 05:30 PM, Kirk Wythers wrote:
Thanks.
Is there a way to use the float4 data type for a variable when empty
records contain "NA" instead of nothing? As you can see below, the
variable "A_TC_AVG1" has the first record filled with "NA" sig
On 12/17/2012 05:30 PM, Kirk Wythers wrote:
Thanks.
Is there a way to use the float4 data type for a variable when empty
records contain "NA" instead of nothing? As you can see below, the
variable "A_TC_AVG1" has the first record filled with "NA", signifying
that the record is empty. Records that
Thanks.
Is there a way to use the float4 data type for a variable when empty records
contain "NA" instead of nothing? As you can see below, the variable "A_TC_AVG1"
has the first record filled with "NA", signifying that the record is empty.
Records that contain values will have decimal values l
On 12/17/2012 01:53 PM, Kirk Wythers wrote:
I seem to be dealing with a data type issue when I try and import data
into a new and empty database.
Error Message: ERROR: date/time field value out of range:
"1332471600:00:00"
LINE 1: ...tc_avg25") values ('2012-03-22 21:00:00_B4WARM_A','133247160
I seem to be dealing with a data type issue when I try and import data into a
new and empty database.
Error Message: ERROR: date/time field value out of range: "1332471600:00:00"
LINE 1: ...tc_avg25") values ('2012-03-22 21:00:00_B4WARM_A','133247160…
Here are the first two rows in the files
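A guess worth checking: 1332471600 looks like Unix epoch seconds landing in
a timestamp column, i.e. a column-count or column-order mismatch between the
file and the table. If the value really is epoch seconds it converts cleanly:

    SELECT to_timestamp(1332471600);
    -- 2012-03-23 03:00:00+00 (displayed in the session's timezone)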
Hi,
I'm using the Dell DVD Store for training purposes, and I met some problems
with it!
Once they are corrected it works well (except the load test config in my
environment; I encountered a problem with an RSA fingerprint!)
The following slideshow tracks down the problems:
http://jkshah.blogspot.fr/2012/
>Have a look at Benerator. It can create quite reasonable test data (e.g. valid
>addresses, "real"-looking names and so on).
>
>It has a bit of a steep learning curve, but I'm quite happy with the results
>http://databene.org/databene-benerator
>
>
>Another option might be the Dell DVD Store Loadtest:
>h
Jayadevan M, 25.10.2012 05:15:
There are tools which generate data, but most of them seem to
generate junk data.
Have a look at Benerator. It can create quite reasonable test data (e.g. valid addresses,
"real"-looking names and so on).
It has a bit of a steep learning curve, but I'm quite happy wit
Hello all,
Does anyone know of reasonably-sized data dumps (csv or excel or xml..) that
can be used for learning/teaching about performance tuning? Say, a set of 6-7
tables, maybe two of them with a few million records, etc.? Total data volume
would be in the few-GB range. There are tools which ge
Check this post on depesz.com:
http://www.depesz.com/2012/04/04/lets-talk-dirty/
2012/9/18 Craig Ringer
> Also, are there any functions to read raw tuple bytes to `RECORD's? I
> couldn't find any in `pageinspect', `pgstattuple', `adminpack', etc. Am I
> right in guessing that they're pretty much
Hi all
Someone on SO has UPDATEd away some data they cared about. I've written
a bit on what they might want to do, but really don't know enough, so I
was hoping for some aid/ideas. In particular, is the advice to use "
pg_ctl stop -m immediate" (to hopefully avoid a checkpoint) appropriate
a
On 07/20/2012 05:54 PM, Roshan Saxena wrote:
Yes, it connects, and insertion, updating and deletion are taking
place normally, but the table-drop problem occurs when I make any
changes to my models in Play
Please reply to the mailing list (reply all) not to me directly.
If Play framework is
On 07/19/2012 03:19 AM, rajonr...@gmail.com wrote:
I am creating an application using Play. In the models I had created a table,
and everything is working fine, except when I make changes to any of the
object's fields all the previous data in the database gets dropped. How do I
recover that?
On 07/18/2012 12:19 PM, rajonr...@gmail.com wrote:
I am creating an application using Play. In the models I had created a table,
and everything is working fine, except when I make changes to any of the
object's fields all the previous data in the database gets dropped. How do I
recover that?
T
I am creating an application using Play. In the models I had created a table,
and everything is working fine, except when I make changes to any of the
object's fields all the previous data in the database gets dropped. How do I
recover that?
AM
To: PostgreSQL
Subject: Re: [GENERAL] Data Type for Money
On Fri, Dec 30, 2011 at 12:46 PM, Adrian Klaver wrote:
> My guess is it is listed as numeric which is equivalent to decimal:
>
> http://www.postgresql.org/docs/9.1/interactive/datatype-numeric.html
Thanks. For some reason I just
Carlos Mennens writes:
> Thanks. For some reason I just can't see or understand the difference
> between 'decimal' & 'numeric'. Why have two data types for the same
> values? Am I missing something?
There isn't any difference, in Postgres. There are two type names
because the SQL standard requir
On Fri, Dec 30, 2011 at 12:46 PM, Adrian Klaver wrote:
> My guess is it is listed as numeric which is equivalent to decimal:
>
> http://www.postgresql.org/docs/9.1/interactive/datatype-numeric.html
Thanks. For some reason I just can't see or understand the difference
between 'decimal' & 'numeric'
On Friday, December 30, 2011 9:41:18 am Carlos Mennens wrote:
> I'm trying to understand what is the recommended data type for $ in
> PostgreSQL. I've done some research and from what I've gathered, there
> are a few options:
>
> 1. decimal
> 2. money
>
> I've read the 'money' data type is non-st
I'm trying to understand what is the recommended data type for $ in
PostgreSQL. I've done some research and from what I've gathered, there
are a few options:
1. decimal
2. money
I've read that the 'money' data type is non-standard and that I should avoid
using it. I see it used a bunch in Microsoft SQL Server
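A sketch of the numeric/decimal route recommended in the replies, with a
hypothetical table; numeric is exact, so cents don't suffer binary-float
rounding:

    CREATE TABLE invoice (
        id     serial PRIMARY KEY,
        amount numeric(12,2) NOT NULL
    );
    INSERT INTO invoice (amount) VALUES (19.99);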
On 22/06/11 16:38, Szymon Guz wrote:
> Hi,
> I've got a lot of databases; unfortunately, applications need to get a
> lot of data (a lot means too much).
> The whole request takes about 3s, while pure explain analyze only 250ms.
> The amount of downloaded data cannot be changed.
What client ar
http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#comp
That applies in general to SSL apps.
In cryptography it is always recommended, and sometimes even mandatory,
to compress data before encryption. This reduces the risk of finding
patterns, etc.
And SSL includes that option as well.
But that's
2011/6/22 Grzegorz Jaśkiewicz :
> The answer is: SSL. SSL will compress things before encrypting
> (depends on the setup, obviously).
> As far as I know, PostgreSQL itself doesn't compress any data over the wire.
Could you elaborate a bit on using SSL for that purpose? How do you
set it up for compr
The answer is: SSL. SSL will compress things before encrypting
(depends on the setup, obviously).
As far as I know, PostgreSQL itself doesn't compress any data over the wire.
--
GJ
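One concrete way to get compressed traffic without touching the server,
sketched with hypothetical hosts (this is an ssh tunnel with compression
enabled, not SSL itself):

    ssh -C -N -L 5433:localhost:5432 user@dbhost.example.com
    psql -h localhost -p 5433 mydb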
Hi,
I've got a lot of databases; unfortunately, applications need to get a lot of
data (a lot means too much).
The whole request takes about 3s, while a pure explain analyze takes only 250ms.
The amount of downloaded data cannot be changed.
I'm just wondering if I can do anything about that.
Maybe there
On Thu, Mar 3, 2011 at 6:41 AM, Nick Raj wrote:
> Which data type should be used in the above function (in place of ?)
> that can collect more than one row (20,000)?
>
Maybe the id that those 20M records have in common?
hth,
WBL
On Thu, Mar 3, 2011 at 6:41 AM, Nick Raj wrote:
> Hi,
>
Hi,
I am writing some functions in Postgres PL/pgSQL.
My function is of the form St_ABC((select obj_geom from XYZ),(select
boundary_geom from boundary))
I have a table XYZ with 20,000 tuples, and in boundary I have only one
geometry.
In Postgres, ST_intersects(obj_geom, boundary_geom) checks each obj_geom
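If the goal is to test all 20,000 geometries against the single boundary in
one pass, a plain join usually beats calling the function per row; a sketch
assuming PostGIS and hypothetical column names:

    SELECT x.id
    FROM xyz AS x
    JOIN boundary AS b
      ON ST_Intersects(x.obj_geom, b.boundary_geom);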
On Thu, Feb 24, 2011 at 1:10 AM, Michael Glaesemann wrote:
>
> On Feb 23, 2011, at 13:49, John R Pierce wrote:
>
> > On 02/23/11 4:44 AM, Stephane Bortzmeyer wrote:
>>> 3. Start-End IP format: 1.2.3.0-1.2.3.255
> >> You don't even need to program the conversion, it is already done:
> >>
On Thu, Feb 24, 2011 at 3:03 AM, Tom Lane wrote:
> John R Pierce writes:
> > On 02/23/11 4:44 AM, Stephane Bortzmeyer wrote:
> > 3. Start-End IP format: 1.2.3.0-1.2.3.255
> >> You don't even need to program the conversion, it is already done:
> >>
> >> % netmask 1.2.3.0:1.2.3.255
> >> 1.
On 02/23/11 1:33 PM, Tom Lane wrote:
The question is does he actually have a use-case for address ranges that
don't correspond to legal CIDR ranges, but do nonetheless have an
identifiable lower boundary, upper boundary, and no holes? And if so,
what is it? The whole thing looked to me like som
John R Pierce writes:
> On 02/23/11 4:44 AM, Stephane Bortzmeyer wrote:
> 3. Start-End IP format: 1.2.3.0-1.2.3.255
>> You don't even need to program the conversion, it is already done:
>>
>> % netmask 1.2.3.0:1.2.3.255
>> 1.2.3.0/24
> yes, but what about 10.1.2.57-10.1.2.123? Presum
On Feb 23, 2011, at 13:49, John R Pierce wrote:
> On 02/23/11 4:44 AM, Stephane Bortzmeyer wrote:
>>> 3. Start-End IP format: 1.2.3.0-1.2.3.255
>> You don't even need to program the conversion, it is already done:
>>
>> % netmask 1.2.3.0:1.2.3.255
>> 1.2.3.0/24
>
> yes, but what
On 02/23/11 4:44 AM, Stephane Bortzmeyer wrote:
3. Start-End IP format: 1.2.3.0-1.2.3.255
You don't even need to program the conversion, it is already done:
% netmask 1.2.3.0:1.2.3.255
1.2.3.0/24
yes, but what about 10.1.2.57-10.1.2.123? Presumably valid in his
range sys
On Wed, Feb 23, 2011 at 05:39:26PM +0530,
Gaini Rajeshwar wrote
a message of 52 lines which said:
> I wanted to store IP addresses in a table. I wanted to support the following 3
> types of IP addresses:
>
> 1. Wildcard format: 1.2.3.*
> 2. CIDR format: 1.2.
On Wed, Feb 23, 2011 at 02:30:18PM +0200,
Sim Zacks wrote
a message of 97 lines which said:
> a regular varchar or text field.
Very bad idea since they don't support canonicalization (2001:db8::1
== 2001:db8:0:0:0:0:0:1) or masking (set_masklen(address, 20)).
a regular varchar or text field.
On 02/23/2011 02:09 PM, Gaini Rajeshwar wrote:
Hi All,
I wanted to store IP addresses in a table. I wanted to support the
following 3 types of IP addresses:
1. Wildcard format: 1.2.3.*
2. CIDR format: 1.2.3/24 OR 1.2.3.4/2
Hi All,
I wanted to store IP addresses in a table. I wanted to support the following 3
types of IP addresses:
1. Wildcard format: 1.2.3.*
2. CIDR format: 1.2.3/24 OR 1.2.3.4/255.255.255.0
3. Start-End IP format: 1.2.3.0-1.2.3.255
I had a look at CIDR
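The built-in inet/cidr types cover the CIDR case directly, as sketched
below; the wildcard and start-end formats need either conversion to one or
more CIDR blocks or a two-column lower/upper representation:

    CREATE TABLE allowed (net cidr);
    INSERT INTO allowed VALUES ('1.2.3.0/24');
    SELECT * FROM allowed WHERE inet '1.2.3.17' <<= net;  -- containment test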
On 2010-12-28, Ozz Nixon wrote:
> Is it possible (and how) to implement a data path on another partition
> (linux) for an existing system? And then, if I do not gain anything, merge
> it into the production /data path?
tablespaces
> Scenario of what I want to achieve (/mnt/data is already runni
On Tue, Dec 28, 2010 at 7:56 AM, Ozz Nixon wrote:
> Is it possible (and how) to implement a data path on another partition
> (linux) for an existing system? And then, if I do not gain anything, merge
> it into the production /data path?
>
> Scenario of what I want to achieve (/mnt/data is already
Ozz Nixon wrote:
> Is it possible (and how) to implement a data path on another partition
> (linux) for an existing system? And then, if I do not gain anything,
> merge it into the production /data path?
I think you should read our documentation about tablespaces:
http://www.postgresql.org/docs/current/
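A sketch with hypothetical paths and names (the directory must exist and be
owned by the postgres user):

    CREATE TABLESPACE datavol LOCATION '/mnt/data/pg_tblspc';
    CREATE TABLE big_t (id int) TABLESPACE datavol;  -- new object on the new disk
    ALTER TABLE existing_t SET TABLESPACE datavol;   -- move an existing one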