Re: i am getting issue as FATAL: password authentication failed for user 'postgres' tried
On Wed, Jun 17, 2020 at 9:14 PM prachi surangalikar wrote:
> hello team,
> i have tried every thing but still i could not find the solution to this problem.
> i made changes in the pg_hba.conf file also, please help me to solve this problem.

As a noob to programming and under-the-hood stuff I have found myself with this issue a few times. My method to 'fix' this problem is likely not sanctioned, but as I'm working on my system on my stuff - - - oh well, it works even if it's NOT elegant!

When my password for postgres doesn't work:

1. I try again a couple of times.
2. If it's still not working, I go back into the pg_hba.conf file and comment out the 'new' part where postgresql has moved from no password to password.
3. Next I stop and restart postgresql.
4. Now log into postgresql and give postgres a password. (I would suggest writing it down - - - - you need to remember it!! One technique I use is to have a password that I can refer to in some way without writing out the password itself. Say you base it on a specific bit of text: you write down the directions to that text, but not the actual password - - - the directions should be enough of a tip that you can remember the actual password!)
5. Now modify your pg_hba.conf file again to require a password.

I will admit to having occasionally needed to do this a few times in fairly quick succession. Using this method (I don't remove the lines from the pg_hba.conf file, I just comment them out) has reduced my stress levels at least a little. This may not be the most secure way of doing things, but if your cracker already has access to your machine to review conf files - - - - I'd say you have some large issues happening already.

Regards
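For what it's worth, steps 2 through 5 above can be sketched as shell commands. This is only a sketch assuming a Debian-style layout; the config path, version number, and service name are assumptions you would adjust for your own install.

```shell
# Assumed location -- yours will differ by distro and version.
PG_HBA=/etc/postgresql/12/main/pg_hba.conf

# Step 2: temporarily switch local auth for postgres to 'trust'
# (keep a .bak copy instead of deleting anything, as described above)
sudo sed -i.bak 's/^\(local\s\+all\s\+postgres\s\+\)\(md5\|scram-sha-256\)/\1trust/' "$PG_HBA"

# Step 3: restart so the change takes effect
sudo systemctl restart postgresql

# Step 4: log in (no password needed now) and set a new one
psql -U postgres -c "ALTER USER postgres PASSWORD 'write-this-one-down';"

# Step 5: put the original file back and restart again
sudo mv "$PG_HBA.bak" "$PG_HBA"
sudo systemctl restart postgresql
```

The sed pattern only covers the common md5/scram cases; commenting the line out by hand, as the post describes, works just as well.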
Re: Application Level Encryption
On Sun, Jul 5, 2020 at 1:22 AM Zahir Lalani wrote:
> Hello
>
> Does anyone have any serious experience of implementing app level encryption with strong key management?
>
> If so would like to arrange an offline chat please

Others might be interested as well. It might be useful to have specific questions ready should a knowledgeable individual appear.

Regards
Re: SSL between Primary and Seconday PostgreSQL DBs
On Thu, Sep 3, 2020 at 5:54 AM Susan Joseph wrote:
> OK, I understand I was just hoping someone could confirm that my settings are correct.
> I didn't come across an error so everything seems to be working I just can't verify that SSL is working.
> Are there any commands you can run to verify that SSL is up and operational?
> Testing from a client to the database doesn't prove that database to database is working.

I'm most definitely NOT any kind of PostgreSQL or computer security expert. I did a search using 'linux testing SSL' and found some possible options that you may wish to consider. These did not seem to be specific to database access testing but more to website testing, though I believe that kind of testing would give you the information you are looking for.

Regards
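For the specific primary-to-standby case, the server's own statistics views can answer this directly - - - a sketch only, run as a superuser on each side:

```shell
# On the PRIMARY: is the walreceiver connection from the standby using SSL?
# pg_stat_ssl has one row per backend; join it to the replication backends.
psql -U postgres -c "
  SELECT r.client_addr, s.ssl, s.version, s.cipher
  FROM pg_stat_replication r
  JOIN pg_stat_ssl s USING (pid);"

# On the STANDBY: the walreceiver reports its connection string,
# which includes the sslmode actually in use.
psql -U postgres -c "SELECT conninfo FROM pg_stat_wal_receiver;"
```

If the `ssl` column on the primary shows `t` for the replication backend, the database-to-database link is encrypted, regardless of what client-to-database tests show.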
Re: Accounting for between table correlation
On Fri, Jan 15, 2021 at 9:29 AM Atul Kumar wrote:
> Hi Alexander,
>
> As per Ron, you are not supposed to ask your questions here.
>
> As According to him, we should keep on doing research on internet rather than asking for support directly even you have done enough research and until unless "Ron" won't be satisfied you have to do keep on researching.

Mr Atul

With respect - - - - when you asked your question I thought you had a valid question but really hadn't done any research for an answer. The list is here to help you help yourself. I have found myself answering my own questions sometimes, but that also means I now have a better idea of what's going on. Just demanding answers with all the steps - - - - well - - - you are hindering your own learning. I would suggest that instead of barging into others' threads complaining that you didn't get the answer you wanted, you try actually working on your own problem.

(I am not a senior listee here - - - - just not happy with someone who is grumbly AND doesn't want to help themselves!)

(To the admins - - - - if I have overstepped - - - please advise!)

Regards
Re: pgBackRest for a 50 TB database
On Sun, Aug 27, 2023 at 10:57 AM Abhishek Bhola wrote:
> Hi
>
> I am trying to use pgBackRest for all my Postgres servers. I have tested it on a sample database and it works fine. But my concern is for some of the bigger DB clusters, the largest one being 50TB and growing by about 200-300GB a day.

Hopefully you are able to say something, but what kind of work is being done that generates 200 to 300 GB per day? Genomic research is one possibility I can think of.

(Sorry - - - just a curious George)

Regards
Re: Regarding use case of epoch to generate nanoseconds precision
On Wed, May 22, 2024 at 4:21 AM Durgamahesh Manne wrote:
> Hi
>
> Postgres supports only upto microseconds (6 decimal precision).
> How do we generate timestamp with nanoseconds as rds postgres not supported timestamp9 extension ?
> Is there a way to generate timestamp with nanoseconds precision on pg_partman with epoch without typecasting or with typecasting ?
>
> p_epoch => (to_timestamp(control column))
> Here what is the control column?
> How to run it with the create_parent function of partman?
>
> Here as per the pg_partman doc
> p_epoch - tells pg_partman that the control column is an integer type, but actually represents an epoch time value. Valid values for this option are: 'seconds', 'milliseconds', 'nanoseconds', and 'none'. The default is 'none'. All table names will be time-based. In addition to a normal index on the control column, be sure you create a functional, time-based index on the control column (to_timestamp(control column)) as well so this works efficiently.

Not a postgresql expert, but - - - I'm wondering how you actually plan to implement this nanosecond timestamp. You will be working in an area where you need some extreme corner-case equipment for every item in the system. I'm not saying that measurements in this area can't be done, rather that you will have internal ambiguities in your accuracy (network timing, for one, at the very least).

Good luck
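On the mechanics side, the usual workaround the quoted pg_partman doc is hinting at is to store the epoch nanoseconds as a bigint and build the functional, time-based index yourself. A sketch only - - - the table and column names here are made up for illustration:

```shell
psql -c "
  -- 'events' and 'ts_ns' are hypothetical names; ts_ns is the
  -- 'control column' the doc refers to.
  CREATE TABLE events (
      id    bigserial PRIMARY KEY,
      ts_ns bigint NOT NULL   -- nanoseconds since the Unix epoch
  );
  -- Functional, time-based index. Integer division down to seconds is
  -- enough for time-range partition pruning; the full nanosecond value
  -- stays intact in the column itself.
  CREATE INDEX events_ts_idx
      ON events (to_timestamp(ts_ns / 1000000000));"
```

Note that `to_timestamp(double precision)` is immutable, so it is usable in an index expression; the nanosecond precision is only lost in the index key, not in the stored data.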
2FA - - - was Re: Password complexity/history - credcheck?
On Sun, Jun 23, 2024 at 10:10 AM Greg Sabino Mullane wrote:
> On Sun, Jun 23, 2024 at 5:30 AM Martin Goodson wrote:
>> I believe that our security team is getting most of this from our auditors, who seem convinced that minimal complexity, password history etc are the way to go despite the fact that, as you say, server-side password checks can't really be implemented when the database receives a hash rather than a clear text password and password minimal complexity etc is not perhaps considered the gold standard it once was.
>>
>> In fact, I think they see a hashed password as a disadvantage.
>
> Wow, full stop right there. This is a hill to die on.
>
> Push back and get some competent auditors. This should not be a DBAs problem. Your best bet is to use Kerberos, and throw the password requirements out of the database realm entirely.
>
> Also, the discussion should be about 2FA, not password history/complexity.

Hmmm - - - - 2FA - - - - what I've seen of it so far is that authentication is most often done using totally insecure tools (emailing some numbers, or using SMS). Now if you were espousing the use of security dongles and such I would agree - - - - otherwise you are promoting the veneering of insecurity on insecurity with the hope that this helps. IMO having excellent passwords far trumps even 2FA - - - - 2FA is useful when simple or quite easily broken passwords are required. Now when you add the lack of SMS possibilities (due to lack of signal), 2FA is an unusually potent PITA, because of course SMS 'always' works (except when it doesn't).

(Can you tell that I've been bitten in the posterior repeatedly with this garbage?)

Regards
Re: How to validate restore of backup?
On Thu, Aug 22, 2024 at 6:24 AM Ron Johnson wrote:
> That's great on small databases. Not so practical when they're big.

So - - - - what is the recommended procedure for 'large' databases? (It might be useful to have a definition of what a large database is as well.)

Regards
Re: How to validate restore of backup?
On Thu, Aug 22, 2024 at 8:03 AM Ron Johnson wrote:
> On Thu, Aug 22, 2024 at 8:49 AM o1bigtenor wrote:
>> On Thu, Aug 22, 2024 at 6:24 AM Ron Johnson wrote:
>>> That's great on small databases. Not so practical when they're big.
>>
>> So - - - - what is the recommended procedure for 'large' databases?
>> (Might be useful to have a definition for what a large database is as well.)
>
> "Large" is when it takes too long to run *TWO* text mode pg_dump commands *in addition to* the pg_dump and pg_restore.

Hm - - - - I'd say that's about as neat a non-answer as I've ever seen. Can you try again? (You also skipped the first question - - - maybe you could try that one too - - - what is the recommended procedure for 'large' databases?)

Regards
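For reference, one approach that often gets recommended for large clusters is a physical restore into a scratch instance followed by cheap integrity checks, rather than a full logical dump/compare. A sketch only - - - the stanza name, paths, and database name are all assumptions:

```shell
# Restore the latest backup into a throwaway data directory
# ('main' stanza and /restore/test path are assumed).
pgbackrest --stanza=main --pg1-path=/restore/test restore
pg_ctl -D /restore/test -o "-p 5544" start   # spare port, also assumed

# amcheck walks btree index structure without dumping any data,
# which is far cheaper than a logical dump on a multi-TB database.
psql -p 5544 -d mydb -c "CREATE EXTENSION IF NOT EXISTS amcheck;"
psql -p 5544 -d mydb -c "
  SELECT c.relname, bt_index_check(c.oid)
  FROM pg_class c JOIN pg_am am ON am.oid = c.relam
  WHERE am.amname = 'btree' AND c.relkind = 'i';"
```

This verifies that the backup restores and that the restored indexes are structurally sound; it is not a byte-for-byte proof, but it scales to sizes where a double pg_dump does not.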
Re: PostgreSQL reference coffee mug
On Sat, Aug 7, 2021 at 2:48 PM Jehan-Guillaume de Rorthais wrote:
> On Sat, 7 Aug 2021 20:44:41 +0200 Matthias Apitz wrote:
>> El día sábado, agosto 07, 2021 a las 08:06:14p. m. +0200, Karsten Hilbert escribió:
>>> Am Fri, Aug 06, 2021 at 08:09:03PM +0200 schrieb Matthias Apitz:
>>>> The prototype is ready.
>>>
>>> Nice. Now the elephant needs to fade into the background.
>>
>> It is already in the background (done with libreoffice calc). See the attached PDF. Only the black lines of the Elephant logo are a problem. Don't know how to solve this.
>
> Agree with Matthias here, the elephant needs to fade, not just be in the background. Reading black on dark-ish-blue background is quite uncomfortable.
>
> Or maybe make it small, below the last line?

Make the logo a 'watermark'?

Regards
Re: Postgresql + containerization possible use case
On Fri, Dec 10, 2021 at 3:24 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:
> Hi
> we are running some 140 remote servers (in the 7 seas via satellite connections), and in each one of them we run:
> - jboss
> - postgresql
> - uucp (not as a daemon)
> - gpsd
> - samba
> - and possibly some other services
>
> Hardware and software upgrades are very hard since there is no physical access to those servers by trained personnel, and also there is a diversity of software versions.
>
> The idea for future upgrades is to containerize certain aspects of the software. The questions are (I am not skilled in docker, only minimal contact with lxd):
> - is this a valid use case for containerization?
> - are there any gotchas around postgersql, the reliability of the system ?
> - since we are talking about 4+ basic services (pgsqk, jboss, uucp, samba), is docker a good fit or should we be looking into lxd as well?
> - are there any success stories of other after following a similar path?

My experience with LXD is that upon install you are on a regular update plan that is impossible to change. This means that your very expensive data connection will be preempted for updates at the whim of the Canonical crew. I suggest not using it (most who use it on wireless connections seem to have found the resultant issues less than wonderful - - cost on the data connection being #1, and the inability to achieve solid reliability crowding it for #2).

IMO HTH
Re: Postgresql + containerization possible use case
On Fri, Dec 10, 2021 at 6:02 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:
> On 10/12/21 1:24 μ.μ., o1bigtenor wrote:
>> On Fri, Dec 10, 2021 at 3:24 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:
>>> Hi
>>> we are running some 140 remote servers (in the 7 seas via satellite connections), and in each one of them we run:
>>> - jboss
>>> - postgresql
>>> - uucp (not as a daemon)
>>> - gpsd
>>> - samba
>>> - and possibly some other services
>>>
>>> Hardware and software upgrades are very hard since there is no physical access to those servers by trained personnel, and also there is a diversity of software versions.
>>>
>>> The idea for future upgrades is to containerize certain aspects of the software. The questions are (I am not skilled in docker, only minimal contact with lxd):
>>> - is this a valid use case for containerization?
>>> - are there any gotchas around postgersql, the reliability of the system ?
>>> - since we are talking about 4+ basic services (pgsqk, jboss, uucp, samba), is docker a good fit or should we be looking into lxd as well?
>>> - are there any success stories of other after following a similar path?
>>
>> My experience with LXD is that upon install you are now on a regular update plan that is impossible to change.
>
> Ehhmmm we are running some old versions there already (jboss, pgsql), LXD would not differ in this regard.
> What do you mean? that the updates for LXD are huge? short spaced/very regular?
> Can you pls elaborate some more on that?

Updates seem to happen very, very regularly. That means the system is often tied up with updating - - - NOT with doing its function(s). If there are any issues with the newest and bestest version - - - - well, you get to deal with not only a hung system (this happened a few times while I was trying things out, over a longer period of time as well) but a system that isn't doing what you want it to be doing.
I chose to space the updates out to once a month - - - then followed senior dev team suggestions to control that, and achieved a system that would not update anything. To make things even more interesting, it was not possible to even remove snapd and LXD. I was using rm -r carefully and there was some error message that I no longer remember. The end result was that I had to blow the system away and reinstall. I'm not a fan of doing this, nor of needing to do it just to remove a program I choose to remove.

My experience told me that the idea behind this central management (Ubuntu-controlled updating and upgrading) was most likely designed to facilitate a paid service from Canonical, the income from which would cause a very nice spike in value to Canonical, benefiting only a very tiny number of hands. The dev team at LXD was almost shrill in its defense of the 'we know best' thinking that this behavior depicted.

Somehow running bleeding edge hasn't ever given me reliability. When it comes to business - - - well, I want things to work - - - I'm not a programmer geek who is forever trying to 'improve' something. My existence is not validated by the umpteen hundred versions of my software available out there. My existence is better validated by what I can get done - - - - and not necessarily by what someone else says I have to do right now (even in the middle of the night!!!).

Does that help?

>> This means that your very expensive data connection will be preempted for updates at the whim of the canonical crew. Suggest not using such (most using such on wireless connections seem to have found the resultant issues less than wonderful - - cost (on the data connection) being #1 and the inability to achieve solid reliability crowding it for #2).
>
> Crew has their own paid service. Business connection is for business not crew.
> What I am interested in is, could docker be of any use in the above scenario? Containerization in general?
I know nothing about Docker, and as a result of my foray into containerization - - - - well - - - - I'm not a fan at present. I'm much more likely to set up a Raspberry Pi and use that to do things; if more is needed, I'm considering micro-controllers linked into SoCs (not necessarily Raspberry Pi but similar) and then possibly one central, perhaps full-size, server - - - but then that server would be busy.

I also use test systems at every level, so I experiment on testing systems, and things don't move to the 'real work horses' until I'm happy that they are stable and do what I want them to do. It doesn't necessarily make for cheap, but it has upped reliability and reduced stress (when a prima
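On the update-cadence complaint above: current snapd does expose some scheduling control, for whatever that's worth on a metered link. A sketch of the settings usually involved - - - the timer window and hold duration are examples only, and `refresh.hold` / `snap refresh --hold` need a reasonably recent snapd:

```shell
# Constrain automatic snap refreshes to a monthly window
sudo snap set system refresh.timer=last-sunday,02:00-04:00

# Or push refreshes out to a future date (example: 30 days from now)
sudo snap set system \
    refresh.hold="$(date --date='30 days' +%Y-%m-%dT%H:%M:%S%:z)"

# Inspect the schedule snapd thinks it is on
sudo snap get system refresh
```

None of this removes snapd's control, which was the objection above; it only moves the window around.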
Re: Can we get the CTID value
On Wed, Jan 19, 2022 at 1:39 PM Garfield Lewis wrote:
> Hi,
>
> I am creating a new type and would like to know if it was possible to access the CTID for the row affected by the INPUT and RECEIVE functions of the new type? Actually, would it be possible from the OUTPUT and SEND functions as well?

Please - - - what is CTID?

TIA
Re: Corruption of few tables
On Tue, Feb 25, 2025 at 6:03 AM sivapostg...@yahoo.com wrote:
> Hello,
>
> Using PostgreSQL 11.11, compiled by Visual C++ build 1914, 64-bit with Powerbuilder
>
> Recently, in one of our customers' places, we got an error while saving a record from our custom-built application using PowerBuilder. While saving a record, we compare a value from a table with a hard-coded value in application. Here the select command failed to fetch that particular record from the table and so the save failed.
>
> When we tried to read data from randomly selected tables [ from PGAdmin], we found that records could not be read from a set of consecutive tables. We re-started server [ Windows Server 2012 ] and tried again to read those tables. This time the failure was with only one table and select was success in all other tables.
>
> So we dropped that particular table [ around 33000 records ], after removing all foreign key links and re-created that table and inserted all records from a backup set.
>
> The problem got solved. Since this is not the first time, we are getting this error, I want to know
>
> 1. Why is corruption happening? How to find out the reason?
> 2. What should I do to prevent this from happening again?
>
> Any steps or suggestions to avoid such corruptions is really appreciated.

Not really a direct answer - - - but - - - have you considered upgrading PostgreSQL? Version 11 went EOL (end of life) over a year ago. Version 17, according to the charts I've found, is the current version.

HTH
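One concrete thing an upgrade enables here: data checksums make page corruption detectable early instead of surfacing as mystery SELECT failures. A sketch only - - - the data directory path is an assumption, `pg_checksums` is v12+, and `--enable` / `--check` both require the cluster to be cleanly shut down:

```shell
# New clusters: turn checksums on from the start
initdb --data-checksums -D /var/lib/postgresql/data

# Existing v12+ clusters (server stopped): enable after the fact
pg_checksums --enable -D /var/lib/postgresql/data

# Verify at any later point (server stopped) -- a non-zero exit
# means corrupt pages were found
pg_checksums --check -D /var/lib/postgresql/data
```

This won't tell you *why* the corruption happened (bad disk, bad RAM, unsafe fsync settings are the usual suspects), but it catches it before the application does.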