On Sun, Jul 4, 2021 at 1:20 PM Adrian Klaver wrote:
>
> In any case I don't see you getting a 9.5 version on the laptop in the
> package directories. Pretty sure the Fedora 30 repos will not have 9.5
> and the Postgres repos don't go back to Fedora 30. So if you want a 9.5
> instance you will need
On Mon, Jun 7, 2021 at 2:01 PM David Gauthier wrote:
> Thanks Joe. I think the nonweekendhours solution should be good enough
> for what I need.
>
> Yes, holidays too would be the best. But for practical purposes,
> excluding Sat&Sun is good enough for this particular problem.
>
I've solved th
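A minimal sketch of one way to get non-weekend hours, shown here in Python
with psycopg2; the query, function name, and connection handling are
illustrative assumptions, not the poster's actual solution:

    import psycopg2  # assumed driver; any DB-API connection works the same way

    # Count the whole hours between two timestamps whose starting hour falls
    # on a weekday (ISO day-of-week 6 = Saturday and 7 = Sunday are excluded).
    NONWEEKEND_HOURS_SQL = """
        SELECT count(*) AS nonweekend_hours
        FROM generate_series(%s::timestamptz,
                             %s::timestamptz - interval '1 hour',
                             interval '1 hour') AS h
        WHERE extract(isodow FROM h) NOT IN (6, 7)
    """

    def nonweekend_hours(conn, start, stop):
        with conn.cursor() as cur:
            cur.execute(NONWEEKEND_HOURS_SQL, (start, stop))
            return cur.fetchone()[0]

Holidays would need an extra exclusion (e.g. a calendar table), which is why
excluding only Saturday and Sunday is the pragmatic cut-off mentioned above.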
On Mon, Jun 7, 2021 at 9:24 AM Alan Hodgson wrote:
> On Mon, 2021-06-07 at 09:22 -0700, Rich Shepard wrote:
>
>
> salmo, 127.0.0.1 is the server/workstation that has everything installed.
> It is localhost.
> 127.0.0.1 localhost.localdomain localhost
> 127.0.1.1 salmo.appl-ecosys.
On Mon, Jun 7, 2021 at 5:06 AM Vijaykumar Jain <vijaykumarjain.git...@gmail.com> wrote:
>
> I got a feeling it sounded rude to the top post, despite me not having
> even an iota of intention for it to come across that way.
>
It's not so much that it is rude in the way that typing in all-caps is
rude; it's more that it
f they are
idle for even a brief period.
On Thu, May 27, 2021 at 3:35 PM Rob Sargent wrote:
> On 5/27/21 4:25 PM, Sam Gendler wrote:
>
> That sure looks like something is causing your connection to have a
> transaction rollback. I haven't worked in Java in far too long, but it
determine if the database connection is open. I can
just about guarantee that your connection pool has a parameter which allows
you to specify a query to execute when a connection is requested.
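As a concrete illustration of that pool setting, here is the same idea in
Python rather than Java: SQLAlchemy's pool_pre_ping option runs a cheap
liveness check each time a connection is checked out, which is what the
check-on-borrow / connection-test-query knobs in Java pools amount to. The
DSN below is a placeholder:

    from sqlalchemy import create_engine, text

    # pool_pre_ping issues a lightweight test on every checkout, so a
    # connection the server has silently dropped gets replaced instead of
    # being handed to the application.
    engine = create_engine(
        "postgresql+psycopg2://app@localhost/appdb",  # placeholder DSN
        pool_pre_ping=True,
    )

    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))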
On Thu, May 27, 2021 at 2:58 PM Rob Sargent wrote:
> On 5/27/21 3:08 PM, Sam Gendler wrote:
>
> The
The same JDBC connection that is resulting in lost data? Sounds to me like
you aren't connecting to the DB you think you are connecting to.
On Thu, May 27, 2021 at 2:01 PM Rob Sargent wrote:
> On 5/27/21 7:45 AM, Philip Semanchuk wrote:
>
> On May 26, 2021, at 10:04 PM, Rob Sargent wrote:
On Thu, Sep 24, 2020 at 10:40 PM wrote:
>
> Well not partial as in incremental. Instead dump only some portion of the
> schema with or without its associated data.
>
> It's funny that you should bring that up, considering how it was one of my
> points... See the point about pg_dump's bug on Windows
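For the "dump only some portion of the schema, with or without its data"
case, a rough sketch using standard pg_dump selection switches (-n, -t,
--schema-only), wrapped in Python; database, schema, and file names are
placeholders:

    import subprocess

    def partial_dump(dbname, schema, table=None, with_data=True,
                     outfile="partial.dump"):
        # -Fc: custom format; -n: limit to one schema; -t: limit to one table
        cmd = ["pg_dump", "-Fc", "-f", outfile, "-n", schema]
        if table:
            cmd += ["-t", f"{schema}.{table}"]
        if not with_data:
            cmd.append("--schema-only")  # object definitions only, no rows
        cmd.append(dbname)
        subprocess.run(cmd, check=True)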
On Sun, Jul 5, 2020 at 11:41 AM Michel Pelletier wrote:
>
>
> I'm working on an approach where the decrypted DEK only lives for the
> lifetime of a transaction, this means hitting the kms on every transaction
> that uses keys. It will be slower, but the time the decrypted key stays in
> memory w
On Fri, Feb 7, 2020 at 11:14 AM Justin wrote:
>
> On Fri, Feb 7, 2020 at 1:56 PM Sam Gendler wrote:
>
>> Benchmarks, at the time, showed that performance started to fall off due
>> to contention if the number of processes got much larger. I imagine that
>> th
On Fri, Feb 7, 2020 at 5:36 AM Steve Atkins wrote:
> What's a good number of active connections to aim for? It probably depends
> on whether they tend to be CPU-bound or IO-bound, but I've seen the rule of
> thumb of "around twice the number of CPU cores" tossed around, and it's
> probably a dece
If I were in a hurry to implement this, and I had a user base that wasn't
very experienced with managing relational databases, I'd write some code to
automatically and periodically build a docker image with the latest data in
it (however often is sufficient to meet your needs), and then I'd set up a
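A rough sketch of that periodic "bake the current data into an image" step,
assuming a build directory whose Dockerfile starts from the official postgres
image and copies dump.sql into /docker-entrypoint-initdb.d/ so it is restored
on first start; every name, tag, and path here is a placeholder:

    import datetime
    import subprocess

    def build_snapshot_image(dbname, build_dir="db-snapshot"):
        tag = "registry.example.org/db-snapshot:" + datetime.date.today().isoformat()
        # Export the current data, then bake it into a fresh image.
        subprocess.run(["pg_dump", "--no-owner", "-f",
                        f"{build_dir}/dump.sql", dbname], check=True)
        subprocess.run(["docker", "build", "-t", tag, build_dir], check=True)
        subprocess.run(["docker", "push", tag], check=True)
        return tag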
On Sun, Apr 8, 2018 at 15:37 g...@luxsci.net wrote:
>
>
> On April 8, 2018 02:40:46 pm PDT, "Guyren Howe" wrote:
>
> One advantage to using logic and functions in the db is that you can fix
> things immediately without having to make new application builds. That in
> itself is a huge advantage,
Why not use EBS storage, but don’t use provisioned IOPS SSDs (io1) for the
EBS volume? Just use the default storage type (gp2) and live with the 3000
IOPS peak for 30 minutes that it allows. You’d be amazed at just how much
I/O can be handled within the default IOPS allowance, though bear in mind
I think there's a more useful question, which is why do you want to do
this? If it is just about conditional backups, surely the cost of backup
storage is low enough, even in S3 or the like, that a duplicate backup is
an afterthought from a cost perspective? Before you start jumping through
hoops