with int_seq as
(
    select ...
    from whatever
    where blah
)
select ...
from
    foobar a
join
    int_seq b
on
    a.foo = b.order_by
order by
    b.order_by
;
This dodges the temp table, and the optimizer can inline the
results; it probably gets you the fastest result.
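If the sequence really is just consecutive integers, a minimal
sketch of the same idea with generate_series() standing in for the
source table (the range and foobar's columns are assumptions here):

    with int_seq as
    (
        select n as order_by
        from generate_series( 1, 100 ) as g(n)
    )
    select
        a.*
    from
        foobar a
    join
        int_seq b
    on
        a.foo = b.order_by
    order by
        b.order_by
    ;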
--
Steven Lembark
Workhorse Computing
lemb...@wrkhors.com
+1 888 359 3508
On Sat, 18 Feb 2023 12:43:42 -0600
Ron wrote:
> > I think the answer is no but wanted to confirm here. this is what
> > my best friend told me.
There are caches for prepared statements, table rows, indexes.
Which aspect of the caches are you interested in?
--
Steven Lembark
Workhorse Computing
> numwinners       | integer       | not null default 1
> numrunners       | integer       | not null default 1
> numactiverunners | integer       | not null default 1
> totalmatched     | numeric(15,2) |
and horrid enough (5.00503 compatibility, really?) that
it's easier to just shell-script both builds and run it overnight.
Q: How un-optimized and ancient is the PG on CentOS?
--
Steven Lembark                 3646 Flora Place
Workhorse Computing            St. Louis, MO 63110
y to hamstring Perl at every step. Unless
things have changed massively, the Perl they distribute is not
only out of date, it's nearly broken out of the box. Try running
"perl -V" sometime and review the "config_args" values.
enjoi
--
Steven Lembark
geable, as is Perl's, and you'll get something
that is reasonably optimized for your platform and use.
--
Steven Lembark                 3646 Flora Place
Workhorse Computing            St. Louis, MO 63110
lemb...@wrkhors.com            +1 888 359 3508
> 2. postgres has no implicit plan cache, so every INSERT is planned
> again and again
Good point: If you are doing something in a loop you probably
want to use prepare_cached() to get the $sth, for both extract
and insert.
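The same point at the SQL level, as a minimal sketch with a made-up
foobar table: PREPARE parses and plans once, each EXECUTE reuses the
plan. prepare_cached() gets you the equivalent through DBI:

    prepare insert_row ( integer, text ) as
    insert into foobar ( id, name ) values ( $1, $2 );

    execute insert_row ( 1, 'first' );
    execute insert_row ( 2, 'second' );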
--
Steven Lembark                 3646 Flora Place
ly nice is managing the copy: generating a
useful basename, determining the path, deciding whether to zip
the output, etc. Using metadata to determine which of the tables
to back up and where to put the result, all of the automation
you'd want to get flexible backups is nice
select
    t1.a
  , t1.c "c1"
  , t2.c "c2"
from
    t1
full outer join
    t2
on
    t1.a = t2.a
and
    t1.b = t2.b
) z
on
    base.a = z.a
;
--
Steven Lembark
es on a:
select
    distinct
    base.a
  , z.c1
  , z.c2
from
    base
left join
(
    select
        distinct
        t1.a
      , t1.c "c1"
      , t2.c "c2"
    from
        t1
    full outer join
but you will need
to be specific as to the stats and situation in which they were
acquired so that people have enough context to give you a reasonable
answer.
--
Steven Lembark                 3646 Flora Place
Workhorse Computing            St. Louis, MO 63110
lemb...@wrkhors.com            +1 888 359 3508
ta -- it may be cheaper in the long run
to use "gzip --fast" with smaller, more repetitive content than
to pay the extra CPU charges for "gzip --best".
--
Steven Lembark                 3646 Flora Place
Workhorse Computing            St. Louis, MO 63110
t that from a temp table generated by the sequences
and you'll have a complete schedule.
--
Steven Lembark 3920 10th Ave South
Workhorse Computing Birmingham, AL 35222
lemb...@wrkhors.com            +1 888 359 3508
est.
If you have specific questions about the PG version, its features,
or how to translate specific Oracle-isms into PG please feel free
to ask -- that's what we're here for :-)
--
Steven Lembark 3920 10th Ave South
Workhorse Computing            Birmingham, AL 35222
support and you can have whatever you need.
Which gets back to the ecosystem: The PG *product* includes a level
of flexibility not available in SQL Server -- or Oracle for that
matter. SQL Server was heavily influenced by Sybase, which was based
on Ingres, which worked because it was an open, flexib
> Through a pipe to a different server.
No access to a server w/ sufficient space.
Looks like a compressed dump + restore...
Thanks
--
Steven Lembark 3920 10th Ave South
Workhorse Computing Birmingham, AL 35222
lemb...@wrkhors.com            +1 888 359 3508
On Wed, 3 Apr 2019 08:33:54 -0500
Ron wrote:
> On 4/3/19 8:18 AM, Steven Lembark wrote:
> > Trying to find a way of moving a large table between databases
> > in the same cluster. There is not sufficient space to copy the
> > contents -- the dedicated tablespace that fits
the tablespace in common, is there any way to move
the contents without a dump & reload?
Thanks
--
Steven Lembark 3920 10th Ave South
Workhorse Computing Birmingham, AL 35222
lemb...@wrkhors.com            +1 888 359 3508
For small, lightweight, portable jobs SQLite is quite nice for times
you don't want to create a full PG installation but still prefer SQL
for lookups.
--
Steven Lembark 3920 10th Ave South
Workhorse Computing            Birmingham, AL 35222
op execution without
having to nest everything one level deeper.
--
Steven Lembark 3920 10th Ave South
Workhorse Computing Birmingham, AL 35222
lemb...@wrkhors.com            +1 888 359 3508
0, ... 24:00:00, '[)' )
Nice thing about the partial index is that you can create it
on all of the non-ts fields for fast lookup by whatever and
only index the portion for today.
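A sketch of such a partial index, with made-up names; note the
predicate has to be immutable, so "today" is a literal and the
index gets rebuilt each day:

    create index events_today_idx
    on events ( customer_id, status )
    where ts >= '2024-06-01 00:00:00+00'
      and ts <  '2024-06-02 00:00:00+00'
    ;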
* Think about using a materialized view rather than a temp
table. May prove simpler.
all
sorts of rounding and conversion issues.
Frankly, if you have to run the query more than once I'd suggest
adding a view that does the select/convert for you (along with
dealing with any NULLs that creep into things). PG makes it quite
easy to add the view and quite inexpensive to use it.
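A minimal sketch of such a view, all names invented:

    create or replace view orders_clean
    as
    select
        order_id
      , coalesce( amount, 0.0 )::numeric(15,2) as amount
      , created_at::date                       as created_on
    from
        orders
    ;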
will normally not have more than one
second precision on times (timestamps are a different matter).
Q: What database are you using?
Postgres makes it easy enough to cast the values or compute the
difference in seconds.
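e.g., a one-liner sketch with invented column names:

    select extract( epoch from ( ts_end - ts_start ) ) as diff_seconds
    from events;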
--
Steven Lembark                 3920 10th Ave South
echo 'Config dir:';
ls -al $PWD;
exit 0;
Result: You have the standard paths where PG expects them, and
ls -al (or readlink) will tell you which host they were generated
for.
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
uire some
specialized database service for a data lake.
Short of that, look up the "jsonb" data type in Postgres.
The nice thing about using PG for this is that you can keep enough
identifying data and metadata in the relational system, where it is
easier to query, and the documents in jsonb alongside.
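A minimal sketch of that split, everything here invented:

    create table documents
    (
        doc_id    bigint      generated always as identity primary key
      , source    text        not null
      , added_on  timestamptz not null default now()
      , doc       jsonb       not null
    );

    create index documents_doc_gin on documents using gin ( doc );

    -- query by a key inside the document
    select doc_id, doc ->> 'title'
    from documents
    where doc @> '{"status": "active"}';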
instances.
If you need more help with this feel free to contact me outside
of the list.
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
21, 2 :
- Ben
- Scott
25, 2 :
- Andrew
- Larry
- Adam
...
Another Village :
...
Or you could break it into chunks using multiple documents within
the YAML file (notice the extra '---'):
---
Malayan Village :
21, 2 :
- Ben
- Scott
25, 2 :
- Andrew
- Larry
ppy, I just don't see the reasoning
behind having the enums supported automatically and the composites
not handled.
enjoi
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
On Thu, 10 May 2018 14:41:26 -0400
Tom Lane wrote:
> [ please keep the list cc'd ]
>
> Steven Lembark writes:
> > On Thu, 10 May 2018 11:52:48 -0400
> > Tom Lane wrote:
> >> Maybe you should show a more concrete example of what's not
> >> wo
/* What is the correct syntax for
 * select ... ?
 */
create or replace function
lat_lng_text
(
lat_lng_t
)
returns text language sql strict immutable as
$$
select ($1).lng::text || '-' || ($1).lat::text
$$;
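A hedged usage sketch, assuming lat_lng_t is a composite declared as
( lat numeric, lng numeric ); the places table is invented:

    select lat_lng_text( row( 38.6, -90.2 )::lat_lng_t );

    -- or against a table with a lat_lng_t column:
    select lat_lng_text( loc ) from places;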
--
Steven Lembark
ntirely. This would use the existing framework for detecting
the context of '\' as a command, just with a different magic
char.
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
tested.
At some point the data will have to hit code, especially in ETL or
reporting logic. Balancing the work in and out of the database
just makes sense. Which is probably why it doesn't happen...
--
Steven Lembark 1505 N
know the table names, how can they write those stored
> procedures?
One of the main reasons DBAs need to be pigs. Much spaghetti can be
avoided in ORM frameworks with well-designed views.
--
Steven Lembark 1505 National Ave
Workhorse Computing            Rockford, IL 61103
It will probably
> not cover the case when a record is INSERT'ed and then DELETE'd after
> the calculation has begun, but such cases should be even more rare than
> the DELETE's we're currently facing.
Think about using a couple of Materialized
ting data into separate [materialized]
views or temp tables keeps the joins more manageable and helps
release resources that might otherwise get consumed for the entire
query.
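A minimal sketch, names invented:

    create materialized view quarterly_usage_mv
    as
    select
        date_trunc( 'quarter', usage_date ) as quarter
      , sum( amount )                       as total
    from
        usage_log
    group by
        1
    ;

    -- re-run whenever the base data has settled:
    refresh materialized view quarterly_usage_mv;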
--
Steven Lembark 1505 National Ave
Workhorse Computing            Rockford, IL 61103
requirements and keep
them up to date?
When that happens people will use the databases as tools, ORM code
will be [semi-] maintainable, and we'll live happily ever after.
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
d be a lot of RAM just to hold all the locks.
All the more reason to use temp tables.
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
create temporary table
    foobar
as
select ...
from ...
;
This isolates the lock time to performing the bare select, after
which you can coalesce and sum to your heart's content without
locking any of it.
The point is performing the absolute minimum of processing to
generate the temp table so as to release any locks quickly and
avoid "group by" in the main join.
Yes, this might end up creating a large-ish temp table :-)
One other approach would be selecting only incremental data
(e.g., daily) which locks a much smaller subset of the rows
and aggregating the daily totals into quarterly, whatever.
Call it daily usage: select where usage_date = today's date, or
where the timestamp && a tstzrange of ( today 00:00, 24:00, '[)' ).
That might also simplify your query logic: all the coalesce ops end
up in your daily/weekly/monthly/whatever summary, and the quarterly
values in the reporting are just sum X group by.
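A sketch of the incremental version, all names invented:

    -- grab today's rows with minimal lock time
    create temporary table daily_usage
    as
    select usage_date, sum( amount ) as daily_total
    from usage_log
    where usage_date = current_date
    group by usage_date
    ;

    -- fold them into a persistent summary
    insert into daily_summary select * from daily_usage;

    -- quarterly reporting is then just sum X group by
    select date_trunc( 'quarter', usage_date ) as quarter
         , sum( daily_total )
    from daily_summary
    group by 1;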
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
g to run this, say, daily it's easy enough to create
a view and just "create temporary table foo as select * from bar"
for some collection of views and go from there. This makes it easier
to tune the queries on the back end without having to hack the front
end
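e.g., a hedged sketch reusing the foo/bar names above (the view body
is invented):

    create or replace view bar
    as
    select c.name, sum( o.total ) as total
    from customers c
    join orders o on o.customer_id = c.customer_id
    group by c.name
    ;

    create temporary table foo
    as
    select * from bar
    ;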
nd run automatically. This makes it really easy
to test multi-language interfaces to the database with a single
harness.
enjoi
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
constraints -- the other table has to ref this table's PK.
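A minimal sketch of that direction, tables invented: the other table
carries the foreign key, this one only needs its PK:

    create table parent
    (
        id integer primary key
    );

    create table child
    (
        child_id  integer primary key
      , parent_id integer not null references parent ( id )
    );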
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
that transaction. At that point the
database can be queried for data, updated as necessary, audited
against external data.
If you have a design with un-identified data it means that you
haven't normalized it properly: something is missing from the table
with un-identifiable rows.
--
Steven Lembark                 1505 National Ave
temporary table, process them,
post-process them into variables or destination table?
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508
little confused as to whether it's shared via schemas or databases,
or both.
Suggest looking at the XtremeData product. It is a parallel,
shared-nothing implementation of PG that should solve your
needs rather nicely.
You can see a description of their product at
https://xtremedata.com/
Happy scaling :-)
> In my case the upgrade routines run migration scripts which modify a
> database schema. Besides that I have a bunch of SQL files with the
> schema that can be used to initialize a new database.
Why not just update the procs, etc., that depend on the changes?
--
purpose is to keep the database
alive long enough to recover the "real" server.
Q: Why not just use RDS?
It'll be simpler.
--
Steven Lembark 1505 National Ave
Workhorse Computing
g this database might be a lot simpler with a view that breaks
the sub-fields out, or which has indexes on the sub-fields within
the packed data.
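A hedged sketch, assuming the packed field is colon-delimited text
(all names invented):

    create or replace view readings_v
    as
    select
        id
      , split_part( packed, ':', 1 ) as device
      , split_part( packed, ':', 2 ) as channel
    from
        readings
    ;

    -- or index a sub-field in place
    create index readings_device_idx
    on readings ( split_part( packed, ':', 1 ) );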
--
Steven Lembark 1505 National Ave
Workhorse Computing
Failed execution: $@";
which will be nearly as efficient in the long run.
That or just import the data from a csv/tsv (there are good
examples of data import available in the PG docs).
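e.g., a minimal sketch, table and path invented:

    copy staging from '/tmp/data.csv' with ( format csv, header true );

    -- or client-side through psql, reading the file locally:
    -- \copy staging from 'data.csv' with ( format csv, header true )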
--
Steven Lembark 1505 National Ave
Workhorse Computing
> I'm pretty new to postgres so I haven't changed any configuration
> setting and the log is a bit hard for me to make sense of :(
Diving into the shark tank is a helluva way to learn how to swim :-)
Are you interested in finding docs on how to deal with the tuning?
like disabling indexes during loading, etc. Maybe load them into
temp tables and then insert the temps into the destination tables.
The point is to amortize the memory load over the entire load
period.
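A sketch of the temp-table staging approach, names invented:

    create temporary table stage ( like target including defaults );

    copy stage from '/tmp/chunk_0001.csv' with ( format csv );

    insert into target select * from stage;
    truncate stage;   -- ready for the next chunk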
--
Steven Lembark                 1505 National Ave
and see if that solves your problem; if so then
how much swap is in use when you get the OOM error?
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com            +1 888 359 3508