t with via management channels rather
than technical ones. Besides, the likely outcome will be that your
developers will just adopt the practice of adding a serial column to
every table, which in itself doesn't really add any value.
Tim
--
Tim Cross
e, of course, unit tests are
> only as good as your imagination in devising tests.
+1. And a good test of your underlying data model is whether you can
identify a natural primary key. If you can't, chances are your model is
immature/flawed and needs more analysis.
--
Tim Cross
shots for backups
etc. In general, I usually find it best to work with the system admins
and follow their recommendations. Provide them with details of your
performance requirements and where you feel resource demands may peak
and let them propose what they feel would be best suited given whatever
inf
lass
> org.postgresql.ds.PGSimpleDataSource java.lang.NumberFormatException: for
> input string: "5432,10.16.10.13:5432"
>
> It looks like the first IP address has disappeared or is not set right.
>
> Thanks in advance for the help.
>
>
>
>
--
regards,
Tim
--
Tim Cross
values are just concatenated into the string?
thanks,
Tim
--
Tim Cross
tart?
After an Ubuntu upgrade, there is usually an upgrade log you can also
check to see if anything failed or provided warnings regarding
additional actions required.
HTH
Tim
--
Tim Cross
get the right balance between business
responsiveness to change and long term maintenance/viability.
Unfortunately, such developers are rare, so it will usually mean there is
a team of people with different skills and what will matter is how well
they are able to work together as a team and come up with an architecture
which satisfies the business requirements.
--
regards,
Tim
--
Tim Cross
evitably give up as they find that in the end,
they needed to maintain separate SQL or have separate database
maintenance teams anyway. You will only get seamless SQL across
different databases if your SQL is very basic, in which case, you
probably don't need a full blown RDBMS anyway. Most of the time, your
choice of database will be dictated by the dominant platform in the
market your application targets.
--
Tim Cross
. The wrapper
script uses that file to determine what databases to connect to or what
is the user default database cluster. It can also be overridden with a
local ~/.postgresqlrc, so check there are no old settings there as well.
Tim
--
Tim Cross
" policy,
just use full disk encryption and state that "all data is encrypted on
disk" and you're done.
Tim
--
Tim Cross
Then of course there is the bureaucratic protection - "Yes boss, all our
data is encrypted on disk."
Tim
"
--
Tim Cross
on the
>> drives when they leave the data center for replacement or disposal.
>
> That makes some sense, but years of added CPU overhead to mitigate a problem
> that could be solved by writing zeros to the disk as a step in the decomm
> process seems more than a bit wasteful.
Problem is that decomm process relies on someone actually following the
process. Too often, this part fails. The overhead with WDE these days is
minimal anyway. Good security is always about layers of protection and
should never just rely on a single control.
Tim
--
Tim Cross
highest data reliability we can get. The columns in the
table are small i.e. 7 double precision, 2 integer, 1 date and 2
timestamp.
Platform is RHEL, Postgres 9.6.8, filesystem xfs backed by HP
SAN. Current wal_sync_method is fsync.
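For reference, the current setting can be inspected and (I believe, since
wal_sync_method is reload context rather than restart) changed without a
full restart; pg_test_fsync is useful for comparing the options on your
own hardware. A minimal sketch, with the new value purely illustrative:

show wal_sync_method;
alter system set wal_sync_method = 'fdatasync';  -- example value only
select pg_reload_conf();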
Tim
--
Tim Cross
Andres Freund writes:
> Hi,
>
> On 2018-04-23 08:30:25 +1000, Tim Cross wrote:
>> the recent article in LWN regarding issues with fsync and error
>> reporting in the Linux kernel and the potential for lost data has
>> prompted me to ask 2 questions.
>
> No
> Thing about using a couple of Materialized Views for the worst
> part of it.
+1 re: materialised views - I have found them to be extremely useful for
situations where you want a snapshot of data and need to present it in a
way which is easier to process, especially when the underlying data is
changing faster than your reporting process can generate the report.
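A minimal sketch of the pattern (table and column names are made up):

-- snapshot an expensive aggregation once; readers get a stable, fast copy
create materialized view report_snapshot as
select c.region, sum(o.total) as total_sales
from orders o join customers c on c.id = o.customer_id
group by c.region;

-- re-generate the snapshot whenever the reporting cycle allows
refresh materialized view report_snapshot;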
--
Tim Cross
ester periods, but often cycle between two
addresses, their college and their parental home). The downside of
this approach is that applications which insert this information must
remember to execute both SQL statements. If you have multiple
interfaces, this might become a maintenance burden (one of the
advantages of using a DB function).
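A rough sketch of the function approach (all names are hypothetical) -
both inserts live in one place and every interface just calls the function:

create or replace function add_address(p_person_id integer, p_addr text)
returns void
language plpgsql
as $$
begin
  insert into address (person_id, address)
  values (p_person_id, p_addr);
  insert into address_history (person_id, address, recorded_at)
  values (p_person_id, p_addr, now());
end;
$$;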
Tim
--
Tim Cross
ake sure that
Postgres hasn't been installed as a snap package (I think the command is
snap list). I recall when I tried to install Postgres in 17.10, which I did
from the 'software centre' on Ubuntu, it initially installed it as a snap.
I had to remove the snap and then use apt to get the normal deb package
install. The snap package system does not use the normal locations for
config files.
This could also be an Ubuntu 18.04 issue. This version was only released a
couple of weeks ago and it is the first version which has Postgres 10 as
the default. I would try the following:
1. Use systemctl to stop the postgresql service.
2. Remove all Postgres packages, making sure all config files are also
removed.
3. Use synaptic to make sure all Postgres packages and associated config
files have been removed.
4. Reboot.
5. Run apt update and then apt upgrade.
6. Re-install using apt (not the software centre).
--
regards,
Tim
--
Tim Cross
ction
cannot be viewed in a single screen, it is probably too big and trying
to do too many different things which should be broken up into smaller
functions.
regards,
Tim
--
Tim Cross
, apt, uses gpg keys to verify the
authenticity of packages it downloads. My guess is that previously, you
only needed to ensure the package had a dependency on apt-key and now
apt has/is changing such that you need to have an explicit dependency on
either gnupg or gnupg2.
--
Tim Cross
Moreno Andreo writes:
> Hi Tim,
>
> On 29/05/2018 00:06, Tim Cross wrote:
>> Moreno Andreo writes:
>>
>>> Hi folks,
>>> I'm trying to install Postgresql 9.6 on a test machine in Google Cloud
>>> Platform
>>> After a
y if DNS related issues could be the cause
or whether it is just an AD specific issue.
Definitely check AD logs as well - the issue could be simply that adding
a new system has increased demand sufficiently to degrade performance of
AD (though I would expect there would be complaints from others outside
the DB area if this was the case).
The GSSAPI approach is not as complicated as it sounds, but it can be
affected by environment/infrastructure architecture and it will be
critical to ensure you have good time synchronisation. This can be
somewhat challenging in hybrid environments where you have a mix of
local and remote services. When it all works, it is great, but when you
do have a problem, diagnosis can be challenging.
The overall approach of having one identity with one password per entity
is IMO the right approach and your only hope for good password policy
application. However, getting to that point can be very challenging.
--
Tim Cross
along with other options) which needs to be
addressed.
I'm not aware of any guideline or school of thought which rules out this
as an option. Possibly the reason it appears to be used infrequently is
because it doesn't realise the benefits you might expect or is simply
not a problem in a majority of use cases.
Tim
--
Tim Cross
ting
frustrated as their service was being blamed for a local client
problem.
Key takeaway: this stuff can be complex to diagnose and a systematic
evidence-based investigation is often required - problem is, that takes
time and is seldom welcomed.
--
Tim Cross
low friction solutions.
For now, I'll just take a leaf out of 'the Guide', grab my towel and not
panic!
Tim
--
Tim Cross
ges have
support for processing CSV files, so you may be better off writing a
small 'wrapper' app which uses the same drivers and assuming your
database connectivity has been abstracted into some sort of
module/library/class, use the same interface to write to the database
that the application uses.
Tim
--
Tim Cross
from
some errors or providing more specific detail regarding the cause of the
error.
Be wary of which indexes you're defining on your table. Depending on the
type and number, these can have significant impact on insert times as
well.
--
Tim Cross
On Mon, 25 Jun 2018 at 11:38, Anto Aravinth
wrote:
>
>
> On Mon, Jun 25, 2018 at 3:44 AM, Tim Cross wrote:
>
>>
>> Anto Aravinth writes:
>>
>> > Thanks for the response. I'm not sure, how long does this tool takes for
>> > the 70GB dat
o the copy command until you get the right
combination of data format and copy definition.
It may take some effort to get the right combination, but the result is
probably worth it given your data set size i.e. difference between hours
and days.
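Something along these lines would be the starting point (file, table and
option choices are hypothetical and will need adjusting to your data):

\copy target_table from 'data.csv' with (format csv, header true, null '')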
--
Tim Cross
l keys are superfluous.
Have you tried doing the same thing where the fk keys and remote unique
index keys are equal in number?
--
Tim Cross
retty straight forward to drop the indexes and recreate them afterwards
via sql, so we didn't look for a tool as such.
As data is only inserted into this table and only by this process, we
also turned off autovacuum for this table, performing vacuum and analyze
manually after load.
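In outline, the steps were something like this (names hypothetical):

drop index if exists big_table_idx;
alter table big_table set (autovacuum_enabled = false);
-- ... bulk load here ...
create index big_table_idx on big_table (some_col);
vacuum analyze big_table;
alter table big_table set (autovacuum_enabled = true);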
Tim
--
Tim Cross
can then be performed by less skilled (and cheaper)
staff. Reality is, you probably want your more skilled and experienced
staff dealing with deployments and they will typically prefer the
flexibility of scripts over the constraints of a GUI.
Given the high level of variability in environments, you are probably
best off developing the process and scripts rather than trying to find
an existing tool. Putting a web front end is likely easier than finding
a tool flexible enough to fit with the environment which avoids
situations where the tool begins to dictate how you operate (tail
wagging the dog).
Tim
--
Tim Cross
inally promised, plus OSX/macOS has not made Java as welcome as
it used to be. If you do choose Java, it will need to work under OpenJDK
as this is what most Linux users will have installed.
Tim
--
Tim Cross
tion is misleading or confusing documentation.
My only real concern would be to further fracture the PG user base. If
there are barriers preventing users from adding documentation to the
existing documents or wiki, perhaps it would be better to try and
address those first?
Tim
--
Tim Cross
Dmitry Igrishin writes:
> Mon, 16 Jul 2018 at 1:14, Tim Cross :
>
>>
>> Your idea to make it integrate with user's preferred editor is a good
>> idea as editors are like opinions and certain anatomical parts -
>> everyone has one! Finding an appropriate AP
t is the interface that is the problem, we should try to address
that first rather than simply switching to something new which will have
its own problems. However, I don't know if this is the case or not.
Tim
--
Tim Cross
Peter J. Holzer writes:
> On 2018-07-18 08:09:35 +1000, Tim Cross wrote:
>> If using web widgets to author content on the wiki is the main
>> impediment for contributing content, maybe we should see if the wiki
>> provides alternative access methods. I've us
is the case especially in a library, it implies
> the user has their own computer. As libraries allow users/citizens to
> request books be purchased at no cost to the user/citizen, the
> argument that someone cannot afford a book is now a moot point.
>
> --
> *Melvin Davidson*
> *Maj. Database & Exploration Specialist*
> *Universe Exploration Command – UXC*
> Employment by invitation only!
>
--
regards,
Tim
--
Tim Cross
outright blocking
of access.
Finally, it is also important that all staff are aware of the
organisations policies, procedures and controls regarding data
access. They need to know what is expected of them and what is
unacceptable.
--
Tim Cross
manually or are such references abstracted such that
> the column name "text" is irrelevant to the actual structure of the
> index?).
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
--
regards,
Tim
--
Tim Cross
On Mon, 13 Aug 2018 at 12:23, Olivier Gautherot
wrote:
> On Sun, Aug 12, 2018 at 11:06 PM, Tim Cross wrote:
>
>>
>> On Mon, 13 Aug 2018 at 11:24, Adrian Klaver
>> wrote:
>>
>>> On 08/12/2018 05:41 PM, Samuel Williams wrote:
>>> > I wish the d
/env bash
.
>> Also, it is open source?
>
> No, it is freeware
oh well, too bad.
--
Tim Cross
ate database
procedures which present a 'mapped' view back to the framework layer
which hides the SQL from the framework. Works well, with the only main
downside being you now have SQL in a different (another) place, which
can make some people uncomfortable and can be a maintenance issue if all
your developers are just front-end devs who treat a database as just a
key/value repository.
Tim
--
Tim Cross
ch requires a 'generic' account with shared
passwords. Apart from the security implications, you will almost
certainly run into problems with auditors and many regulatory
standards.
--
Tim Cross
and large this ends up being really depends on the
particulars of the environment i.e. size, mix of platforms, etc.
I would start with either openLDAP or Active Directory (depending on
environment) and then PAM if more fine grained control is required that
cannot be satisfied via ldap/ad.
Tim
--
Tim Cross
nquiries and questions and inability to adapt to
changing business requirements in a timely manner. This is often the
most frustrating part - you can be an excellent technical person able to
define and implement really good technical solutions, but if the
customer is unable to use the solution effectively, it will be seen as a
technical failure.
Tim
--
Tim Cross
'y2012')
>
> select t1.cola t1.colb, t1.colc, t2.y2012 from . Where t2.y2012 != 0;
>
>
> to generalize
>
> select * from FUNCTION( year_column )
>
> select t1.cola t1.colb, t1.colc, t2.year_column from . Where
> t2.year_column != 0;
>
> is it possible? if so how?
>
>
>
--
regards,
Tim
--
Tim Cross
t into a
data model and then start development. Have a look at
https://en.wikipedia.org/wiki/Database_normalization for some background
on the normal forms and why they are useful.
HTH
Tim
--
Tim Cross
ribed so far which would indicate a necessity to have
more columns as you increase the number of agents. It would be normal to
have something like
| agent_id | year | cash_on_hand | bank | creditors | debtors |
and queries like
select sum(cash_on_hand)
from table
where agent_id = 'agent1'
and year = 2018;
to get the sum of cash on hand for agent1 in 2018.
instead of something like
| agent1_cash2018 | agent2_cash2017 |
which will not work well.
Tim
--
regards,
Tim
--
Tim Cross
his is
>> postgres compiled as a linux binary is running on MS's new-ish linux
>> emulation.
>
> Whee ... so you get to cope with all the bugs/idiosyncrasies of three
> operating system layers, not just one.
That comment has made my day - thanks Tom!
--
Tim Cross
ualbox with Linux, though these
days, the Windows environment is often too locked down to allow
this. I've not yet experimented with the virtual linux layer in w10.
--
Tim Cross
values 9,999 and
are the same values. Things can quickly become complicated as you can
have locale information at both the server and client end and they may
not be the same.
As you should always be sanitising your data before inserting into the
database anyway, you may as well just add this as another check at the
client end.
Tim
--
Tim Cross
possibly even
partitioning problems (depending on number of clients and topology
etc). These are well known problems in distributed or replication
systems.
You appear to be implementing a 'poor man's' replication system. There
are lots of complex issues to deal with and I wonder why you want to
take them on when PG has already got well tested and robust solutions
for this that would simplify your architecture and avoid the need to
re-implement functionality which already exists?
regards,
Tim
--
Tim Cross
u are in an impossible position and have no
hope of implementing anything that will be maintainable and you will
never be able to manage security.
I'm hoping you mean different agencies which need to add/modify rows
within the tables?
--
Tim Cross
- Make sure you have good logging enabled and check for things like
overly frequent writing of WALs. This can have significant impact on
performance. If your rows are large, you may be adversely impacting
performance writing the WAL cache etc.
--
Tim Cross
re you are up-to-date wrt latest
minor release for that version and would use clients with the same
version as the master.
--
Tim Cross
ltiple timezones, so even if your data is all
associated with a single country, you can potentially have multiple
conversion routines required. On most *nix systems, clock time is UTC as
well, so having everything in UTC really helps when you want to do
diagnosis across your database and system log files etc.
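For example, timestamptz values are stored normalised to UTC and only
converted on display, so per-client conversion is a one-liner:

set timezone = 'UTC';
select now();                                  -- rendered in UTC
select now() at time zone 'Australia/Sydney';  -- converted for one client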
--
Tim Cross
se to store ALL
session data remotely in the database. I have seen situations with a
very specialised application where having a more full featured LOCAL (to
the client) database server to record session information can be useful,
but this is rare.
--
Tim Cross
are also red herrings -
you don't have a web server inside the browser either. If you're not
trying to develop inside the browser but are in fact developing at the
node.js level, then you don't need webpack.
I think what you really need to do is step back and look closer at your
architecture. Typically, you would put all your database interaction
stuff in the web server using node.js. A common design pattern would be
to use one of the node.js web servers, like express (but there are
others) and have something like nginx as a proxy server in front of it.
You would then wrap your database interaction in a node.js API which you
would then call from your client browser using http/https or web
sockets.
--
Tim Cross
esources to establish, it does tend to result in
reduced maintenance costs in the longer term.
--
Tim Cross
Geoff Winkless writes:
> On Wed, 6 May 2020 at 00:05, Tim Cross wrote:
>> Where Tom's solution fails is with smaller companies that cannot afford
>> this level of infrastructure.
>
> Is there an objection to openldap? It's lightweight (so could
> reasonably b
set a flag so that if
something failed unexpectedly, you still have the report).
There are ways you can trigger periodic activity in the database, but to
be honest, CRON is easy and reliable and avoids needing to add
additional extensions etc to the DB. Turning off the report, changing
the time etc, is a simple crontab edit.
--
Tim Cross
eed to know how
the system will be used and what the expectations of the users are.
Maybe there is a legitimate business case to allow partial data entry,
in which case, you may need a different approach or a way to identify
partial/incomplete records etc.
--
Tim Cross
ure tag to the repository definition
in the source list file i.e. /etc/apt/sources.list or
/etc/apt/sources.list.d/postgres.list (or whatever you have called
it). Try adding the arch option as
deb [ arch=amd64 ] http://.
deb-src [ arch=amd64 ] ...
This should tell apt to only look for the amd64 packages.
--
Tim Cross
JSON in the database and writing
JSON queries using PG's SQL JSON support).
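For example, assuming a jsonb column called doc (all names made up):

select doc ->> 'customer' as customer
from orders
where doc @> '{"status": "shipped"}';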
--
Tim Cross
the
appropriate column. If you just want to protect against accidental
modification of the data or have reasonable confidence (for some measure
of reasonable), just having a checksum hash may be sufficient. The big
advantage of the simpler case with no actual data encryption is that
other clients can acces
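A minimal sketch of the checksum idea, assuming a text payload column
(md5 here is tamper-evidence, not cryptographic protection; names made up):

alter table documents add column payload_md5 text;
update documents set payload_md5 = md5(payload);
-- later, flag rows whose content no longer matches the recorded hash
select id from documents
where payload_md5 is distinct from md5(payload);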
Zwettler Markus (OIZ) writes:
> Hi Marco,
>
>
>
> How do you handle these conflicts? No longer updating that regularly or not
> at all anymore?
>
Not doing the updates is a poor option due to the potential security
vulnerabilities this may lead to. Likewise, delaying the application of
upda
ronment where someone else is responsible for all the DBA
stuff, Oracle is nice to work with. However, you tend to only be in that
situation when you're working in a large, usually bureaucratic,
environment, which tends to detract from the whole experience in other
ways. If you're unlucky enough to also be using any of the Oracle 'value
add' extensions, development frameworks, application layers etc, it is
really horrible and mind numbing.
Apart from this, Oracle licensing is an absolute nightmare. Beyond
the expense, the complexity is unbelievable and it is almost impossible
to know with any certainty what you will be paying in 12, 24 or more
months.
--
Tim Cross
probably just a dinosaur - I also prefer VI and Emacs as my primary
development environments and will use psql and sqlplus before Toad,
pgAdmin, sqlDeveloper etc.
Tim
P.S. for moving Oracle databases, we used to just use sed and change the
paths in the control file. Worked remarkably well. Often used this
technique to 'refresh' our dev or testing systems to current prod data.
--
Tim Cross
te
ways to work around them which are almost certainly going to be even
worse from a security perspective.
--
Tim Cross
than the last 'nextval' (up
until maxvalue). It is unaware of the use i.e. whether it will be used
in an insert or what table that insert is against. So I'm not sure what
your concern with a partitioned table is? Can you elaborate?
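For illustration - the sequence just hands out ascending values and the
consuming table (partitioned or not) is irrelevant to it (names made up):

create sequence gid_seq;
insert into measurements (gid, val)
values (nextval('gid_seq'), 42);  -- same behaviour for a partitioned table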
--
Tim Cross
he same value. You cannot make any additional
assumptions e.g. cannot assume gid values will be inserted in order or
there won't be 'gaps' etc.
--
Tim Cross
utions will allow you to set a max pool size. In addition to
enabling you to 'reserve' a set number of connections for a client, you
will know which client seems to be running out of connections, helping
to identify the culprit.
--
Tim Cross
at an update will be slow (or more specifically too slow for
acceptable performance, whatever that is). Assuming you will need to
take lots of special action may be premature - you may need to do none
or only a couple of things. Get a baseline first and you will know how
big of an issue you have. You will also be able to determine if what you
try has any benefit.
--
Tim Cross
Bee.Lists writes:
>> On Jun 23, 2020, at 8:09 PM, Tim Cross wrote:
>>
>> Sounds like your web app may not be closing connections once it has
>> finished with them? The fact your seeing idle connections would seem to
>> support this. I would be verifying t
in developers
not understanding the power of the underlying RDBMS and encourages poor
DB schema design. Those who support such technologies typically point to
the benefits of database neutrality such systems can provide. In over 30
years of DB work, I have yet to see such neutrality actually work. It is
a pipe dream.
--
Tim Cross
think I need some somewhat coordinated sequences", and poke at
> what the *real* requirement is there, and why someone thinks that the
> values should be "somewhat coordinated." Something seems off there.
I agree and was going to write something similar. All the 'solutions'
are problematic in one way or the other and seem to be due to a
misconception about the role of sequences or some requirement which
needs to be re-examined.
--
Tim Cross
aded
> as "2019-07-10 10:56:43.21" (change in Year).
>
> What could be the issue? we tried changing the default value to
> "localtimestamp".
>
My bet would be you have some SQL statements which include a value for
'createddate', so the default is not being used.
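A default only applies when the column is omitted (or the DEFAULT keyword
is used) - a quick sketch with hypothetical names:

create table t (val integer,
                createddate timestamp default localtimestamp);
insert into t (val) values (1);                -- default applies
insert into t (val, createddate)
values (2, '2019-07-10 10:56:43.21');          -- default silently bypassed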
--
Tim Cross
IDs coming from? I sometimes find this a sign
you could be re-structuring your query to be a join between two tables
where one table contains the IDs of interest rather than trying to embed
them into the query as part of a where clause.
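Something like this, with made-up names and IDs:

-- instead of: where t.id in (101, 205, 987, ...)
select t.*
from my_table t
join (values (101), (205), (987)) as ids(id) on t.id = ids.id;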
--
Tim Cross
omplex system. I've yet to find a good solution to that issue. It is
probably something which needs to be built into a tool. In the past,
I've used a modified sqitch approach that also maintains a small 'dbadm'
schema containing metadata to track dependencies. Although this worked
OK, especially if you understood how all the bits fit together, it still
had many corner cases and to some extent highlighted the complexities involved.
--
Tim Cross
from being in a location which makes backup,
restoration and migration easy/fast, where changes can be tracked, analysed and
rolled
back, where re-factoring can use advanced tools and can work across
projects, not just on a single script, where code sharing is easy and
where people can use their preferred tool rather than being forced to
use something which understands the database.
--
Tim Cross
Hemil Ruparel writes:
> Exactly my point. We need to raise the bar of the behavior we tolerate.
> This should not be tolerated. We need to set an example. The person in
> question clearly understood english and I have never seen a person who
> could use mailing lists but not google. So that's o
eable can save hours of
googling and wading through ill-informed and misguided advice.
--
Tim Cross
When the error occurred, he would take a screen shot of his window, send
it to the printer, wait for the printer to send back a PDF and then send
the issue with the PDF attached.
He was amazed when I showed him all he needed to do was highlight the
error message, copy it and paste it into the message. This guy was one
of the senior developers on the team.
I switched employers a few weeks later.
--
Tim Cross
ut I mostly read my Mails on a Mac,
> sometimes Windows, but never Linux. I have no mail access on Linux. At home I
> use Macs and at work I (have to :-() use Windows as desktops. So textmail is
> not an option for me.
>
There is nothing stopping you from using a text mail program, like mutt,
on macOS.
--
Tim Cross
Marc Millas writes:
> Hi,
>
> I know its quite general. It is as I dont know what approaches may exist.
>
> Requirement is extremely simple: Is there anyway, from a running postgres
> standpoint, to be aware that a ransomware is currently crypting your data ?
>
> answer can be as simple as: whe
gn.
Anything you try to bolt on now is likely to be complex and have
significant performance impact and that is assuming you can re-interpret
the requirement to make the objective feasible.
--
Tim Cross
Karthik K writes:
> exactly, for now, what I did was, as the table is already partitioned, I
> created 50 different connections and tried updating the target table by
> directly querying from the source partition tables. Are there any other
> techniques that I can use to speed this up? also whe
cause it to die).
>
A signal 9 typically means something is explicitly killing processes. I
would check your system logs in case something is killing processes due
to running out of some resource (like memory). If it is a fairly recent
Debian system, journalctl might be useful for checking.
--
Tim Cross
ANSI compliance. Still, that
can be very useful.
See https://github.com/lighttiger2505/sqls for one example of an LSP
server for SQL and https://microsoft.github.io/language-server-protocol/
for more background on LSP and what it can provide. Many editors,
including VSCode, VI, Emacs, TextMate etc now have some support for LSP.
--
Tim Cross
David G. Johnston writes:
> On Tue, Feb 16, 2021 at 4:28 PM Tim Cross wrote:
>
>>
>> David G. Johnston writes:
>>
>> > On Tue, Feb 16, 2021 at 3:43 PM Ron wrote:
>> >
>> >>
>> >> How does one go about syntax checking this?
Ron writes:
> On 2/16/21 5:44 PM, Tim Cross wrote:
>> Given the number, I think I would do the same. A good example of why
>> being 'lazy' can be a virtue. Faster and easier to write a procedure to
>> generate dynamic SQL than write out all those alter statemen
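Something along these lines (the role and schema names are made up):

do $$
declare r record;
begin
  for r in select tablename from pg_tables where schemaname = 'public'
  loop
    execute format('alter table %I owner to app_owner', r.tablename);
  end loop;
end $$;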
g. Stick it in your bin directory
and then call that instead of psql directly. Now all your SQL scripts
are just DDL/DML statements. Nice thing is you can do it in whatever
scripting language your most comfortable with - bash, perl, ruby,
python, whatever and it is available for whatever project your working
on.
--
Tim Cross
and sufficient for what I
need. As usual, it is more about requirements than brand and choosing
the right tool for the right job.
Tim
--
Tim Cross
(...);
is valid syntax. You don't need to do a full "insert into blah ()
values (...)" for each insert (see the sketch after this list).
2. If it really is an insert you want to do and you already have the
data in a file e.g. CSV or similar, then you can use the \copy command
to process the file, which is very fast.
3. Is it really insert or update you need?
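Re point 1, a minimal sketch of the multi-row form (names made up):

insert into blah (col1, col2) values
  (1, 'a'),
  (2, 'b'),
  (3, 'c');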
--
Tim Cross
per needs guidance in this area.
I also would argue that the PUBLIC schema is not in itself a security
risk. The problem is with inappropriate use of that schema. It depends
heavily on how the database is used. A database used for a single
application has a completely different security and risk profile from a
database used by multiple users for different applications. Arbitrary
rules such as 'you won't use PUBLIC' are almost always wrong and often
just make both developer and dba lives more complicated and harder to
maintain. Complexity is where things go wrong and where security tends
to break down.
Rather than requiring the developer to use a specific schema, I would
1. Ask them why they believe they have to use the PUBLIC schema
2. If the need to use the PUBLIC schema is confirmed, then work with the
developer to understand what the access requirements are and develop an
appropriate model.
3. If there is no dependency on using the PUBLIC schema, work with the
developer to assist them to resolve their access issues.
Depending on the size of the organisation and complexity of the
environment, choice of libraries and modules is not always as
straight-forward. It may not be easy to switch to another library/module
with better support for schemas etc or even to upgrade to a new
version. Often, such changes will need to be managed in stages and over
time. Work with the developers as complex environments will frequently
require a united voice in order to get changes approved or prioritised.
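As a rough illustration of point 2, moving an app off PUBLIC might look
something like this (all names hypothetical):

create schema app authorization app_owner;
grant usage on schema app to app_user;
alter role app_user set search_path = app;
revoke create on schema public from public;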
Tim
--
Tim Cross
how permissions are granted, then you
should be able to revoke them effectively.
Tim
--
regards,
Tim
--
Tim Cross
;
> I guess most people do not store Blobs in PostgreSQL.
>
> Where do you store Blobs?
>
> (In my case Blobs are PDF/image files with size up to 20 MByte.
> I do not talk about very big blobs which are several hundred MByte)
--
Tim Cross
I've seen in pgAdmin4, I suspect it will get
to a usable and stable state eventually and will likely be a pretty good
replacement for pgAdmin3. However, currently, I find it still a little
too unstable.
Personally, I'm pleased I spent the time to get my Emacs and psql
integration working to the point that I do 90% of what I need in psql
--
Tim Cross
orrible to work with, but that is a
different issue.
There are some bloody awful Java applications out there, but this really
means, assess on a per app basis, not a blanket ban on all of
them. There are insecure and poorly written apps in every language.
Tim
--
Tim Cross
hat
> would be a good choice for PG version ?
>
> Also, since the server will be a VM, are there any special
> recommendations/suggestions might I forward in the request (install
> options, tuning options, other) ?
>
> Thanks !
--
Tim Cross