Andy Hartman wrote:
> How thru a simple query can I make sure data matches and I can display it
>
> On Thu, Jan 9, 2025 at 3:16 PM Andy Hartman wrote:
>
> > I have migrated over a Table from Mssql that had an Image column I now
> > have it in Postgres
Within psql, the bytea field can b
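As a rough sketch (table and column names are made up), comparing lengths and checksums on a few rows is usually enough to confirm that the migrated bytea data matches:

SELECT id,
       octet_length(image_col)            AS nbytes,
       md5(image_col)                     AS checksum,   -- compare with a checksum computed on the MSSQL side
       left(encode(image_col, 'hex'), 32) AS first_bytes
FROM migrated_table
ORDER BY id
LIMIT 10;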
Daniel Frey wrote:
> I tried to understand the interaction between Pipeline Mode and
> Single Row Mode / Chunked Rows Mode. It seems that it works
> sometimes, but I don't get the feeling that it was deliberately
> designed to work consistently
It's supposed to work, and there are regress
Stijn Sanders wrote:
> From what I notice using LibPQ, it appears a query needs to complete
> before resulting data is being transferred to the client. Please
> correct me if I'm wrong.
No, the query does not need to complete. If you run something like
"select * from bigtable" in single-r
Dominique Devienne wrote:
> Also, does the code for per-type _send() and _recv() functions
> really change across versions of PostgreSQL? How common are
> instances of such changes across versions? Any examples of such
> backward-incompatible changes, in the past?
For the timestamp types,
Dmitry O Litvintsev wrote:
> Just want to make clear (sorry I am slow on uptake). I should first
> REINDEX and then ALTER DATABASE xxx REFRESH COLLATION VERSION, or
> first ALTER and then REINDEX or does the order of these actions
> matter at all?
The order does not matter. The ALTER DATAB
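For reference, a minimal sketch of the two commands, assuming the database from the warning is named xxx:

ALTER DATABASE xxx REFRESH COLLATION VERSION;
REINDEX DATABASE xxx;   -- run while connected to xxx; the order of the two commands is irrelevant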
Dmitry O Litvintsev wrote:
> WARNING: database "xxx" has a collation version mismatch
> DETAIL: The database was created using collation version 2.17, but the
> operating system provides version 2.34.
> HINT: Rebuild all objects in this database that use the default collation
> and run
Alban Hertroijs wrote:
> 1). The first function has as a drawback that it changes the time zone for
> the entire transaction (not sufficiently isolated to my tastes)
But if you add in the function declaration
SET timezone TO 'Europe/Amsterdam'
like in your 2nd function, or simply
S
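A minimal sketch of such a per-function setting (function name and body are invented):

CREATE FUNCTION to_amsterdam_local(ts timestamptz) RETURNS timestamp
LANGUAGE sql STABLE
SET timezone TO 'Europe/Amsterdam'   -- applies only for the duration of the call
AS $$ SELECT ts::timestamp $$;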
arun chirappurath wrote:
> Do we have any scripts that create a temp table with column names
> from the first row of csv files?
csvkit [1] does that.
[1] https://csvkit.readthedocs.io/en/latest/
Best regards,
--
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite
Igniris Valdivia Baez wrote:
> 3. After the revision the data is loaded to the destination database and
> here is where I believe the issue is, because the data is reviewed in
> Windows and somehow Pentaho is not understanding correctly the
> interaction between both operating systems.
On Wind
Igniris Valdivia Baez wrote:
> hello, thank you for answering, it's not a typo, in the attachments
> you can see that this is actually my collation, also a pic of the
> problem for more clarification,
This character is meant to replace undisplayable characters:
From https://en.wikipedia.
Jeff Gerbracht wrote:
> It would be great if there was a way to set exceptions to the 'word'
> delimiter list used in the INITCAP() function. For example, I have
> hyphenated words like blue-green or possessives and contractions like
> don't and cat's tail
> These become Blue-Green, Don
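If the goal is to treat only whitespace as a word delimiter (so blue-green and cat's keep their inner letters lower-case), one possible workaround is to split and recompose manually; a sketch with a literal string:

SELECT string_agg(upper(left(w, 1)) || lower(substr(w, 2)), ' ' ORDER BY ord)
FROM regexp_split_to_table($$blue-green don't and the cat's tail$$, '\s+')
     WITH ORDINALITY AS t(w, ord);
-- => Blue-green Don't And The Cat's Tail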
Bryn Llewellyn wrote:
> 2023-02-20 12:42:44.993 PST [2540504] d0$u0@d0 LOG: 0: statement:
> insert into s.t(v) values(17); insert into s.t(v) values(42);
> 2023-02-20 12:42:44.993 PST [2540504] d0$u0@d0 LOCATION: exec_simple_query,
> postgres.c:971
>
> It seems a bit odd that psql h
Karsten Hilbert wrote:
> The database encoding is UTF8. That br_FR@euro.LATIN9 had
> _not_ been added manually. It is also not actively used in my
> database(s).
br_FR@euro.LATIN9 cannot be used actively in an UTF-8 database
because it's for a different encoding than the database.
It was
Karsten Hilbert wrote:
> Which is why my question still stands: does the above
> three-strikes operation safely take care of any collation
> issues that may currently exist in a database ?
For the indexes, yes, but theoretically, all constraints involving collatable
types need a recheck.
F
Dominique Devienne wrote:
> PostgreSQL bytea is much better and simpler, except limited to 1GB...
> Bytea also has no direct random access, except via substr[ing], but
> how efficient and "random access" is that?
Bytea contents are compressed before being sliced (in chunks of
TOAST_MAX_CH
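As a consequence, random access with substring() is only cheap when the value is stored out of line without compression; a sketch with hypothetical names (the storage change only affects values written afterwards):

ALTER TABLE blobs ALTER COLUMN data SET STORAGE EXTERNAL;  -- out-of-line, uncompressed TOAST
-- substring() then only fetches the chunks covering the requested range:
SELECT substring(data FROM 1000001 FOR 65536) FROM blobs WHERE id = 42;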
Dominique Devienne wrote:
> the fact the lo table is unique for the whole database would allow
> users to see blobs from any schema, as I understand it.
Direct access to pg_largeobject is only possible for superusers.
If lo_compat_privileges is on, any user can read any large
object with
Laurenz Albe wrote:
> > Or is there a libpq API on PGresult that would allow to get the type
> > of statement the result is from?
>
> The command tag is not what you look at.
Yet that's what psql does. From PrintQueryResult():
/* if it's INSERT/UPDATE/DELETE RETURNING, also prin
jian he wrote:
> ./configure --with-perl --with-python --with-icu
> ICU_CFLAGS='-I/usr/include/unicode' ICU_LIBS='-L/usr/lib/icu'
> --enable-debug --with-pgport=5440
The location with -L is not sufficient in ICU_LIBS.
The list of libraries to link with should be put as well.
For instanc
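A hedged guess at the complete setting (the exact library names can vary across ICU installations):

./configure --with-perl --with-python --with-icu \
  ICU_CFLAGS='-I/usr/include/unicode' \
  ICU_LIBS='-L/usr/lib/icu -licui18n -licuuc -licudata' \
  --enable-debug --with-pgport=5440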
WR wrote:
> First run worked.
> Second run worked.
> Then I changed to SET standard_conforming_strings = off;
> Third run worked.
> Fourth run throw the error
> Then I changed back to SET standard_conforming_strings = on;
> Fifth run throw the error too.
> And only adding E and second bac
WR wrote:
> But one question is left.
>
> When I read this valid data into a c++ std::string (and I checked that
> the single backslashes are still there). Why can't I put this
> SQL-command to a pqxx-transaction and execute it. It looks like the
> pqxx-transaction unescapes the bytea-
Jim McNamara wrote:
> The package doesn't contain the sql or control file in
> /usr/share/pgsql/extensions. It did install .so files though. I ran sudo
> dnf repoquery -l package name and looked at the files.
It's probably this library:
http://gawkextlib.sourceforge.net/pgsql/gawk-pgsql.h
jian he wrote:
> So which website can I get the info about the "kf" and "u".
https://unicode.org/reports/tr35/#u_Extension
https://www.unicode.org/reports/tr35/tr35-collation.html#Collation_Settings
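For instance, both keywords can be used when creating an ICU collation (a sketch; the collation names are arbitrary):

CREATE COLLATION upper_first (provider = icu, locale = 'en-u-kf-upper');     -- kf: case-first
CREATE COLLATION ci_level2   (provider = icu, locale = 'und-u-ks-level2',
                              deterministic = false);                        -- ks: strength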
Best regards,
--
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite
Dominique Devienne wrote:
> so I can easily do that "\n" encoding myself, as a post-processing on
> the buffer I get back.
Alternatively, it might be easier to use the default TEXT format of
COPY rather than CSV, as the TEXT format already produces \n for
line feeds, along with half a d
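A quick way to see the difference between the two formats (literal string, no real table needed):

COPY (SELECT E'line1\nline2') TO STDOUT;                -- text format: line1\nline2
COPY (SELECT E'line1\nline2') TO STDOUT (FORMAT csv);   -- csv: the field is quoted and keeps the raw newline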
Dominique Devienne wrote:
> These values are 'normal'. I'm not used to CSV, but I suppose
> such newlines
> must be encoded, perhaps as \n, since AFAIK CSV needs to be 1 line per row,
> no?
No, but such fields must be enclosed by double quotes, as documented
in RFC 4180 https://datatracker
Matthias Apitz wrote:
> I wanted to look into a column of type varchar converting the content
> with ::bytea to a hex string, but this fails when the column contains a
> backslash:
Yes, casting from text to bytea cannot be used for that.
The convert_to() function must be used instead.
Fr
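A minimal sketch with hypothetical names:

SELECT encode(convert_to(the_column, 'UTF8'), 'hex') FROM the_table;
-- convert_to() yields the column's bytes in the given encoding as bytea,
-- which encode(..., 'hex') then renders as a hex string, backslashes included.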
Peter J. Holzer wrote:
> > My use case for such a feature are tables which contain one column (or a
> > small number of columns) which you usually don't want to select: A bytea
> > column or a very wide text column. In a program I don't mind (in fact I
> > prefer) listing all the columns e
Florents Tselai wrote:
> I have a table storing mostly text data (40M+ rows) that has
> pg_total_relation_size ~670GB.
> I’ve just upgraded to postgres 14 and I’m now eager to try the new LZ4
> compression.
You could start experimenting with data samples rather than the
full contents.
FW
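One possible way to run such an experiment on a sample (hypothetical table/column names, PostgreSQL 14+):

CREATE TABLE sample_lz4 (LIKE big_table INCLUDING ALL);
ALTER TABLE sample_lz4 ALTER COLUMN payload SET COMPRESSION lz4;
INSERT INTO sample_lz4 SELECT * FROM big_table TABLESAMPLE SYSTEM (1);  -- roughly 1% of the pages
SELECT pg_column_compression(payload) AS method,
       count(*),
       pg_size_pretty(sum(pg_column_size(payload))::bigint) AS stored_size
FROM sample_lz4 GROUP BY 1;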
Masih Tavassoli wrote:
> But there doesn't seem to be a way doing it in postgres.
> Has anyone got any suggestions?
RFC 7519 says:
A JWT is represented as a sequence of URL-safe parts separated by
period ('.') characters. Each part contains a base64url-encoded
value. The number
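So the payload can be decoded in SQL by translating base64url to plain base64, restoring the padding, and decoding; a sketch with a toy token (the signature part is fake):

WITH t(jwt) AS (
  VALUES ('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxMjM0NTY3ODkwIn0.not-a-real-signature')
)
SELECT convert_from(
         decode(rpad(translate(split_part(jwt, '.', 2), '-_', '+/'),
                     4 * ceil(length(split_part(jwt, '.', 2)) / 4.0)::int,
                     '='),
                'base64'),
         'UTF8') AS payload     -- => {"sub":"1234567890"}
FROM t;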
Hamann W wrote:
> in unicode letter ä exists in two versions - linux and windows use a
> composite whereas macos prefers
> the decomposed form. Is there any way to make a semi-exact match that
> accepts both variants?
Aside from normalizing the strings into the same normal form
before com
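With PostgreSQL 13+ and a UTF-8 database, normalize() does that comparison directly in SQL:

SELECT U&'\00E4' = U&'a\0308'                                  AS raw_equal,         -- false
       normalize(U&'\00E4', NFC) = normalize(U&'a\0308', NFC)  AS normalized_equal;  -- true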
(resent to the list, previous post was rejected)
Pavel Stehule wrote:
> > *That* is a function of how Postgres set returning functions work, and not
> > specific to crosstab(). It is not easily fixed. Patches to fix that would
> > be
> > welcomed!
> >
>
> https://www.postgresql.org/messag
Thomas Kellerer wrote:
>arthur=> \set AUTOCOMMIT off
Alternatively, start an explicit transaction block with BEGIN.
The point is that the lifespan of the cursor is the transaction block
in which it's instantiated.
>arthur=> select * from get_results();
>get_results
>
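A sketch of the whole sequence in psql, assuming get_results() returns a refcursor:

BEGIN;
SELECT get_results() AS cur \gset
FETCH ALL FROM :"cur";
COMMIT;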
David Gauthier wrote:
> I can avoid the error by just throwing a namespace in there...
> atletx7-reg036:/home/dgauthie[ 214 ] --> setenv PGOPTIONS "-c
> 'os.user=$USER' "
> But once in, "show os.user" is undefined.
It's documented to work [1], but you need to remove these single
quotes. F
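A minimal sketch without the quotes, using current_setting() to read the value back:

setenv PGOPTIONS "-c os.user=$USER"
psql -c "SELECT current_setting('os.user')"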
Mohamed Wael Khobalatte wrote:
> We lock the table as a precaution, with the understanding that we are
> undergoing a "small" downtime to finish replacing the int id by the new
> bigint
Ah, sorry I overlooked that most row updates are done
pre-transaction in a preliminary step:
/* in ba
Mohamed Wael Khobalatte wrote:
> > alter table change_seq alter COLUMN id set data
> > type bigint;
> This is significant downtime, since it locks exclusively, no? We want to
> avoid that.
Well, in the steps you mentioned upthread, the transaction starts by
doing LOCK TABLE some_table, s
TALLURI Nareshkumar wrote:
> [0]postgres@axmdevhkdb008$ [PHKGAXMD008] psql
> psql: FATAL: database "postgres" does not exist
That's not necessarily a problem. The "postgres" database is generally
empty and some installations don't even have one.
Use "psql -d template1" instead, or s
Jim Hurne wrote:
> "Daniel Verite" wrote on 06/22/2020 06:00:37 PM:
> > If you can arrange a maintenance window, a faster way to rebuild
> > pg_largeobject when it contains mostly empty pages can be to:
>
> Thanks Daniel. If we cannot arrange for a
Jim Hurne wrote:
> We are of course going to continue to try different things, but does
> anyone have any other suggestions on what we should be looking at or what
> settings we might want to adjust?
If you can arrange a maintenance window, a faster way to rebuild
pg_largeobject when it
Thorsten Schöning wrote:
> > The caveat you mention about IF NOT EXISTS does not apply to
> > temporary tables, as they're not shared across sessions.[...]
>
> That's what I understood as well, but I'm creating those concurrently
> WITHIN one and the same session and transaction. :-)
Bu
Thorsten Schöning wrote:
> I'm using "CREATE TEMP TABLE IF NOT EXISTS [...] AS [...]" and the
> associated queries can take a long time. So the following lists some
> questions about executing those concurrently, even though I've
> already read threads like the following:
> > The bottom
Adrian Klaver wrote:
> On 4/5/20 9:46 AM, Adrian Klaver wrote:
> > On 4/5/20 5:50 AM, arden liu wrote:
>
> > 4) I don't see anything wrong the statements, so I am wondering if it is
> > a shell issue?
>
> Seems to be. I removed the RETURNING *_id from the INSERT statements and
> the fi
pabloa98 wrote:
> When I have a medium number of sequences I will report how it behaves. It
> will take some time though.
Be aware that creating the sequences on the fly has the kind of race
condition that you wanted to avoid in the first place.
For instance consider this execution in two
srikkanth wrote:
> Can you please help me in writing the syntax for the below mentioned
This looks like an UNPIVOT operation.
Here's a generic method that does this without having to specify the
columns individually, with the help of json functions:
SELECT ID, key, value FROM
(SELECT
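The original query is cut off above; a hedged reconstruction of the idea (hypothetical source table src with an id column):

SELECT id, key, value
FROM (SELECT id, row_to_json(s) AS j FROM src s) AS sub,
     json_each_text(sub.j) AS kv(key, value)
WHERE key <> 'id';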
Richard van der Hoff wrote:
> synapse=> select count(*), room_id, type, state_key from
> current_state_events group by 2,3,4 order by count(*) DESC LIMIT 2;
> count | room_id | type | state_key
> ---+---+---
Richard van der Hoff wrote:
> So, question: what could we be doing wrong to get ourselves into this
> situation?
OS/libc upgrades without reindexing come to mind.
See https://wiki.postgresql.org/wiki/Collations
> * At least one user reports that he has recently migrated his database
>
PegoraroF10 wrote:
> I have a
> Replace(Replace(Replace(Replace($$Text,with;On'It"$$,',','chr(59)'),';','chr(44)'),'"','chr(34)'),,'chr(39)')
> It works but I would like to call just one Replace.
For performance, it might be the fastest method, despite the
lack of elegance and the mul
Tom Lane wrote:
> regression=# select regexp_split_to_array('junkfoolbarfoolishfoobarmore',
> 'foo|bar|foobar');
> regexp_split_to_array
> ---
> {junk,l,"",lish,more}
> (1 row)
>
> The idea would be to iterate over the array elements, tracking the
> corresponding posi
Tom Lane wrote:
> I'd try forcing the match to be the whole string, ie
>
> ^(.*?)(foo|bar|foobar)(.*)$
>
> which would also save some work for restarting the iteration,
> since you'd have already captured the all-the-rest substring.
In that case regexp_matches will return 0 or 1
Hi,
When looking into how to implement a global replace of multiple
substrings (each with their own replacement) in sql
or plpgsql, I'm wondering if/how an RE with an alternation
can be used.
The basic idea is to iterate on the rows produced by
regexp_matches(string, '(.*?)(foo|bar|foobar)'
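As a sketch of what a single iteration step can look like when the match is anchored so the unmatched remainder is captured too:

SELECT m[1] AS before, m[2] AS hit, m[3] AS rest
FROM regexp_matches('junkfoolbarfoolishfoobarmore',
                    '^(.*?)(foo|bar|foobar)(.*)$') AS m;
-- e.g. before='junk', hit='foo', rest='lbarfoolishfoobarmore';
-- a plpgsql loop would re-run the match on "rest" until it no longer matches.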
Shalini wrote:
> Could you also please state the reason why is it happening in case
> of large objects? Because concurrent transactions are very well
> handled for other data types, but the same is not happening for
> lobs. Is it because the former are stored in a toast table and there is
> n
Shalini wrote:
> > Is there a workaround to this concurrency issue without creating a
> > new large object?
The transaction failing with the "Tuple concurrently updated"
error could be resubmitted by the client, as if it was a
serialization failure.
Or the failure could be preve
stan wrote:
> BUT, I went to the directory where I have the extension's source, did a
> make clean ; make ; make install, and the files were still installed in the
> V11 tree. How can I instruct the system to put these in the V12 tree?
With the Debian packaging, /usr/bin/pg_config is a sh
Jeff Lanzarotta wrote:
> I have a question about nondeterministic collations in PostgreSQL 12. I
> have created a new collation that is nondeterministic and created several
> columns which use this collation. Querying these columns works great until
> I use LIKE. When I do, I get the fo
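For context, a sketch of the situation (collation and table names invented); on versions where LIKE cannot use a nondeterministic collation, overriding the collation in the expression is one possible workaround, at the price of losing case-insensitivity for that predicate:

CREATE COLLATION ci (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
CREATE TABLE people (name text COLLATE ci);
SELECT * FROM people WHERE name COLLATE "C" LIKE 'Jef%';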
raylu wrote:
> to_tsvector('simple', 'UVW-789-XYZ') is
> 'uvw':1 '-789':2 'xyz':3
> because -789 is a negative integer. If we turn the query '789-XYZ'
> into the tsquery as before, we get to_tsquery('simple', '789 <-> xyz')
> which doesn't match it.
>
> Are we missing something here? Is t
Christoph Moench-Tegeder wrote:
> And then it doesn't know that your terminal expects UTF-8 (perl
> just dumps the binary string here), because you didn't tell it:
> "binmode(STDOUT, ':encoding(utf8)')" would fix that.
Or use perl -C, so that it gets that from the environment.
From http
Igal Sapir wrote:
> > Out of curiosity is there a reason not to use the citext type for this?
> >
> >
> Using the collation seems like a much cleaner approach, and I trust ICU to
> do a better job at comparing strings according to language rules etc.
One notable difference between citext and
Arnaud L. wrote:
> Anyway, it hung using this syntax during last night's run.
> I'll give it another try tonight just to be sure.
When psql.exe is hanging, maybe you could use a tool like
Process Monitor [1] or Process Explorer [2] to get insights about
what it's stuck on or what it's do
Arnaud L. wrote:
> As a side note, COPY (...) TO STDOUT \g 'somefile' does not work in a
> script file (i.e. it does not work if the command is passed in a file
> via the -f argument).
> The command runs fine, no error is raised either by the client or the
> server, but no file is writt
Thomas Kellerer wrote:
> > To get a case insensitive collation you'd have to use something like
> >
> > LOCALE = 'de-DE-u-ks-level2'
>
> Creating works, but apparently on Windows ICU does not support this.
After installing v12 on windows with the EDB installer, I notice
that it ship
Thomas Kellerer wrote:
> It also works on Windows when I specify "correct" locale names - the above
> seems to be an edge case.
> Is it worth the effort to report that through the bug reporting form?
Sure. Both the crash with 'de-x-icu' and the difference in behavior
between Linux and Win
Marco Ippolito wrote:
> (base) postgres@pc:~$ psql --cluster 11/fabmnet -h localhost
> Password for user postgres:
> psql: FATAL: password authentication failed for user "postgres"
> FATAL: password authentication failed for user "postgres"
Did you set a password for the postgres user i
Cory Nemelka wrote:
> ERROR: 22023: could not create locale "fr_FR.utf8": No such file or
> directory
> DETAIL: The operating system could not find any locale data for the locale
> name "fr_FR.utf8".
> LOCATION: report_newlocale_failure, pg_locale.c:1312
I can reproduce this by creatin
nikhil raj wrote:
> /usr/lib/postgresql/11/bin/psql -p 5432 -U postgres -d $DBLIST -c "GRANT
> CONNECT ON DATABASE \"$DBLIST\" TO cpupdate"
> ERROR: database " CDF_100_1313" does not exist
That's because you don't use the unaligned format to get the
list of results. You should add -A to
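A sketch of the loop with unaligned, tuples-only output so the database names come back without padding:

for DB in $(psql -X -At -p 5432 -U postgres -d postgres \
            -c "SELECT datname FROM pg_database WHERE datallowconn"); do
  /usr/lib/postgresql/11/bin/psql -p 5432 -U postgres -d "$DB" \
    -c "GRANT CONNECT ON DATABASE \"$DB\" TO cpupdate"
done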
Peter J. Holzer wrote:
> 2) Why does psql need to read postgresql.conf, and more specifically,
> why does it care about the location of the data directory? It
> shouldn't access files directly, just talk to the server via the
> socket.
It's not psql itself, it's pg_wrapper.
$ ls -l
Bryn Llewellyn wrote:
> B.t.w., I’m guessing that the “begin” SQL command that you see in the log
> that I mentioned is actually issued by (some) clients—at least psql and
> Python-on-psycopg2—as an explicit call from the client. In other words, it
> isn’t the server that generates this. D
Igor Korot wrote:
> It is not a problem in general, but just curious - is there a more
> generic solution (to get the file from the client)?
With psql:
\lo_import /path/to/file
It creates a large object with the contents of the file from the client
file system and returns its unique ID
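Continuing that sketch: psql stores the new OID in the LASTOID variable, so it can be used right away (table name hypothetical):

\lo_import /path/to/file
INSERT INTO docs(filename, content_oid) VALUES ('file', :LASTOID);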
Paul Malm wrote:
> C:\Users\an\bin>C:\Tmp\psql -h 10.211.43.22 -p 5432 -U postgres -d aatest -w
> -c "\copy public.""Bayern"" FROM 'C:\Users\an\test\Bayern.csv' WITH
> DELIMITER ';' CSV"
> Error: extra data after expected last column
> CONTEXT: COPY Bayern, row 1:
> "48.456;
Prakash Ramakrishnan wrote:
> which code
> will be supported for postgresql 11? Can you please share the link or file.
It appears indeed from the error messages that you're trying
to compile an old version.
The latest version here:
git://git.postgresql.org/git/pldebugger.git
does compil
rihad wrote:
> On 05/03/2019 05:35 PM, Daniel Verite wrote:
> > For non-English text, I would recommend C.UTF-8 over "C" because of
>
> BTW, there's no C.UTF-8 inside pg_collation, and running select
> pg_import_system_collations('pg_catalog
rihad wrote:
> Thanks, I'm a bit confused here. AFAIK indexes are used for at least two
> things: for speed and for skipping the ORDER BY step (since btree
> indexes are already sorted). Will such an "upgrade-immune" C.UTF-8 index
> still work correctly for table lookups?
If the lookup
rihad wrote:
> Thanks for the reply. Do you know what would a "decent" ICU collation be
> to bind to a field's schema definition so it would mimic a UTF-8
> encoding for a multilingual column? Maybe und-x-icu? We aren't as much
> concerned about their sortability in most cases, we just
Hi,
I've noticed this post being currently shared on social media:
https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/cve-2019-9193-authenticated-arbitrary-command-execution-on-postgresql-9-3/
The claim that COPY FROM PROGRAM warrants a CVE seems groundless
because you need to be
Mihalidesová Jana wrote:
> nipjd=> select distinct encode(serializable_value, 'escape') from
> alf_node_properties_zaloha where serializable_value is not null;
>
> encode
> ---
wrote:
> When I save that Excel as a tab delimited text file, I get this:
> rec_no	item1	item2	item3	item4	item5
> 1	Denny's	orange juice	"1,500 yen"	"""Dear John"""	"32""
> TV"
> (As seen when I opened that file with Notepad)
This looks good. Fields are prope
Jean-Marc Lessard wrote:
> Another area where LOB hurts is the storage. LOB are broken and stored in 2K
> pieces.
> Due to the block header, only three 2k pieces fit in an 8k block wasting 25%
> of space (in fact pgstattuple reports ~ 20%).
Yes. bytea stored as TOAST is sliced into pieces
Adrian Klaver wrote:
> > So there's no way it can deal with the contents over 500MB, and the
> > ones just under that limit may also be problematic.
>
> To me that looks like a bug, putting data into a record you cannot get out.
Strictly speaking, it could probably get out with COPY in b
Charles Martin wrote:
> but the second one returned this:
>
> 0 "623140"
> 1 "53"
> 2 "12"
> 3 "10"
> 4 "1"
> 5 "1"
> 7 "1"
> [null] "162"
>
> Not quite sure what that means, but if there is just a small number of
> overly-large records, I might be able to delete them. If I can find the
Charles Martin wrote:
> SELECT max(length(docfilecontents::text)) FROM docfile;
> and after a very long time, got:
> ERROR: invalid memory alloc request size 1636085512 SQL state: XX000
It would mean that at least one row has a "docfilecontents"
close to 0.5GB in size. Or that the size fi
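To measure the sizes without that text expansion, octet_length() on the bytea itself avoids the oversized allocation:

SELECT max(octet_length(docfilecontents)) FROM docfile;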
Charles Martin wrote:
> >So where is the server located relative to the pg_dump client?
> >On the same machine?
> >If so is it a virtual machine e.g AWS?
> >Across a local or remote network?
>
>
> I gave the command in a terminal session after SSHing to the server from
> the same network
Mark Williams wrote:
> Can anyone please shed any light on what more I need to do?
Overall, what you did looks good.
I've never used IIS, but when faced with similar problems with Apache,
ProcessExplorer [1] has been quite effective to figure out which
dependent DLLs were missing, where P
Naveen Dabas wrote:
> Creating sample schema sampledb1
> DBD::Pg::db do failed: ERROR: cannot execute CREATE SCHEMA in a read-only
> transaction at ./pg_sample line 296.
Maybe you ran this on a hot standby? It can't work because
this program needs to write data into the database,
which i
TalGloz wrote:
> Do I have to replace my -shared in the link command with -fPIC?
No, but -fPIC should go into your CXXFLAGS.
The pgxs makefile handles CFLAGS, but as it doesn't do C++,
you're on your own for CXXFLAGS.
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.
TalGloz wrote:
> If yes then the -lseal is added with the $(LDFLAGS) at the end of the
> command.
But it doesn't happen because LDFLAGS is overridden by the
makefile included just after you set it. The relevant part
copy-pasted from your mail:
LDFLAGS = -L$(INCLUDE_SEAL_LIB) -llibseal.
Adrien NAYRAT wrote:
> On 08/08/2018 04:09 PM, czezz wrote:
> > Hi everyone,
> > I want to ask if anyone knows is there a way to search for a specific
> > "value" throughout a list of tables OR all tables in a database?
> >
> > Cheers,
> > czezz
>
> Hello,
>
> Maybe this article by Daniel cou
Alexander Shutyaev wrote:
> Is there any hope the issue with pg_upgrade can be resolved? If not,
> could you give me some hints as to how can I decrease time needed
> for pg_dumpall | psql?
Not sure about the larger problem, but for the part about having more
than 1 million large objects
Andres Freund wrote:
> I'm not entirely clear why pg_restore appears to use a separate
> transaction for each large object, surely exacerbating the problem.
To make sure that per-object locks don't fill up the shared
lock table?
There might be hundreds of thousands of large objects.
If i
Paul Linehan wrote:
> Now, I've installed the icu libraries using the classic ./configure, make
> and sudo make install.
So it got installed under /usr/local
> collationcmds.c:(.text+0xe36): undefined reference to `uloc_getAvailable_61'
> collationcmds.c:(.text+0xe5b): undefined referenc
Alexander Farber wrote:
> Here is such a query for the best player
>
> # SELECT AVG(score) FROM words_moves WHERE uid = 1201;
> avg
> -
> 18.4803525523319868
>
> However I am not sure, how to "marry" the 2 queries?
>
> I have tried to add words_moves through
Đỗ Ngọc Trí Cường wrote:
> I want to export it to a file in JSON format so I run the query as below:
> COPY (SELECT...) TO '/home/postgres/test1.json'
COPY TO applies its own format rules on top of the data, which include
among other things, quoting the newline characters. What you seem
t
Blake McBride wrote:
> data.sql is an untouched dump of a database from PostgreSQL 9.5.12.
>
> The import seems to have worked. I just need another \c after the \i.
An effect of reconnecting is to reset the search_path to what
it was before including your dump file.
You're most certainl
Thiemo Kellner, NHC Barhufpflege wrote:
> > Why you don't create query like
> >
> > EXECUTE 'SELECT xxx FROM TAB WHERE A = $1.x AND B = $1.y' USING NEW;
>
> I shall try. This would be the direct way, but I doubt the placeholder
> $1 can be a record.
It could be written without referin
PegoraroF10 wrote:
> Another approach to solve my problem would be a function that receives a
> dynamic SQL, runs it and returns a XML or JSON and on client side I convert
> that XML back to a recordset. Is that possible ?
Yet another tool that can be handy to transfer polymorphic
results
Magnus Hagander wrote:
> So do you have any suggestions for actually fixing that? Given that we have
> more lists to migrate, if you can figure out a way to make those changes
> without people's filters not matching, we'd be happy to hear it..
I guess it's a bit late at this point, but in