some questions (/bug?) about commons-vfs2 make me confused.

2020-01-19 Thread Xeno Amess
I'm trying to migrate to commons-vfs2 now, and I found several things
that are not quite right / surprising.

1.
I tested versions 2.6.0 and 2.5.0, and it fails right at
VFS.getManager() (of course, with no additional configuration).

It said class not found:
org.apache.commons.vfs2.provider.webdav.WebdavFileProvider

I looked into the binary jars I got from Maven Central (2.6.0):
they really do not contain the class WebdavFileProvider
(the package org.apache.commons.vfs2.provider.webdav is not there at all).

After downgrading to 2.3 (I wonder why it is 2.3 rather than 2.3.0, but
that is not important), it runs fine and never reports a missing class again.
I don't want to try 2.4.0; the connection here is really bad (I'm in a village now).
All I get is:
2.6.0, broken.
2.5.0, broken.
2.3, fine.

According to the file on GitHub, it might be deprecated, so I wonder
whether you already deprecated it and simply forgot about it?

By the way, according to your web page https://commons.apache.org/proper/commons-vfs/
2.6.0 does not even exist,
yet there is a 2.6.0 on Maven Central.
That really confuses me.

2.
To use commons-vfs2 I had to downgrade slf4j from 2.0.0-alpha to 1.7.30.
We all know slf4j does not care much about backward compatibility,
and migrating its code is painful.
Even so, is there any plan to use reflection or something similar
so that vfs2 CAN also work with slf4j 2.0?

3.
For some reason I need to deal with relative file paths.
Is there any guide to using relative file paths in vfs2?

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: some questions (/bug?) about commons-vfs2 make me confused.

2020-01-19 Thread Xeno Amess
Right now I'm using something like the following to deal with relative
files, but I suspect there is a more elegant way...

fileSystemManager =
        new org.apache.commons.vfs2.impl.StandardFileSystemManager();
fileSystemManager.setLogger(null);
try {
    fileSystemManager.init();
    fileSystemManager.setBaseFile(new File(""));
} catch (FileSystemException e) {
    e.printStackTrace();
}





Re: [VOTE] Release Apache Commons CSV 1.8 based on RC1

2020-01-19 Thread Alex Herbert
Hi Gary,

I raised a few niggles a while back with CSV and the discussion did not receive 
a response on how to proceed.

There is the major bug CSV-248 where the CSVRecord is not Serializable [1]. 
This requires a decision on what to do to fix it. This bug is still present in 
1.8 RC1 as found by FindBugs [2].

From what I can see the CSVRecord maintains a reference to the CSVParser. This 
chain of objects maintained in memory is not serializable and leads back to the 
original input Reader.

I can see from the JApiCmp report that the serial version id was changed for 
CSVRecord this release so there is still an intention to support serialization. 
So this should be a blocker.

I could not find a serialisation test in the unit tests for CSVRecord. This 
quick test added to CSVRecordTest fails:


@Test
public void testSerialization() throws IOException {
    CSVRecord shortRec;
    try (final CSVParser parser = CSVParser.parse("a,b", CSVFormat.newFormat(','))) {
        shortRec = parser.iterator().next();
    }
    final ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
        oos.writeObject(shortRec);
    }
}

mvn test -Dtest=CSVRecordTest

[ERROR] testSerialization  Time elapsed: 0.032 s  <<< ERROR!
java.io.NotSerializableException: org.apache.commons.csv.CSVParser
at 
org.apache.commons.csv.CSVRecordTest.testSerialization(CSVRecordTest.java:235)

If I mark the field csvParser as transient it passes. So this is a problem as 
raised by FindBugs.
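The effect of the transient fix can be shown with a stdlib-only sketch; the Parser and Record classes below are illustrative stand-ins, not the real commons-csv classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stands in for CSVParser: not Serializable, since it holds the input Reader.
class Parser {
}

// Stands in for CSVRecord: Serializable, but it references the parser.
class Record implements Serializable {
    private static final long serialVersionUID = 1L;

    // Without 'transient' this field makes writeObject throw
    // java.io.NotSerializableException: Parser. Marked transient, the
    // field is skipped and restored as null on deserialization.
    final transient Parser parser;
    final String value;

    Record(Parser parser, String value) {
        this.parser = parser;
        this.value = value;
    }

    static byte[] serialize(Record rec) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
            oos.writeObject(rec);
        }
        return out.toByteArray();
    }
}
```

Removing the transient modifier makes serialize() fail exactly the way the FindBugs report describes.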



I also raised [3] the strange implementation of the CSVParser getHeaderNames() 
which ignores null headers as they cannot be used as a key into the map. 
However the list of column names could contain the null values. This test 
currently fails:

@Test
public void testHeaderNamesWithNull() throws IOException {
    final Reader in = new StringReader("header1,null,header3\n1,2,3\n4,5,6");
    final Iterator<CSVRecord> records = CSVFormat.DEFAULT.withHeader()
            .withNullString("null")
            .withAllowMissingColumnNames()
            .parse(in).iterator();
    final CSVRecord record = records.next();
    assertEquals(Arrays.asList("header1", null, "header3"),
            record.getParser().getHeaderNames());
}

I am not saying it should pass but at least the documentation should state the 
behaviour in this edge case. That is the list of header names may be shorter 
than the number of columns when the parser is configured to allow null headers. 
I’ve not raised a bug ticket for this as it is open to opinion if this is by 
design or actually a bug. This issue is still present in 1.8 RC1.

Previously I suggested documentation changes for this and another edge case 
using the header map to be added to the javadoc for getHeaderNames() and 
getHeaderMap():

- Documentation:

The mapping is only guaranteed to be a one-to-one mapping if the record was 
created with a format that does not allow duplicate or null header names. Null 
headers are excluded from the map and duplicates can only map to 1 column.


- Bug / Documentation

The CSVParser only stores header names in the list of header names if they are 
not null. So the list can be shorter than the number of columns if you use a 
format that allows empty headers and the data contains null column names.


The ultimate result is that we should document that the purpose of the header 
names is to provide a list of non-null header names in the order they occur in 
the header and thus represent keys that can be used in the header map. In 
certain circumstances there may be more columns in the data than there are 
header names.


Alex


[1] https://issues.apache.org/jira/browse/CSV-248 


[2] 
https://dist.apache.org/repos/dist/dev/commons/csv/1.8-RC1/site/findbugs.html 


[3] https://markmail.org/message/woti2iymecosihx6 




> On 18 Jan 2020, at 17:52, Gary Gregory  wrote:
> 
> We have fixed quite a few bugs and added some significant enhancements
> since Apache Commons CSV 1.7 was released, so I would like to release
> Apache Commons CSV 1.8.
> 
> Apache Commons CSV 1.8 RC1 is available for review here:
>https://dist.apache.org/repos/dist/dev/commons/csv/1.8-RC1 (svn
> revision 37670)
> 
> The Git tag commons-csv-1.8-RC1 commit for this RC is
> c1c8b32809df295423fc897eae0e8b22bfadfe27 which you can browse here:
> 
> https://gitbox.apache.org/repos/asf?p=commons-csv.git;a=commit;h=c1c8b32809df295423fc897eae0e8b22bfadfe27
> You may checkout this tag using:
>git clone https://gitbox.apache.org/repos/asf/commons-csv.git --branch
> commons-csv-1.8-RC1 commons-csv-1.8-RC1
> 
> Maven artifacts are here:
> 
> https://repository.apache.org/content/repo

Re: some questions (/bug?) about commons-vfs2 make me confused.

2020-01-19 Thread Rob Spoor
The class was there in release 2.4.1: 
https://github.com/apache/commons-vfs/blob/rel/commons-vfs-2.4.1/commons-vfs2/src/main/java/org/apache/commons/vfs2/provider/webdav/WebdavFileProvider.java. 
In the next release, 2.5.0, it can indeed no longer be found. A bit of 
investigating showed that the webdav classes got moved to a new 
artifact: 
https://github.com/apache/commons-vfs/commit/42ff473acbb5363b88f5ab3c5fddbae7b206c1d2


That means you can still use it, you just need to include an extra 
dependency: 
https://mvnrepository.com/artifact/org.apache.commons/commons-vfs2-jackrabbit1/2.6.0


There's apparently also a Jackrabbit 2 version available: 
https://mvnrepository.com/artifact/org.apache.commons/commons-vfs2-jackrabbit2/2.6.0
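For example, with Maven the extra dependency would look something like this (coordinates taken from the links above):

```xml
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-vfs2-jackrabbit1</artifactId>
  <version>2.6.0</version>
</dependency>
```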








Re: [geometry] Rename Transform to AffineTransform

2020-01-19 Thread Gilles Sadowski
Hi.

On Sat 18 Jan 2020 at 23:14, Matt Juntunen wrote:
>
> Gilles,
>
> >> There, we can simply sample the user-defined function
> > I'm not sure I understand.
>
> Just an implementation detail. We need to pass some sample points through the 
> user-defined function in order to construct an equivalent matrix.
>
> > Throwing an exception if the transform does not abide by
> > the requirements?
>
> Yes.
>
> I just submitted a PR on Github with these changes. I also realized that the 
> EuclideanTransform class as discussed exactly matches the definition of an 
> affine transform so I renamed it to AffineTransform. No other names were 
> changed.

I had a (quick) look; is it necessary to split functionality among "Transform"
(in "core") and its subinterfaces/classes in other modules?  IOW, if "Transform"
can only be affine, it looks strange to have "AffineTransform" (re)defined.

I'm also a bit puzzled by the "AbstractAffineTransformMatrix" that seems to
only contain convenience methods for internal use (whereas having them
"protected" puts them in the public API).

Regards,
Gilles
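As an aside, the sampling approach mentioned in the quoted discussion below (build an affine matrix by passing sample points through the user-defined function) can be sketched in 2D with plain double arrays; this is a hedged illustration, not the commons-geometry implementation:

```java
import java.util.function.UnaryOperator;

// Sketch only: sample the origin and the two basis vectors to recover the
// translation and the linear part of an (assumed affine) 2D operator.
final class AffineFromSampling {

    // Returns {m00, m01, m10, m11, tx, ty}.
    static double[] sample2D(UnaryOperator<double[]> f) {
        double[] t = f.apply(new double[] {0, 0});   // translation = image of the origin
        double[] e1 = f.apply(new double[] {1, 0});
        double[] e2 = f.apply(new double[] {0, 1});
        // Columns of the linear part are f(e_i) - f(0).
        return new double[] {
            e1[0] - t[0], e2[0] - t[0],
            e1[1] - t[1], e2[1] - t[1],
            t[0], t[1]
        };
        // An extra sample point could then be checked against this matrix,
        // throwing an exception if the operator is not actually affine.
    }

    // Applies the matrix form: result = M * p + t.
    static double[] apply(double[] m, double[] p) {
        return new double[] {
            m[0] * p[0] + m[1] * p[1] + m[4],
            m[2] * p[0] + m[3] * p[1] + m[5]
        };
    }
}
```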

>
> -Matt
> 
> From: Gilles Sadowski 
> Sent: Saturday, January 18, 2020 1:40 PM
> To: Commons Developers List 
> Subject: Re: [geometry] Rename Transform to AffineTransform
>
> Hello.
>
> 2020-01-18 15:40 UTC+01:00, Matt Juntunen :
> > Gilles,
> >
> >> If the "Transform" is intimately related to the "core" and there is a
> >> single
> >> set of properties that make it "affine" (and work correctly), I'd tend to
> >> keep the name "Transform".
> >
> > So, if I'm understanding you correctly, you're saying that since the
> > partitioning code in the library only works with these types of
> > parallelism-preserving transforms, it can be safely assumed that
> > o.a.c.geometry.core.Transform represents such a transform. Is this correct?
>
> Indeed.
>
> > One thing that's causing some issues with the implementation here is that
> > the Euclidean TransformXD interfaces have static "from(UnaryOperator)"
> > methods that allow users to wrap their own, arbitrary vector operations as
> > Transform instances. We don't (and really can't) do any validation on these
> > user-defined functions to ensure that they meet the library requirements. It
> > is therefore easy for users to pass in invalid operators. To avoid this, I'm
> > thinking of removing the TransformXD interfaces completely and moving the
> > "from(UnaryOperator)" methods into the AffineTransformMatrixXD classes.
>
> +1
> It is generally good to prevent the creation of invalid objects.
>
> > There, we can simply sample the user-defined function
>
> I'm not sure I understand.
>
> > as needed and produce
> > matrices that are guaranteed to be affine.
>
> Throwing an exception if the transform does not abide by
> the requirements?
>
> > Following the above, the class hierarchy would then be as below, which is
> > basically what it was before I added the TransformXD interfaces.
> >
> > commons-geometry-core
> >Transform
> >
> > commons-geometry-euclidean
> > EuclideanTransform extends Transform
> > AffineTransformMatrixXD implements EuclideanTransform
> > Rotation3D extends EuclideanTransform
> > QuaternionRotation implements Rotation3D
> >
> > commons-geometry-spherical
> > Transform1S implements Transform
> > Transform2S implements Transform
> >
> > WDYT?
>
> +1
>
> Best,
> Gilles
>
> >
> > -Matt
> >
> >
> > 
> > From: Gilles Sadowski 
> > Sent: Monday, January 13, 2020 8:03 PM
> > To: Commons Developers List 
> > Subject: Re: [geometry] Rename Transform to AffineTransform
> >
> > Hi.
> >
> >> On Mon 13 Jan 2020 at 04:39, Matt Juntunen wrote:
> >>
> >> Gilles,
> >>
> >> > How about keeping "Transform" for the interface name and define a method
> >> > ... boolean isAffine();
> >>
> >> I would prefer to have separate types for each kind of transform.
> >> This would make the API clear and would avoid numerous checks in the code
> >> in order to see if a particular transform instance is supported. The
> >> transform types also generally have an "is-a" relationship with each
> >> other, which seems like a perfect fit for inheritance. [1]
> >>
> >> > I don't get that it is an "accuracy" issue. If some requirement is not
> >> > met,
> >> results will be plain wrong
> >>
> >> Yes, you are correct. I was not very clear in what I wrote. The results
> >> will be completely unusable if the transform does not meet the
> >> requirements.
> >>
> >> > I wonder why the documented requirement that an "inverse transform
> >> must exist" does not translate into a method ... getInverse();
> >>
> >> Good point. All current implementations are able to provide an inverse so
> >> that method should be present on the interface.
> >>
> >> In regard to renaming the Transform interface, I had another idea. The
> >> main purpose of that interface is to provide a way for the partitioning
> >> code in the core mo

Re: some questions (/bug?) about commons-vfs2 make me confused.

2020-01-19 Thread Xeno Amess
The key point is that even if I do not want to use it, I must have this
class, or VFS.getManager() can never run.

IMO this kind of class relationship means that the project holding this
class must be added to vfs's pom as a dependency, or else class VFS
should be moved into that project as well.

Otherwise we should not let VFS.getManager() rely on this class.

Thanks for finding this class, though.

By the way, I tested 2.4; 2.4 is correct.





Re: some questions (/bug?) about commons-vfs2 make me confused.

2020-01-19 Thread Xeno Amess
OK, I found where the bug is.
I will fix it and add a test so that this never happens again.





Re: some questions (/bug?) about commons-vfs2 make me confused.

2020-01-19 Thread Rob Spoor
It seems that when the webdav support was moved to a separate artifact, 
the developers forgot to update file 
commons-vfs2/src/main/resources/org/apache/commons/vfs2/impl/providers.xml. 
This file is used by StandardFileSystemManager to load the default 
providers.


I think this warrants a fix, to move the webdav provider from this 
default providers.xml file to file 
commons-vfs2-jackrabbit1/src/main/resources/META-INF/vfs-providers.xml, 
and create the same file with the correct providers for the 
commons-vfs2-jackrabbit2 module.
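A sketch of what such a META-INF/vfs-providers.xml entry could look like, following the format of the existing providers.xml (treat the exact element layout as an assumption, not the final fix):

```xml
<providers>
  <provider class-name="org.apache.commons.vfs2.provider.webdav.WebdavFileProvider">
    <scheme name="webdav"/>
  </provider>
</providers>
```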









Re: some questions (/bug?) about commons-vfs2 make me confused.

2020-01-19 Thread Xeno Amess
Yep, that makes sense.
But I'd rather add a class check for the provider class itself.
There is already a mechanism that checks whether all classes needed by a
provider exist, and skips the provider if they do not.
I will add a similar mechanism that checks whether the provider class
itself exists, and skips the provider if it does not.
Pull request here:
https://github.com/apache/commons-vfs/pull/78
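A minimal sketch of that kind of check (a hypothetical helper, not the actual code from the pull request): load the class reflectively by name and report the provider as unavailable when it cannot be found.

```java
// Hypothetical availability check, not the code from PR #78.
final class ClassAvailability {

    static boolean isClassAvailable(String className) {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        if (loader == null) {
            loader = ClassAvailability.class.getClassLoader();
        }
        try {
            // 'initialize = false' avoids running static initializers just for the check.
            Class.forName(className, false, loader);
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            // A missing class (or a missing class it depends on) means the
            // provider cannot be used, so it should simply not be registered.
            return false;
        }
    }
}
```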

Rob Spoor  于2020年1月20日周一 上午12:13写道:
>
> It seems that when the webdav support was moved to a separate artifact,
> the developers forgot to update file
> commons-vfs2/src/main/resources/org/apache/commons/vfs2/impl/providers.xml.
> This file is used by StandardFileSystemManager to load the default
> providers.
>
> I think this warrants a fix, to move the webdav provider from this
> default providers.xml file to file
> commons-vfs2-jackrabbit1/src/main/resources/META-INF/vfs-providers.xml,
> and create the same file with the correct providers for the
> commons-vfs2-jackrabbit2 module.
>
>
> On 19/01/2020 16:57, Xeno Amess wrote:
> > OK I get where is bugged.
> > I will fix it and add a test for that never happen again.
> >
> > Xeno Amess  于2020年1月19日周日 下午11:21写道:
> >>
> >> The key point is even if I do not wanna use it I must have this
> >> class,or VFS.getManager() can never run.
> >>
> >> IMO this type of class relationship cause the project where hold this
> >> class must be added into vfs's pom as a dependency, or just move class
> >> VFS into that project aswell.
> >>
> >> Otherwise we should not let the VFS.getManager() rely on this class.
> >>
> >> Thanks for finding this class though.
> >>
> >> btw I tested 2.4, 2.4 is correct.
> >>
> >> Rob Spoor  于2020年1月19日周日 下午10:00写道:
> >>>
> >>> The class was there in release 2.4.1:
> >>> https://github.com/apache/commons-vfs/blob/rel/commons-vfs-2.4.1/commons-vfs2/src/main/java/org/apache/commons/vfs2/provider/webdav/WebdavFileProvider.java.
> >>> In the next release, 2.5.0, it can indeed no longer be found. A bit of
> >>> investigating showed that the webdav classes got moved to a new
> >>> artifact:
> >>> https://github.com/apache/commons-vfs/commit/42ff473acbb5363b88f5ab3c5fddbae7b206c1d2
> >>>
> >>> That means you can still use it, you just need to include an extra
> >>> dependency:
> >>> https://mvnrepository.com/artifact/org.apache.commons/commons-vfs2-jackrabbit1/2.6.0
> >>>
> >>> There's apparently also a Jackrabbit 2 version available:
> >>> https://mvnrepository.com/artifact/org.apache.commons/commons-vfs2-jackrabbit2/2.6.0
> >>>
> >>>
> >>> On 19/01/2020 11:24, Xeno Amess wrote:
> Right now I'm using something like this to deal with relative files,
> but I think there might be a more elegant way...
>
> fileSystemManager = new org.apache.commons.vfs2.impl.StandardFileSystemManager();
> fileSystemManager.setLogger(null);
> try {
>     fileSystemManager.init();
>     fileSystemManager.setBaseFile(new File(""));
> } catch (FileSystemException e) {
>     e.printStackTrace();
> }
> 
> Xeno Amess wrote on Sunday, January 19, 2020 at 6:08 PM:
> >
> > I'm trying to migrate to commons-vfs2 now.
> > Several things I found were not quite right / surprising.
> >
> > 1.
> > I tested versions 2.6.0 and 2.5.0, and I fail right at
> > VFS.getManager() (of course, with no additional configuration or
> > anything).
> >
> > It says class not found:
> > org.apache.commons.vfs2.provider.webdav.WebdavFileProvider
> >
> > I looked into the binary jars I got from Maven Central (2.6.0):
> > they really do not contain the class WebdavFileProvider
> > (I could not even find the package org.apache.commons.vfs2.provider.webdav).
> >
> > After I downgraded to 2.3 (I really wonder why it is 2.3 and not
> > 2.3.0, but that is not important), it runs now (and never reports
> > class not found again).
> > I don't want to try 2.4.0. Really bad connection here (I'm in a
> > village now).
> > All I get is:
> > 2.6.0, broken.
> > 2.5.0, broken.
> > 2.3, fine.
> >
> > According to the file on GitHub, it might be deprecated, so I wonder
> > if you already deprecated it and just forgot to remove the reference?
> >
> > By the way, according to your web page
> > https://commons.apache.org/proper/commons-vfs/
> > 2.6.0 does not even exist,
> > but there is a 2.6.0 in Maven Central.
> > That really confuses me.
> >
> > 2.
> > To use commons-vfs2 I had to downgrade slf4j from 2.0.0-alpha to 1.7.30.
> > We all know slf4j's author does not care much about backward
> > compatibility, and his code is hard to migrate.
> > Even so, is there any plan to use reflection or something similar so
> > that vfs2 can work with slf4j 2.0?
> >
> > 3.
> > For some reason I need to deal with relative file paths.
> > Is there any guide about using relative file paths in vfs2?

Re: [VOTE] Release Apache Commons CSV 1.8 based on RC1

2020-01-19 Thread sebb
What is the use case for needing serialisation?
It's a lot of effort to maintain a serialisable class, and it opens
the class to deserialisation attacks.

On Sun, 19 Jan 2020 at 12:39, Alex Herbert  wrote:
>
> Hi Gary,
>
> I raised a few niggles a while back with CSV and the discussion did not 
> receive a response on how to proceed.
>
> There is the major bug CSV-248 where the CSVRecord is not Serializable [1]. 
> This requires a decision on what to do to fix it. This bug is still present 
> in 1.8 RC1 as found by FindBugs [2].
>
> From what I can see the CSVRecord maintains a reference to the CSVParser. 
> This chain of objects maintained in memory is not serializable and leads back 
> to the original input Reader.
>
> I can see from the JApiCmp report that the serial version id was changed for 
> CSVRecord this release so there is still an intention to support 
> serialization. So this should be a blocker.
>
> I could not find a serialisation test in the unit tests for CSVRecord. This 
> quick test added to CSVRecordTest fails:
>
>
> @Test
> public void testSerialization() throws IOException {
>     CSVRecord shortRec;
>     try (final CSVParser parser = CSVParser.parse("a,b", CSVFormat.newFormat(','))) {
>         shortRec = parser.iterator().next();
>     }
>     final ByteArrayOutputStream out = new ByteArrayOutputStream();
>     try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
>         oos.writeObject(shortRec);
>     }
> }
>
> mvn test -Dtest=CSVRecordTest
>
> [ERROR] testSerialization  Time elapsed: 0.032 s  <<< ERROR!
> java.io.NotSerializableException: org.apache.commons.csv.CSVParser
> at 
> org.apache.commons.csv.CSVRecordTest.testSerialization(CSVRecordTest.java:235)
>
> If I mark the field csvParser as transient it passes. So this is a problem as 
> raised by FindBugs.
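The following is a self-contained sketch of the same failure mode, using stand-in classes rather than the real CSVRecord/CSVParser: a Serializable holder that references a non-serializable object serializes cleanly only once that field is marked transient.

```java
import java.io.*;

// Stand-in classes (NOT the real commons-csv types) illustrating the
// FindBugs finding: a Serializable record that holds a reference to a
// non-serializable parser cannot be serialized unless the field is
// transient.
public class TransientDemo {
    // Plays the role of CSVParser: not Serializable.
    static class Parser { }

    // Plays the role of CSVRecord.
    static class Record implements Serializable {
        private static final long serialVersionUID = 1L;
        final String[] values;
        // Removing 'transient' here makes serialize() throw
        // java.io.NotSerializableException for Parser.
        transient Parser parser;

        Record(String[] values, Parser parser) {
            this.values = values;
            this.parser = parser;
        }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
            oos.writeObject(o);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        Record rec = new Record(new String[] {"a", "b"}, new Parser());
        byte[] bytes = serialize(rec); // succeeds; the parser field is skipped
        System.out.println("serialized " + bytes.length + " bytes");
    }
}
```

Note that after deserialization the transient field comes back null, which is exactly the trade-off being debated: a transient csvParser makes the record serializable but drops the parser reference.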
>
>
>
> I also raised [3] the strange implementation of the CSVParser 
> getHeaderNames() which ignores null headers as they cannot be used as a key 
> into the map. However the list of column names could contain the null values. 
> This test currently fails:
>
> @Test
> public void testHeaderNamesWithNull() throws IOException {
>     final Reader in = new StringReader("header1,null,header3\n1,2,3\n4,5,6");
>     final Iterator records = CSVFormat.DEFAULT.withHeader()
>             .withNullString("null")
>             .withAllowMissingColumnNames()
>             .parse(in).iterator();
>     final CSVRecord record = records.next();
>     assertEquals(Arrays.asList("header1", null, "header3"),
>             record.getParser().getHeaderNames());
> }
>
> I am not saying it should pass but at least the documentation should state 
> the behaviour in this edge case. That is the list of header names may be 
> shorter than the number of columns when the parser is configured to allow 
> null headers. I’ve not raised a bug ticket for this as it is open to opinion 
> if this is by design or actually a bug. This issue is still present in 1.8 
> RC1.
>
> Previously I suggested documentation changes for this and another edge case 
> using the header map to be added to the javadoc for getHeaderNames() and 
> getHeaderMap():
>
> - Documentation:
>
> The mapping is only guaranteed to be a one-to-one mapping if the record was 
> created with a format that does not allow duplicate or null header names. 
> Null headers are excluded from the map and duplicates can only map to 1 
> column.
>
>
> - Bug / Documentation
>
> The CSVParser only stores header names in the list of header names if they are 
> not null. So the list can be shorter than the number of columns if you use a 
> format that allows empty headers and contains null column names.
>
>
> The ultimate result is that we should document that the purpose of the header 
> names is to provide a list of non-null header names in the order they occur 
> in the header and thus represent keys that can be used in the header map. In 
> certain circumstances there may be more columns in the data than there are 
> header names.
>
>
> Alex
>
>
> [1] https://issues.apache.org/jira/browse/CSV-248
>
> [2] https://dist.apache.org/repos/dist/dev/commons/csv/1.8-RC1/site/findbugs.html
>
> [3] https://markmail.org/message/woti2iymecosihx6
>
>
>
> > On 18 Jan 2020, at 17:52, Gary Gregory  wrote:
> >
> > We have fixed quite a few bugs and added some significant enhancements
> > since Apache Commons CSV 1.7 was released, so I would like to release
> > Apache Commons CSV 1.8.
> >
> > Apache Commons CSV 1.8 RC1 is available for review here:
> >https://dist.apache.org/repos/dist/dev/commons/csv/1.8-RC1 (svn
> > revision 37670)
> >
> > The Git tag commons-csv-1.8-RC1 commit 

[VOTE][CANCEL] Release Apache Commons CSV 1.8 based on RC1

2020-01-19 Thread Gary Gregory
I am canceling this RC so we can deal with these issues.

Gary


On Sun, Jan 19, 2020 at 7:39 AM Alex Herbert 
wrote:

>
> > On 18 Jan 2020, at 17:52, Gary Gregory  wrote:
> >
> > We have fixed quite a few bugs and added some significant enhancements
> > since Apache Commons CSV 1.7 was released, so I would like to release
> > Apache Commons CSV 1.8.
> >
> > Apache Commons CSV 1.8 RC1 is available for review here:
> >https://dist.apache.org/repos/dist/dev/commons/csv/1.8-RC1 (svn
> > revision 37670)
> >
> > The Git tag commons-csv-1.8-RC1 commit for this RC is
> > c1c8b32809df295423fc897eae0e8b22bfadfe27 which you can browse here:
> >
> >
> https://gitbox.apache.org/repos/asf?p=commons-csv.git;a=commit;h=c1c8b32809df295423fc897eae0e8b22bfadfe27
> > You may checkout this tag using:
> >git clone https://gitbox.apache.org/re

Re: [VOTE] Release Apache Commons CSV 1.8 based on RC1

2020-01-19 Thread Alex Herbert

> On 20 Jan 2020, at 00:54, sebb  wrote:
> 
> What is the use case for needing serialisation?
> It's a lot of effort to maintain a serialisable class, and it opens
> the class to deserialisation attacks.

I don’t have a use case. But the class used to support serialization back to 
the code tagged as CSV_1.0. Putting out new releases that do not support it is 
breaking binary compatibility.

1.7 was the first to break compatibility. The live site reports it as such [1]. 

I will state that I voted +1 on release 1.7. Somehow the issue was missed then 
and it has bugged me ever since.


[1] https://commons.apache.org/proper/commons-csv/findbugs.html 





[math]New feature MiniBatchKMeansClusterer

2020-01-19 Thread CT
Hi,
  In my picture-search project, I need a clustering algorithm to narrow 
down the dataset, to accelerate searching over millions of pictures.
  First we used python+pytorch+kmeans; as the data grew from thousands 
to millions, KMeans clustering became slower and slower (seconds to 
minutes). Then we found that MiniBatchKMeans could finish the clustering 
in 1-2 seconds on millions of data points, which was amazing.
  Meanwhile we still faced the insufficient concurrency of Python, so we 
switched to Kotlin on the JVM.
  But there was no MiniBatchKMeans algorithm on the JVM yet, so I wrote 
one in Kotlin, referring to the (Python) sklearn MiniBatchKMeans and 
Apache Commons Math (Deeplearning4j was also considered, but it is too 
slow because of ND4J's design).


  I'd like to contribute it to Apache Commons Math, and I wrote a Java 
version: https://github.com/chentao106/commons-math/tree/feature-MiniBatchKMeans


  From my tests (Kotlin version), it is very fast, but it gives slightly 
different results from KMeans++ in most cases, and sometimes very 
different results (maybe affected by the randomness of the mini-batch).

Some bad cases (result screenshots omitted):

It gets even worse when I use RandomSource.create(RandomSource.MT_64, 0) 
as the random generator ┐(´-`)┌.


My brief understanding of MiniBatchKMeans:
Use a subset of the points to initialize the cluster centers, then use a 
random mini-batch in each training iteration.
It can finish in a few seconds when clustering millions of data points, 
and its results differ little from those of KMeans.
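To make that description concrete, here is a minimal stand-alone Java sketch of the algorithm. The method names and hyper-parameters are mine, not the proposed Commons Math API; it follows the per-center learning rate (1/count) update from the Sculley paper referenced below.

```java
import java.util.Random;

// Minimal mini-batch k-means sketch: initialize centers from random data
// points, then repeatedly draw random mini-batches and move each point's
// nearest center toward the point with a per-center learning rate 1/count.
public class MiniBatchKMeans {

    public static double[][] cluster(double[][] data, int k, int batchSize,
                                     int iterations, long seed) {
        Random rnd = new Random(seed);
        int dim = data[0].length;
        double[][] centers = new double[k][];
        // Initialize centers from randomly chosen data points.
        for (int i = 0; i < k; i++) {
            centers[i] = data[rnd.nextInt(data.length)].clone();
        }
        int[] counts = new int[k];
        for (int iter = 0; iter < iterations; iter++) {
            // One random mini-batch per iteration.
            for (int b = 0; b < batchSize; b++) {
                double[] x = data[rnd.nextInt(data.length)];
                int c = nearest(centers, x);
                counts[c]++;
                double eta = 1.0 / counts[c]; // per-center learning rate
                for (int d = 0; d < dim; d++) {
                    centers[c][d] += eta * (x[d] - centers[c][d]);
                }
            }
        }
        return centers;
    }

    // Index of the center closest to x (squared Euclidean distance).
    static int nearest(double[][] centers, double[] x) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < centers.length; i++) {
            double dist = 0;
            for (int d = 0; d < x.length; d++) {
                double diff = centers[i][d] - x[d];
                dist += diff * diff;
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Two well-separated 1-D blobs around 0 and 10.
        Random rnd = new Random(42);
        double[][] data = new double[200][1];
        for (int i = 0; i < data.length; i++) {
            data[i][0] = (i % 2 == 0 ? 0.0 : 10.0) + rnd.nextGaussian() * 0.1;
        }
        double[][] centers = cluster(data, 2, 32, 50, 1L);
        System.out.printf("centers: %.2f %.2f%n", centers[0][0], centers[1][0]);
    }
}
```

The random choice of initial centers and of each mini-batch is exactly the source of the run-to-run variation described above: unlike full-batch KMeans, the result depends on which points happen to be sampled.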


More information about MiniBatchKMeans
  https://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf
  
https://scikit-learn.org/stable/modules/generated/sklearn.cluster.MiniBatchKMeans.html