Sunburned Surveyor wrote:
Hi,
> I wonder if Deegree or Kosmo is using a streaming Shapefile parser?
deegree has two different shape file readers, and IIRC none of those
can do streaming. It should not be very difficult, though, to adapt
one of them to use streaming.
However, SHP is a very simple format.
Martin,
If you could provide the code to Paul and me, it would be greatly
appreciated. I will post it on the SurveyOS SourceForge SVN
repository, unless someone thinks it would be more appropriate to
host it on the JPP repository.
Thank you very much for your help.
The Sunburned Surveyor
On 8/30/07, Martin Davis wrote:
Larry Becker wrote:
> I guess I hadn't actually realized it wasn't streaming until now. Do you
> remember what happened to the code, or why you didn't stay with the
> streaming version? A very large shape file seems like a more likely
> scenario that I actually care about.
>
> Larry
>
>
Yes, the co
That would be totally awesome!
SS
On 8/30/07, Paul Austin <[EMAIL PROTECTED]> wrote:
> Landon,
>
> I have one that is based on an iterator approach to reading; I could
> adapt it to work with the new DataObject implementation.
>
> Paul
>
> --
Landon,
I have one that is based on an iterator approach to reading; I could
adapt it to work with the new DataObject implementation.
Paul
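
For anyone curious what the iterator approach can look like in practice, here is a minimal sketch (mine, not Paul's code) that streams Point records straight off a .shp input stream instead of loading the whole file; the class name and the raw double[] return type are just illustrative choices:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Streams Point records (shape type 1) from a .shp stream one at a
// time; only the current record is ever held in memory.
public class ShpPointIterator implements Iterator<double[]> {

    private final DataInputStream in;
    private double[] next;

    public ShpPointIterator(InputStream stream) throws IOException {
        in = new DataInputStream(stream);
        in.readFully(new byte[100]); // skip the fixed 100-byte file header
        advance();
    }

    private void advance() throws IOException {
        try {
            in.readInt(); // record number (big-endian)
            in.readInt(); // content length in 16-bit words (big-endian)
            // The record body is little-endian, so each value read through
            // the big-endian DataInputStream has its bytes reversed.
            int shapeType = Integer.reverseBytes(in.readInt());
            if (shapeType != 1) { // this sketch handles Point shapes only
                next = null;
                return;
            }
            double x = Double.longBitsToDouble(Long.reverseBytes(in.readLong()));
            double y = Double.longBitsToDouble(Long.reverseBytes(in.readLong()));
            next = new double[] { x, y };
        } catch (EOFException e) {
            next = null; // clean end of file
        }
    }

    public boolean hasNext() {
        return next != null;
    }

    public double[] next() {
        if (next == null) throw new NoSuchElementException();
        double[] current = next;
        try {
            advance();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return current;
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }
}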
Yup. GeoTools is a nightmare to use and understand. It makes me wonder
how they ever built something as functional as UDig on top of it.
However, the alternative was to write a new Shapefile parser from
scratch, and that seemed like an awful lot of work. I have no doubt I
could do it, but it is pr
Yes, I have looked at the various newer versions of the GeoTools
Shapefile readers. They seemed to be too complicated to be a drop-in
replacement for our older GeoTools version (which also has the
capability to randomly read, but that is useless in OJ's
implementation).
Larry
On 8/30/07, Sunburned Surveyor wrote:
Larry,
I don't know if it is of interest to you, but GeoTools has a streaming
ESRI Shapefile parser. They actually have the ability to randomly
access an indexed Shapefile. I plan on using their Shapefile code in
my FeatureCache. The idea is to use the GeoTools code to access one
Feature at a time.
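
The usage pattern looks roughly like the sketch below; GeoTools class and method names shifted between releases, so treat the exact imports and signatures here as assumptions rather than a recipe:

import java.io.File;
import org.geotools.data.FeatureSource;
import org.geotools.data.shapefile.ShapefileDataStore;
import org.geotools.feature.FeatureIterator;

// Stream features from a shapefile one at a time through GeoTools.
public class GeoToolsStreamingSketch {
    public static void main(String[] args) throws Exception {
        ShapefileDataStore store = new ShapefileDataStore(
                new File("parcels.shp").toURI().toURL());
        FeatureSource source =
                store.getFeatureSource(store.getTypeNames()[0]);
        FeatureIterator it = source.getFeatures().features();
        try {
            while (it.hasNext()) {
                Object feature = it.next();
                // Process one Feature here, then let it be collected
                // before the next one is read.
            }
        } finally {
            it.close(); // releases the underlying file channel
        }
    }
}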
> At one point I actually rewrote the Shapefile parser to be streaming as well...
I guess I hadn't actually realized it wasn't streaming until now. Do you
remember what happened to the code, or why you didn't stay with the
streaming version? A very large shape file seems like a more likely
scenario that I actually care about.
Larry
Hi,
I would prefer not to need one data exchange format for small
datasets and another for big ones :)
I suppose that it is just this high memory consumption during parsing that is
limiting the size of WFS requests in OpenJUMP. The memory is freed once the
parsing is done, but the peak consumption while parsing is what sets the limit.
This would be especially important if you had multiple
FeatureCollections stored in one file, and you wanted to load only one
of them.
Sunburned Surveyor wrote:
> Paul is correct. The pull parser does not reduce the memory of the
> parsing results, but it does reduce the memory used during the parsing
> process.
I agree with both Paul and Larry!
Another good reason for having a pull parser (or in general a streaming
parser) is that it increases its reusability for clients which can
operate in a streaming fashion. This isn't OJ (at least currently), but
I have other applications for which this was a necessity.
Roger that, Larry. I was just mentioning a possible scenario in which a
pull parser might be an advantage over a SAX/DOM parser.
SS
On 8/30/07, Larry Becker <[EMAIL PROTECTED]> wrote:
> True, if you have the case of one very large GML layer for your whole
> map, but this is far from normal GIS.
>
True, if you have the case of one very large GML layer for your whole
map, but this is far from normal GIS.
Larry
On 8/30/07, Sunburned Surveyor <[EMAIL PROTECTED]> wrote:
> Paul is correct. The pull parser does not reduce the memory of the
> parsing results, but it does reduce the memory used during the parsing
> process.
Paul is correct. The pull parser does not reduce the memory of the
parsing results, but it does reduce the memory used during the parsing
process. That is because an in-memory representation of the entire XML
document is not constructed.
One advantage of this is using the parser to select only the data you
actually need out of the document.
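
To illustrate that point (a hypothetical sketch, not code from OJ): a StAX-style pull loop only ever holds the current parse event, so you can walk an arbitrarily large GML file and pick out just the elements you care about:

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// Walk a GML file event by event; only the current event is in memory,
// so the file size no longer dictates the parser's memory footprint.
public class GmlPullCount {
    public static void main(String[] args) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("layer.gml"));
        int featureMembers = 0;
        while (reader.hasNext()) {
            // Pull the next event; ignore everything except the start
            // tags of the elements we actually want.
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "featureMember".equals(reader.getLocalName())) {
                featureMembers++;
            }
        }
        reader.close();
        System.out.println(featureMembers + " feature members");
    }
}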
Hi Larry,
You are correct that the resulting data set will take up a lot of memory
at the end; the advantage with the pull parser is that you don't take up
a whole bunch of extra memory for the XML DOM structures which typically
get loaded into memory for the whole document. So with the pull parser,
the only real memory cost while reading is the data you decide to keep.
It isn't the parser that takes up the memory (except temporarily), but
the memory-resident dataset after loading. This will still limit the
size.
Larry
On 8/30/07, Sunburned Surveyor <[EMAIL PROTECTED]> wrote:
> Yup. It makes you wonder why they didn't use pull parsers from the
> very beginning, doesn't it?
Yup. It makes you wonder why they didn't use pull parsers from the
very beginning, doesn't it?
SS
On 8/30/07, Paul Austin <[EMAIL PROTECTED]> wrote:
> Agreed, the pull parser is the only way to go for large XML files.
>
> Paul
>
> Sunburned Surveyor wrote:
> > Martin,
> >
> > If we decide to support a restricted form of GML 2 we could build our
> > reader and writer on top of the XML Pull Parser from Sun. This would
> > help us to avoid memory problems when reading in large files.
Agreed, the pull parser is the only way to go for large XML files.
Paul
Sunburned Surveyor wrote:
> Martin,
>
> If we decide to support a restricted form of GML 2 we could build our
> reader and writer on top of the XML Pull Parser from Sun. This would
> help us to avoid memory problems when reading in large files.
Martin,
If we decide to support a restricted form of GML 2 we could build our
reader and writer on top of the XML Pull Parser from Sun. This would
help us to avoid memory problems when reading in large files.
https://sjsxp.dev.java.net/
Just a thought.
The Sunburned Surveyor
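
Just to sketch how that might look with the StAX pull-parsing API that SJSXP implements (the file name, element choice, and overall shape are illustrative assumptions, not a worked-out design):

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// Illustrative sketch of a restricted GML 2 reader: pull one
// gml:coordinates string out of the stream at a time, so memory use
// stays flat no matter how large the input file is.
public class RestrictedGml2Reader {
    public static void main(String[] args) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("big.gml"));
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "coordinates".equals(reader.getLocalName())) {
                // GML 2 coordinates look like "x1,y1 x2,y2 ..."; hand the
                // string to a geometry builder here instead of storing it.
                String coords = reader.getElementText();
                System.out.println(coords);
            }
        }
        reader.close();
    }
}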