Hi Roger,

Wow, great feedback, thank you!  Comments/answers in-line...

On Sat, Sep 10, 2016 at 2:45 AM, Roger Vandusen <Roger.VanDusen@ticketmaster.com> wrote:

> Hi John, thanks for your quick and detailed offerings below.
>
> First, the funny, in the last few weeks I have actually been playing with
> your github repo's and particularly your contacts repo's predecessor, the
> 'actionable' spring gemfire repo.
>

Nice! Yeah, I decided I wanted to pull out the relevant parts and start
building the Contacts Application, mainly as a 1) Reference Implementation
(RI) for all things *Spring* and GemFire/Geode related (including other
tech), and 2) so I wouldn't have to rebuild examples for all my talks,
every time, :).  I will keep the app up-to-date with the latest
developments and preferred ways of doing things in all the tech I use.

This is just the beginning, but my caching-example, for instance, mashes
up *Google's Map API* and *Geocoding* service to resolve Contact Addresses
by Latitude/Longitude (and the reverse) so they could be displayed on a
Google Map in my eventual UI.  Should be fun; anyway, as you can probably
tell, I'm pretty excited about it. Thank you for having a look at the
actionable-spring-gemfire repo.


> We've been evaluating IMDG's: Ignite (v1.5), Hazelcast (v3.6.3) and
> Gemfire/Geode for a few months now.
> We are looking for a very simple, straightforward IMDG solution to just
> read/write data to a remote distributed durable IN-MEM DATA GRID; my
> team's service instances are the only client.
> Most of our regions will be PARTITIONED, some lookup tables will be
> REPLICATED, all will be PERSISTENT and store PdxInstances.
>

Makes sense.


> Of the three, PIVOTAL Gemfire/Geode was our initial selection after our
> first evaluations, partly because we are a Spring shop already.
> We've used Hazelcast before and we are familiar with some of its
> limitations (open-source ver) and proprietary quirks, and we initially
> found Ignite a little stubborn to configure and a bit heavy-handed as a
> full-blown 'fabric'.
>

Right, I looked at Ignite's *Spring* integration and was like, o.O, more
than a few times.  Hazelcast's and, of course, GemFire/Geode's *Spring*
integration are far more robust/complete.

Naturally, GemFire/Geode has the best integration story with the *Spring*
ecosystem, but I give Hazelcast a lot of credit.  They have stepped up
their *Spring* integration efforts (which include Repositories now) and are
actually a fully compliant JCache (JSR-107) provider, with or without
*Spring*. GemFire/Geode can only be used with the JSR-107 (JCache)
annotations when *Spring* (*Data GemFire*) is on the classpath; GemFire/Geode
is not a true JSR-107 compliant caching provider (largely because it does
not implement the SPI, unlike Hazelcast).
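
For instance, a minimal sketch of positioning GemFire as the caching
provider for the JSR-107 @CacheResult annotation through SDG's
GemfireCacheManager (the "Addresses" cache/Region name and the
GeocodingService are just illustrative; the backing Region still has to be
defined separately, and a peer Cache bean is assumed to exist):

import javax.cache.annotation.CacheResult;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.support.GemfireCacheManager;
import org.springframework.stereotype.Service;

import com.gemstone.gemfire.cache.Cache;

@Configuration
@EnableCaching // enables Spring's caching annotations and the JSR-107 annotations when the JCache API is present
class CachingConfiguration {

  @Bean
  GemfireCacheManager cacheManager(Cache gemfireCache) {
    GemfireCacheManager cacheManager = new GemfireCacheManager();
    cacheManager.setCache(gemfireCache); // GemFire Regions back the named caches
    return cacheManager;
  }
}

@Service
class GeocodingService {

  @CacheResult(cacheName = "Addresses") // JSR-107 annotation; results cached in the /Addresses Region
  public String resolve(String address) {
    return expensiveGeocodeLookup(address);
  }

  String expensiveGeocodeLookup(String address) {
    return "lat/long for " + address;
  }
}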

For a quick overview of where GemFire (and Geode in certain cases) can be
found in the *Spring* ecosystem, have a look at this presentation
<http://www.slideshare.net/john_blum/spring-data-gemfire-overview> [1]
(Slides 15-24).  Actually, the whole deck may help answer a few more of
your questions that we'll get to below.

[1] http://www.slideshare.net/john_blum/spring-data-gemfire-overview


> We've been working with Geode and Spring-Data-Gemfire/Geode, client and
> server configurations, for the last 3 months.
> But here is the sticking point at present with Geode: the maturity,
> stability and it's presently confusing identity and roadmap timetable, this
> would include your 'babies' as well, spring-data-gemfire/geode.
>

Understandable.

You probably know Apache Geode was based on Pivotal GemFire 8.2's codebase,
released to the ASF last April as a newly minted, incubating OSS project.
Since it has the same core as GemFire, it is largely built on the same
foundation (concepts, capabilities, etc.).  GemFire has been around since
2002 (long before Pivotal existed and SpringSource/VMW acquired GemStone).
GemFire has been used at major organizations (financial, transportation,
etc.), so it is a really mature product, proven in some demanding
environments.

However, the GemFire/Geode codebases have forked (significantly) and they
both have different roadmaps.

Lately, a majority of the effort is focused on Geode, all to get Geode to
1.0 GA and to have it graduate as a Top-Level Project (TLP) in the ASF.
Still, don't let the version fool you; it does not reflect the project's
maturity.

A fair amount of new (some of it experimental) work is happening in Geode
within the community (e.g. Off-Heap Memory support, Lucene integration,
OQL query aggregates and so on).

The Pivotal engineers working on Geode are also responsible for GemFire.
The plan is to rebase Pivotal GemFire 9.0 on the Apache Geode codebase so
that they are one and the same.  This will be a huge effort and I do not
have details on the timeline.

Concerning *Spring Data GemFire/Geode*...

In short, *Spring Data Geode* is largely tied to Apache Geode's release
cycle, which is in no way predictable (for the time being).  My major effort
going forward will be to focus on incorporating some of the proposed
(experimental) features in Geode so they can be consumed in a *Spring*
context (in a *Spring* way) with SD Geode.

*Spring Data GemFire*, on the other hand, is actually part of what we call
Release Trains
<https://github.com/spring-projects/spring-data-commons/wiki> [2]
(see the right-hand column/nav links) on the *Spring Data* team.  A Release
Train progresses on an incremental and (fairly) predictable/consistent
schedule, for several reasons...

[2] https://github.com/spring-projects/spring-data-commons/wiki

1. SD GemFire is not solely tied to Pivotal GemFire, but is in fact
integrated and used throughout the larger *Spring* ecosystem (again, Slides
15-24 in the presentation noted above), and so must evolve with the core
*Spring Framework* (e.g. v4.0 -> v5.0), *Spring Boot*, and the other
*Spring* projects in which it takes part, following the major themes going
on across the portfolio.  For example, the big theme for Spring Framework 5
is Reactive support, and the SD modules are taking the Reactive story to
heart, which will be SD 2.0, and so SD GemFire 2.0 (up next after, I
suspect, SDG 1.9).

2. SD GemFire is part of a group of SD modules that participate in a
Release Train, for instance Hopper
<https://github.com/spring-projects/spring-data-commons/wiki/Release-Train-Hopper#participating-modules>
[3], which is current, and Ingalls
<https://github.com/spring-projects/spring-data-commons/wiki/Release-Train-Ingalls>
[4], which is currently in development (i.e. pre GA).  Also pay attention
to the "*core themes*" of a particular Release Train.

Additionally, you can always look at the *Spring Data GemFire* project page
<http://projects.spring.io/spring-data-gemfire/> [5] for currently
supported versions.  See the Spring Data project page
<http://projects.spring.io/spring-data/> [6] for a more complete
(non-store-specific) picture.  The Release Train is largely responsible for
the release cadence, and it is expected that all store leads move their
individual stores in harmony with the train so our users are able to use
any store in a particular Release Train without fussing with dependency
versions, whether that be *Spring* versions or 3rd party dependency
versions.

[3] https://github.com/spring-projects/spring-data-commons/wiki/Release-Train-Hopper#participating-modules
[4] https://github.com/spring-projects/spring-data-commons/wiki/Release-Train-Ingalls
[5] http://projects.spring.io/spring-data-gemfire/
[6] http://projects.spring.io/spring-data/

3. Finally, as the inverse of #1 above, SD GemFire (and SD Geode) builds on
(extends) the *Spring Framework* programming model and concepts (e.g.
Transaction Management, Caching, etc.) along with Spring Data Commons, and
so must incorporate the latest enhancements from those externally rooted
features into SDG, to the benefit of *Spring* users using GemFire/Geode.

Sorry for the long-winded answer, but it is a really complex relationship
to manage/coordinate.


> The documentation is confusing, disjointed and *misleading* as you even
> asserted at the beginning of your reply.
> And it was noted as well in last month's user email thread you contributed
> on as well (Re: Persistence and OQL over cold data).
>

Right.


>
> I can also cite two simple examples from your own references in the reply
> you've provided me, listed at the bottom:
> 1)  [4] http://gemfire.docs.pivotal.io/docs-gemfire/
> latest/developing/region_options/data_hosts_and_accessors.html states:
> "To configure a region so the member is a data accessor, you use
> *configurations* that specify no local data storage for the region."
>
> But nowhere on this page does it specifically identify what those "
> *configurations*" you should use are or a link to them.
>

Yes, very unfortunate.  I have voiced my concerns on this many times: take
nothing for granted; i.e. don't assume our users know what we do.

FYI... to create a Data Accessor (a peer Region on a member in the cluster),
set
<http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionFactory.html#setDataPolicy(com.gemstone.gemfire.cache.DataPolicy)>
[7] the DataPolicy to EMPTY
<http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/DataPolicy.html#EMPTY>
[8].  Alternatively, you can pass RegionShortcut.PARTITION_PROXY
<http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionShortcut.html#PARTITION_PROXY>
[9] (or RegionShortcut.REPLICATE_PROXY
<http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionShortcut.html#REPLICATE_PROXY>
[10]) to the cache.createRegionFactory(:RegionShortcut)
<http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/Cache.html#createRegionFactory(com.gemstone.gemfire.cache.RegionShortcut)>
[11] method (again, this is server-side only, in a peer/data member).  A
minimal sketch follows the reference links below.

[7] http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionFactory.html#setDataPolicy(com.gemstone.gemfire.cache.DataPolicy)
[8] http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/DataPolicy.html#EMPTY
[9] http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionShortcut.html#PARTITION_PROXY
[10] http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/RegionShortcut.html#REPLICATE_PROXY
[11] http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/cache/Cache.html#createRegionFactory(com.gemstone.gemfire.cache.RegionShortcut)
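
Here is that sketch (server/peer side); the Region names are hypothetical
and the member is assumed to be otherwise configured to join the cluster:

import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.DataPolicy;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.RegionShortcut;

public class DataAccessorExample {

  public static void main(String[] args) {

    // Peer Cache on a server/data member in the cluster
    Cache cache = new CacheFactory().set("name", "DataAccessorExample").create();

    // Option 1: RegionShortcut.PARTITION_PROXY creates a Data Accessor for a PARTITION
    // Region (no local storage; the data lives on the other members hosting /Example)
    Region<String, Object> partitionAccessor = cache
        .<String, Object>createRegionFactory(RegionShortcut.PARTITION_PROXY)
        .create("Example");

    // Option 2: DataPolicy.EMPTY creates a Data Accessor for a distributed
    // (e.g. REPLICATE) Region hosted elsewhere in the cluster
    Region<String, Object> replicateAccessor = cache
        .<String, Object>createRegionFactory()
        .setDataPolicy(DataPolicy.EMPTY)
        .create("LookupExample");

    cache.close();
  }
}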


> 2)  [2] https://github.com/apache/incubator-geode/blob/
> rel/v1.0.0-incubating.M3/geode-core/src/main/java/com/
> gemstone/gemfire/internal/cache/GemFireCacheImpl.java#L4953-L4960
> The code referenced in the link (see below) uses *AttributesFactory*
> which is now DEPRECATED in favor of createClientRegionFactory(
> ClientRegionShortcut.PROXY);
> And this is from an INTERNAL implementation class in the latest geode
> release version .M3!!!
> case PROXY: {
> *AttributesFactory* af = new *AttributesFactory*();
> af.setDataPolicy(DataPolicy.EMPTY);
> UserSpecifiedRegionAttributes ra = (UserSpecifiedRegionAttributes) af.
> create();
> ra.requiresPoolName = true;
> c.setRegionAttributes(pra.toString(), ra);
> break;
> }
>
>
>
Yes sir, this is one of the many problems of keeping the documentation
up-to-date.  See my examples directly above; they use the RegionFactory on
the server and the ClientRegionFactory on the client.  A client-side sketch
follows as well.
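
For completeness, a minimal client-side sketch using the current
(non-deprecated) ClientRegionFactory API; the Locator host/port and the
Region name are placeholders:

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;

public class ClientProxyRegionExample {

  public static void main(String[] args) {

    // ClientCache connected to the cluster via a Locator
    ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();

    // PROXY: no local state; all Region operations are forwarded to the server
    Region<String, Object> echo = clientCache
        .<String, Object>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("Echo");

    clientCache.close();
  }
}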



> John you did address the problem I originally was asking clarification on:
> the confusing nature of what Geode/Gemfire considers a PROXY and how it
> behaves, different from using any local cache copy/clone. We do not want
> cloned data in our client application jvm memory, we want a performant
> remote distributed IMDG cluster to always be the canonical source, no
> eventqueues and eventlisteners required. I can now see that even a client
> PROXY region is still a LOCAL EMPTY region instance fronting the server
> backed region data and not acting as a pure PROXY passthrough API.
>

Correct.


>
> The simple use case confusion below, where client proxy region calls to
> region.size() and region.values() can not return the server region
> responses just seems odd, and different than other API's we've used.
>
> Strange that when we make these calls on a client PROXY API that it doesn't
> have a pure PROXY/passthrough implementation to return the server region
> size or values, knowing it's configured to have no local data itself.
>

Understandable, and I would have to agree.


>
> So John, since you responded, let me specifically extend this inquiry
> topic into the realm of your spring-data-gemfire/geode.
> First, sorry I haven't seen your 'geode clubhouse' presentation yet, got
> it bookmarked for reading, and FYI I've gone a round or two with it
> already, seemed friendly, easier to use and config.
>

No worries and thank you for the kind words.


> But I'm re-thinking now that I may not have had it configured for PROXY
> when I tested it a couple months ago and I think now my tests/timings may
> have been misleading as the gets were probably being served from local
> region in-memory and not all the way back to the server nodes. Makes me
> wonder too whether the behavior applies even to PARTITIONED server/data
> regions.
>

No worries; I constantly experiment, making quick changes, and try to
recall things I did much earlier.

PARTITION *Regions* act not only locally, but as a "logical" *Region* as
well.  This means if a request (get(key)) is made on a server with a
PARTITION *Region* and the key/value mapping is not present locally, it
will "hop" to the node that has the value and return it, provided the key
exists.

This is the main reason why single-hop is so important from the client.

A client could be configured for, or talking to, one server in particular
(via a PROXY *Region*) backed by a PARTITION *Region* on the server.  If
the client requests a key that does not exist on that particular server, an
extra hop has to be made to the server/data node with the same PARTITION
*Region* storing that key/value.  With single-hop enabled, the client
avoids this extra hop and goes directly to the server/data node with the
key/value pair it requested (the client achieves this with meta-data sent
down from the Locator about the location of keys on the servers in the
cluster).
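
A minimal client configuration sketch (the Locator host/port are
placeholders); PR single-hop is configured on the client's Pool and is
enabled by default, so it is set explicitly here only to make the setting
visible:

import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;

public class SingleHopClientExample {

  public static void main(String[] args) {

    // Configure the client's default Pool with PR single-hop enabled
    ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .setPoolPRSingleHopEnabled(true)
        .create();

    clientCache.close();
  }
}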


> John, SPRING-DATA-G Q's:
>
> What do you feel is the maturity of spring-data-g at this time?
>

Very mature; SDG has been an active *Spring* project since 2010.  After
becoming the project lead in 2013, I have made significant progress moving
the project forward.  I have fixed countless bugs (here is an incomplete
list
<https://jira.spring.io/browse/SGF-525?jql=project%20%3D%20SGF%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(jblum)>
[12]) and restored people's confidence in the project, which was really
wavering for quite some time given the quality issues and behavioral
disconnects when using SDG with GemFire.

It was a big effort and one that I continue to stay committed to going
forward.  It is my pleasure to continue doing so and I am very proud of the
work to-date.  But honestly, there is still much work to do.  Not so much
bug fixing or aligning it with GemFire (anything like that), but really
simplifying the OOTB experience of getting GemFire up and running as
quickly, easily and reliably as possible.  That is my major theme for SDG
1.9, and the new Annotation configuration model I am currently working on
is my answer.
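
As a rough sketch of the direction (assuming SDG 1.9 M1+ and *Spring Boot*
on the classpath; the member name here is hypothetical), the annotation
model boils a CacheServer with an embedded Locator down to a few
annotations:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;
import org.springframework.data.gemfire.config.annotation.EnableLocator;

// Bootstraps an embedded GemFire/Geode peer Cache, a CacheServer and a Locator
@SpringBootApplication
@CacheServerApplication(name = "SimpleCacheServer")
@EnableLocator
public class SimpleCacheServerApplication {

  public static void main(String[] args) {
    SpringApplication.run(SimpleCacheServerApplication.class, args);
  }
}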

I also really need to clean up and beef up the documentation/examples.  I
want to blog more, particularly on getting started, from the beginning,
using the new Annotation config model, and building up to the more advanced
use cases and customer scenarios I have had the privilege to be privy to
(this is part of the reason I am working on the Contacts Application RI, to
showcase some of this stuff in action; nothing like... "seeing is
believing").

[12] https://jira.spring.io/browse/SGF-525?jql=project%20%3D%20SGF%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(jblum)

> Roadmap milestones ahead?
>

SDG has been an OSS project from day 1, so all the activity is always
visible, to anyone, in SDG's JIRA project
<https://jira.spring.io/browse/SGF/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel>
[13], where anyone can influence and shape it.  The high-level Roadmap is
here
<https://jira.spring.io/browse/SGF/?selectedTab=com.atlassian.jira.jira-projects-plugin:roadmap-panel>
[14], and the current open tickets/ideas are here
<https://jira.spring.io/issues/?jql=project%20%3D%20SGF%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Waiting%20for%20Feedback%22%2C%20Investigating%2C%20%22Waiting%20for%20Review%22)>
[15].

Having said this, I always welcome and appreciate any feedback or help from
the community, either by way of a JIRA ticket or a PR
<https://github.com/spring-projects/spring-data-gemfire/pulls> [16].

[13] https://jira.spring.io/browse/SGF/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
[14] https://jira.spring.io/browse/SGF/?selectedTab=com.atlassian.jira.jira-projects-plugin:roadmap-panel
[15] https://jira.spring.io/issues/?jql=project%20%3D%20SGF%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Waiting%20for%20Feedback%22%2C%20Investigating%2C%20%22Waiting%20for%20Review%22)
[16] https://github.com/spring-projects/spring-data-gemfire/pulls



> Is the use of gemfireTemplate an improvement on any of this behavior or
> will it behave the same?
>

What the GemfireTemplate gives you is...

1. Simple, convenient and familiar Data Access API (CRUD, Querying)
consistent with the *Spring Framework's* use of the Template pattern (e.g.
JdbcTemplate using callbacks, etc).

2. Protects developers from changes in the underlying GemFire/Geode API.

3. Exception Translation of GemFire/Geode Exceptions into the *Spring* DAO
Exception Hierarchy
<http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#dao-exceptions>
 [17].

4. Integration with *Spring's* Transaction Management Infrastructure so
data access operations are appropriately wrapped and handled in
transactions when TX boundaries have been crossed (as demarcated in your
application (service) components).

I covered this a few times in some of my past presentations; here is one in
particular
<http://www.slideshare.net/john_blum/building-effective-apache-geode-applications-with-spring-data-gemfire>
[18] (Slide 21).  However, the GemfireTemplate does not specifically handle
the nuances of a *Region's* DataPolicy, or things of that nature (as
discussed above), since the template uses and wraps
<https://github.com/spring-projects/spring-data-gemfire/blob/1.8.2.RELEASE/src/main/java/org/springframework/data/gemfire/GemfireTemplate.java#L75>
[19] the *Region* API to carry out its function.  A small sketch follows
the reference links below.

[17] http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#dao-exceptions
[18] http://www.slideshare.net/john_blum/building-effective-apache-geode-applications-with-spring-data-gemfire
[19] https://github.com/spring-projects/spring-data-gemfire/blob/1.8.2.RELEASE/src/main/java/org/springframework/data/gemfire/GemfireTemplate.java#L75
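
Here is that sketch: a minimal DAO built on the GemfireTemplate (the
/Contacts Region, the key/value types and the OQL query are hypothetical;
the Region would normally be defined and injected by SDG configuration):

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.query.SelectResults;

import org.springframework.data.gemfire.GemfireTemplate;

// Minimal DAO sketch wrapping a "/Contacts" Region with the GemfireTemplate
public class ContactDao {

  private final GemfireTemplate template;

  public ContactDao(Region<Long, String> contacts) {
    this.template = new GemfireTemplate(contacts);
  }

  public void save(Long id, String contact) {
    template.put(id, contact);  // wraps Region.put(..); exceptions are translated to Spring's DAO hierarchy
  }

  public String findById(Long id) {
    return template.get(id);    // wraps Region.get(..)
  }

  public SelectResults<String> findByValue(String value) {
    // OQL executed through the template; $1 binds the query parameter
    return template.find("SELECT * FROM /Contacts c WHERE c = $1", value);
  }
}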


> What do you feel are the advantages of wrapping the client with a
> spring-data wrapper?
>

There are several really powerful abstractions and good reasons to use SD
to wrap, well, any store, really...

1. First and foremost, the SDC's *Repository* abstraction
<http://docs.spring.io/spring-data/commons/docs/current/reference/html/#repositories>
[20]
and individual store (e.g. SDG
<http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#gemfire-repositories>
[21])
support.

How cool is it to define a Java interface and have a store-specific Data
Access Object generated for you, able to perform CRUD, Querying,
Functional/Procedural executions (for some stores), Paging and Sorting
(some stores), Projections (again, some stores), custom handling and easy
extensibility where needed?  This by itself is a huge time saver (see the
sketch after the reference links below).  Also, see some recent work
<https://github.com/spring-projects/spring-boot/pull/6224> [24] I did in
*Spring Boot* to simplify configuration of SDG Repositories.

2. (SDG Specific) the Function execution/implementation Annotation support
<http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#function-annotations>
 [22].

3. Transaction Management
<http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#apis:tx-mgmt>
[23],
whether that be Local (cache) or Global (JTA) transactions (involving
multiple data sources).

My repository-example
<https://github.com/jxblum/actionable-spring-gemfire/tree/master/repository-example>
[23] demonstrates both, among other things, like querying a collocated
PARTITION *Region* (to find all "customers" with "contact" information in
order to generate leads), which must be done inside a GemFire Function, so
I take the opportunity to also showcase the SDG Function Annotation support
used in a Repository extension (pretty slick).

4. Service component operation Caching
<http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#apis:spring-cache-abstraction>
[25] using
*Spring's* Cache Abstraction along with SDG to position GemFire/Geode as a
caching provider, even using the client/server topology.

5. I have implemented an adapter for *Spring Session*
<http://docs.spring.io/spring-session/docs/1.2.2.RELEASE/reference/html5/#httpsession-gemfire>
[26]
to use GemFire/Geode as a distributed/clustered HttpSession caching
solution.

There are so many ways that *Spring* and SDG can be used on the client and
I would hope that users always consider SDG to simplify their application
interactions with GemFire or Geode, especially *Spring* users.

[20] http://docs.spring.io/spring-data/commons/docs/current/reference/html/#repositories
[21] http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#gemfire-repositories
[22] http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#function-annotations
[23] https://github.com/jxblum/actionable-spring-gemfire/tree/master/repository-example
[24] https://github.com/spring-projects/spring-boot/pull/6224
[25] http://docs.spring.io/spring-data-gemfire/docs/current/reference/html/#apis:spring-cache-abstraction
[26] http://docs.spring.io/spring-session/docs/1.2.2.RELEASE/reference/html5/#httpsession-gemfire
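
Here is the Repository sketch mentioned in #1 above (the Contact domain
type, the /Contacts Region and the query method are hypothetical;
repositories are enabled with @EnableGemfireRepositories):

import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.Region;
import org.springframework.data.gemfire.repository.GemfireRepository;

// Hypothetical domain type mapped to the "/Contacts" Region
@Region("Contacts")
class Contact {

  @Id
  Long id;

  String name;
}

// Spring Data generates the implementation at runtime; the derived query
// method is translated into OQL against the /Contacts Region
interface ContactRepository extends GemfireRepository<Contact, Long> {

  List<Contact> findByName(String name);
}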


> One issue we experience often using spring wrapper projects, esp. on
> highly 'pivotal' projects and techs (geode/kafka) is that it grows our tech
> stack deeper to add spring wrapper projects and makes it harder to update
> and then there are the version/feature synchronization issues between core
> tech and wrapper project which it adds, with bugs at both levels. But these
> are generic 'Spring' project problems.
>

One way to help alleviate this problem is to use (a particular version of)
*Spring Boot*.  It always curates and includes a list of harmonized,
well-known dependencies
<https://github.com/spring-projects/spring-boot/blob/v1.4.0.RELEASE/spring-boot-dependencies/pom.xml#L45-L184>
[28] (*Spring* and 3rd party dependencies alike, used by *Spring* projects)
that have been exhaustively tested to work together seamlessly.  This
effort builds on the Spring IO Platform
<http://platform.spring.io/platform/> [27], which was created to solve this
very problem.

[27] http://platform.spring.io/platform/
[28] https://github.com/spring-projects/spring-boot/blob/v1.4.0.RELEASE/spring-boot-dependencies/pom.xml#L45-L184

> Can using SDG gemfireTemplate offer significant enough advantages over core
> geode api?
>

I think so.  But I would even recommend/encourage you to consider
higher-level abstractions like *Repositories*.  The SDG Repository
infrastructure extension even leverages the GemfireTemplate under the hood.

In general...

*Spring* always gives you the flexibility to change underlying
providers/implementations for all aspects/concerns of your application,
from data access/stores to application runtimes (e.g. Tomcat, Jetty, Netty,
even Java EE Servers for that matter) without unduly coupling you to the
underlying platforms/APIs.

I have always preferred to build and use an abstraction layer in my
enterprise application design (even before my journey with VMW/Pivotal),
and I think that with the advent of Microservices and Cloud (Native)
Computing this will only become more important... i.e. having a framework
to consume or swap out services in a consistent way (with the same facade)
while minimizing the impact (changes) to the applications using those
services (think of leveraging Netflix's experience with their APIs, but
then seamlessly consuming similar services in AWS without a single change).



> John, BTW, one SDG issue we had was wanting to use your latest SDG
> springbootapplication annotations:
> @ClientCacheApplication
> @CacheServerApplication
> @EnableCacheServers
> @EnableLocator
> We would ideally want to be able to dynamically config these annotation's
> attributes, but, as is, it is not possible. Ideas?
>

Yep, that is something I plan to address.  In certain places, you can
already use *Spring* "property placeholders" for the String-based
attributes (some of them, of course, not all of them).

In short, it currently depends largely on whether the Annotation attribute
translates into a "property" on some *Spring* bean that gets
registered/declared (by the SDG framework for you when using the
Annotations) and processed by the *Spring* *ApplicationContext*.  It is
tricky to explain, which is why some attributes work and some do not.

But effectively, I plan to make all attributes Strings so they can be
externally configured using *Spring* property placeholders or SpEL.
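
For example, something along these lines (a rough sketch; whether a given
attribute, like name or logLevel, resolves a placeholder depends on the
version, and the property names are just illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;

// String-based annotation attributes resolved from external configuration via property placeholders
@SpringBootApplication
@ClientCacheApplication(name = "${example.app.name:ContactsClient}",
    logLevel = "${example.gemfire.log-level:config}")
public class ContactsClientApplication {

  public static void main(String[] args) {
    SpringApplication.run(ContactsClientApplication.class, args);
  }
}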

The effort is still largely a WIP (it began with SDG 1.9 M1), but I
announced it to the world largely to garner feedback as I develop the new
features leading up to 1.9 GA.

All my progress to-date, which now exceeds my demos, will be included in
the next *Spring Data Geode* 1.0.0.APACHE-GEODE-INCUBATING-M3 release,
which will be out soon.


> John, thanks again for your time and attention.
>

Always, and you're welcome.


> We need to get comfortable and confident using Geode/Spring-Data-Geode and
> validate our expectations soon or move on to an alternative.
>
>
Understandable.  I always believe in picking the right tool for the job,
never compromising your engineering objectives/requirements in order to fit
the limitations of any technology.

Having said that, I also believe certain technologies introduce new
programming paradigms (e.g. Reactive, functional programming) or new ways
of solving problems that require some fundamental shifts in our approaches
and our expectations, or at least in how we think about solving a problem.
We can no longer limit ourselves to certain designs.

Consider In-Memory Computing and Data Grids (w.r.t. the JVM Heap), or Cloud
[Native] Computing, building Microservices and adopting a PaaS solution,
and the patterns needed for consistency, high availability, reliability,
and so on.  All of these things will challenge us and require us to think
differently about how problems are solved.

So, just know that you are in good hands with Pivotal and with the *Spring*
engineering community in particular (speaking from profound experience),
along with the comfort of knowing *Spring* and GemFire are under the same
roof.  I am biased, of course, but I don't get a commission if you
ultimately decide to go one way or another.

Ultimately, I just want to make our users' experience with our projects
(especially, our OSS *Spring* tech, where my focus lies) the best out there
(second to none) when building applications.  Users should be having fun
creating things, building things, not battling technical hurdles.

Cheers and good fortune.


-Roger
>
>
>
> From: John Blum <jb...@pivotal.io>
> Reply-To: "user@geode.incubator.apache.org" <user@geode.incubator.apache.
> org>
> Date: Friday, September 9, 2016 at 7:18 PM
> To: "user@geode.incubator.apache.org" <user@geode.incubator.apache.org>
> Subject: Re: Why am I getting a LocalRegion when I should be getting
> ProxyRegion?
>
> Hi Roger-
>
> See comments in-line...
>
> On Fri, Sep 9, 2016 at 5:25 PM, Roger Vandusen <
> roger.vandu...@ticketmaster.com> wrote:
>
>> Using latest version .M3.
>>
>> clientCache = new ClientCacheFactory()
>>
>>     .addPoolLocator( getLocatorUrl(), getLocatorPort() )
>>     .set( "log-level", getLogLevel() )
>>     .create();
>>
>> ClientRegionFactory<String, PdxInstance> clientRegionFactory =
>>
>>     getClientCache()
>>         .<String, 
>> PdxInstance>createClientRegionFactory(ClientRegionShortcut.PROXY);
>>
>> Region<String, PdxInstance> region = 
>> clientRegionFactory.create(regionName.getName());
>>
>> The problem: Region is a instance of internal LocalRegion not ProxyRegion?
>>
>>
> Why do you think this is a problem?
>
> Technically, it has more to do with a *Region's* DataPolicy
> <http://geode.incubator.apache.org/releases/latest/javadoc/com/gemstone/gemfire/cache/DataPolicy.html>
>  [1]
> than the actual class type of the (client) *Region's *implementation
> (which can be misleading as you have just discovered).  See here
> <https://github.com/apache/incubator-geode/blob/rel/v1.0.0-incubating.M3/geode-core/src/main/java/com/gemstone/gemfire/internal/cache/GemFireCacheImpl.java#L4953-L4960>
>  [2],
> for instance.
>
> In fact, you would not even be able to define a client "PROXY
> <http://geode.incubator.apache.org/releases/latest/javadoc/com/gemstone/gemfire/cache/client/ClientRegionShortcut.html#PROXY>"
> [3] *Region* and perform *Region* operations (e.g. gets/puts) if the
> corresponding *Region* (by *name*) did not exist on the GemFire Server to
> which the cache client (application) is connected.  I.e. GemFire would
> throw an error...
>
> com.gemstone.gemfire.cache.client.*ServerOperationException*: remote
> server on 
> 172.28.128.1(GeodeClientApplication:16387:loner):63975:b155a811:GeodeClientApplication:
> *While performing a remote get*
> at com.gemstone.gemfire.cache.client.internal.AbstractOp.
> processObjResponse(AbstractOp.java:293)
> at com.gemstone.gemfire.cache.client.internal.GetOp$
> GetOpImpl.processResponse(GetOp.java:152)
> at com.gemstone.gemfire.cache.client.internal.AbstractOp.
> attemptReadResponse(AbstractOp.java:175)
> at com.gemstone.gemfire.cache.client.internal.AbstractOp.
> attempt(AbstractOp.java:378)
> at com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(
> ConnectionImpl.java:274)
> at com.gemstone.gemfire.cache.client.internal.pooling.
> PooledConnection.execute(PooledConnection.java:328)
> at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.
> executeWithPossibleReAuthentication(OpExecutorImpl.java:937)
> at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(
> OpExecutorImpl.java:155)
> at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(
> OpExecutorImpl.java:110)
> at com.gemstone.gemfire.cache.client.internal.PoolImpl.
> execute(PoolImpl.java:700)
> at com.gemstone.gemfire.cache.client.internal.GetOp.execute(GetOp.java:97)
> at com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.get(
> *ServerRegionProxy*.java:112)
> at com.gemstone.gemfire.internal.cache.LocalRegion.findObjectInSystem(
> LocalRegion.java:2919)
> at com.gemstone.gemfire.internal.cache.LocalRegion.
> nonTxnFindObject(LocalRegion.java:1539)
> at com.gemstone.gemfire.internal.cache.LocalRegionDataView.findObject(
> LocalRegionDataView.java:155)
> at com.gemstone.gemfire.internal.cache.LocalRegion.get(
> LocalRegion.java:1411)
> at com.gemstone.gemfire.internal.cache.LocalRegion.get(
> LocalRegion.java:1347)
> at com.gemstone.gemfire.internal.cache.LocalRegion.get(*LocalRegion*
> .java:1329)
> at com.gemstone.gemfire.internal.cache.AbstractRegion.get(*AbstractRegion*
> .java:282)
> at example.app.geode.client.GeodeClientApplication.sendEchoRequest(
> GeodeClientApplication.java:138)
> at example.app.geode.client.GeodeClientApplication.run(
> GeodeClientApplication.java:87)
> at example.app.geode.client.GeodeClientApplication.run(
> GeodeClientApplication.java:76)
> at example.app.geode.client.GeodeClientApplication.run(
> GeodeClientApplication.java:57)
> at example.app.geode.client.GeodeClientApplication.main(
> GeodeClientApplication.java:48)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> Caused by: com.gemstone.gemfire.cache.*RegionDestroyedException*: Server
> connection from [identity(172.28.128.1(GeodeClientApplication:16387:
> loner):63975:b155a811:GeodeClientApplication,connection=1; port=63975]: 
> *Region
> named /Echo/Echo was not found during get request*
> at com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.
> writeRegionDestroyedEx(BaseCommand.java:642)
> at com.gemstone.gemfire.internal.cache.tier.sockets.command.
> Get70.cmdExecute(Get70.java:153)
> at com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.execute(
> BaseCommand.java:146)
> at com.gemstone.gemfire.internal.cache.tier.sockets.
> ServerConnection.doNormalMsg(ServerConnection.java:783)
> at com.gemstone.gemfire.internal.cache.tier.sockets.
> ServerConnection.doOneMessage(ServerConnection.java:913)
> at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.run(
> ServerConnection.java:1180)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at com.gemstone.gemfire.internal.cache.tier.sockets.AcceptorImpl$1$1.run(
> AcceptorImpl.java:555)
> at java.lang.Thread.run(Thread.java:745)
>
>
> This error occurs when I try to do echoRegion.get("key") and the *Region*
> does not exist on the server.  No error is reported if I do NOT use the
> client PROXY *Region*, even if it does not exist on the sever when
> created on the client.
>
> Only when I attempt to "use" the Region in a particular way (e.g. data
> access) does something happen... the client tries to communicate with the
> server based on the pool settings.
>
> Also, not all *Region* operations (e.g. isEmpty()/size()) cause a server
> operation to occur.  In the case of isEmpty()/size(), these are "locally"
> based operations and only have values when the local *Region* stores
> data, whether on client as a CACHING_PROXY or on a peer as a
> Local-only/Non-Distributed Region, a REPLICATE or PARTITION Region and so
> on.
>
> For instance, it is even possible to define a server-side *Region* that
> is only a "Data Accessor
> <http://gemfire.docs.pivotal.io/docs-gemfire/latest/developing/region_options/data_hosts_and_accessors.html>"
> [4] but no local state.  In the "Data Accessor" *Region* case, I believe
> isEmpty() would return *true* and size() would return *0* even though the
> Data Accessor would refer to a peer Region where other data nodes in the
> cluster would actually maintain state.
>
> A PARTITION Region is another good example of a peer (server-side)
> *Region* where the size() would not necessarily reflect the number of
> entries in the "logical" Region since the PARTITION Region's data is
> distributed (i.e. "partitioned"/"sharded") across the cluster.
>
> When the server region has data, client side region.size() returns 0 and 
> region.values() returns empty.
>>
>>
> This is actually an indication that indeed your (client) *Region* is a
> PROXY.  As the Javadoc
> <http://geode.incubator.apache.org/releases/latest/javadoc/com/gemstone/gemfire/cache/client/ClientRegionShortcut.html#PROXY>
>  [3]
> points out, "*A PROXY region has no local state and forwards all
> operations to a server.*"
>
> Also what is the value of "regionName.getName()" in you setup?  Where is "
> regionName" coming from?
>
>> What is wrong here that I can't access my server region from the defined 
>> client proxy region?
>>
>>
> How do you mean?  What Region "operations" on the client have you tried?
>
> By way example, I have a GeodeServerApplication
> <https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/geode/server/GeodeServerApplication.java>
>  [5]
> and a GeodeClientApplication
> <https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/geode/client/GeodeClientApplication.java>
>  [6] you can run.
>
> Play around with un/commenting the creation
> <https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/geode/server/GeodeServerApplication.java#L133>
>  [7]
> of the "/Echo" PARTITION Region on the server and executing or
> un/commenting the following lines
> <https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/geode/client/GeodeClientApplication.java#L90-L92>
>  [8]
> (client PROXY Region data access ops (i.e. get)) in the cache client
> application.
>
> You will witness when the Exception I noted above occurs and does not.
> For instance, when line 133 in the server application is commented out
> (thus preventing the creation of the /Echo PARTITION *Region*) and I have
> lines 91-93 commented on the client (even though the client still creates
> the corresponding /Echo PROXY Region), so long as I do not perform the
> *Region* ops in lines 91-93, no Exception occurs.  If I uncomment lines
> 91-93 in the client before allowing the creation of the /Echo *Region* on
> line 133 in the server, I get the error.  But when the *Region* exists on
> the server, no problem.
>
> In all cases, the client /Echo PROXY Region isEmpty() will be * true* and
> size() will be *0*, even after the corresponding *Region* (data access)
> ops have been performed, as my assertions
> <https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/geode/client/GeodeClientApplication.java#L94-L95>
>  [9]
> indicate.
>
> However, that does not mean the corresponding server * Region* does not
> have any state...
>
> gfsh>connect
> Connecting to Locator at [host=localhost, port=10334] ..
> Connecting to Manager at [host=172.28.128.1, port=1099] ..
> Successfully connected to: [host=172.28.128.1, port=1099]
>
> gfsh>list members
>          Name          | Id
> ---------------------- | ------------------------------
> -------------------------
> GeodeServerApplication | 172.28.128.1(GeodeServerApplication:16732)<
> ec><v0>:1024
>
> gfsh>list regions
> List of regions
> ---------------
> *Echo*
>
> gfsh>describe region --name=/Echo
> .........................................................................
> Name            : *Echo*
> Data Policy     : partition
> Hosting Members : GeodeServerApplication
>
> Non-Default Attributes Shared By Hosting Members
>
>
>  Type  |     Name     | Value
> ------ | ------------ | ----------------------------------------------
> Region | data-policy  | PARTITION
> *       | size         | 3*
>        | cache-loader | example.app.geode.cache.loader.EchoCacheLoader
>
>
> -Roger
>>
>>
>>
>>
> Hope this helps!
>
> Cheers,
> -John
>
> [1] http://geode.incubator.apache.org/releases/latest/
> javadoc/com/gemstone/gemfire/cache/DataPolicy.html
> [2] https://github.com/apache/incubator-geode/blob/rel/v1.0.
> 0-incubating.M3/geode-core/src/main/java/com/gemstone/
> gemfire/internal/cache/GemFireCacheImpl.java#L4953-L4960
> [3] http://geode.incubator.apache.org/releases/latest/
> javadoc/com/gemstone/gemfire/cache/client/ClientRegionShortcut.html#PROXY
> [4] http://gemfire.docs.pivotal.io/docs-gemfire/latest/developing/region_
> options/data_hosts_and_accessors.html
> [5] https://github.com/jxblum/contacts-application/blob/
> master/configuration-example/src/main/java/example/app/geode/server/
> GeodeServerApplication.java
> [6] https://github.com/jxblum/contacts-application/blob/
> master/configuration-example/src/main/java/example/app/geode/client/
> GeodeClientApplication.java
> [7] https://github.com/jxblum/contacts-application/blob/
> master/configuration-example/src/main/java/example/app/geode/server/
> GeodeServerApplication.java#L133
> [8] https://github.com/jxblum/contacts-application/blob/
> master/configuration-example/src/main/java/example/app/geode/client/
> GeodeClientApplication.java#L91-L93
> [9] https://github.com/jxblum/contacts-application/blob/
> master/configuration-example/src/main/java/example/app/geode/client/
> GeodeClientApplication.java#L94-L95
>
>


-- 
-John
503-504-8657
john.blum10101 (skype)
