You should be able to include a canned flow.xml.gz in your
container; just have nothing under the root group.
On Mon, Feb 26, 2018 at 3:50 PM, Matt Gilman wrote:
> Daniel,
>
> Unfortunately, there is no way to set this currently. This is ultimately a
> lifecycle issue. The UUID of the ro
Hello,
Your custom processor would be the same as if you were writing an
external client program.
You would need to provide the processor with a username and password
in the processor properties, and then it would need to make a call to
the token REST end-point.
Processors don't run as the user
Making a call to "/process-groups/root" should retrieve the root
process group which should then have an id element.
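For illustration, here is a rough Python sketch of those two calls made from an external client. The base URL is a placeholder and the "id" field name in the response entity is an assumption, so treat this as a starting point rather than a verified client:

```python
import json
import urllib.request

BASE = "https://nifi.example.com:8443/nifi-api"  # placeholder host

def get_token(username, password):
    # POST form-encoded credentials to the token endpoint; the response
    # body is the JWT to use as a Bearer token on later calls
    data = f"username={username}&password={password}".encode()
    req = urllib.request.Request(f"{BASE}/access/token", data=data)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def extract_root_group_id(entity_json):
    # Pull the id out of the process group entity (field name assumed)
    return json.loads(entity_json)["id"]

def get_root_group_id(token):
    req = urllib.request.Request(
        f"{BASE}/process-groups/root",
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return extract_root_group_id(resp.read().decode())
```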
On Mon, Feb 26, 2018 at 5:20 PM, Daniel Hernandez
wrote:
> Thanks Matt,
>
> I now understand the problem. To exhaust all my possibilities, may I
> ask: is there a way usi
Doug,
I think the only solution is what you proposed about fixing the
nifi-gcp-bundle...
Basically, if a NAR needs a different version of a dependency that is
already declared in the root pom's dependencyManagement, then the
bundle's pom needs its own dependencyManagement to force it back to the
s
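As a sketch of what that could look like in the bundle's pom (the coordinates and version below are placeholders, not the actual GCP dependency):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin the version this bundle needs, overriding the root pom -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-library</artifactId>
      <version>2.0.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```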
NiFi is not a single WAR that can be deployed somewhere. You should
think of it like other software that you install on your system, for
example a relational database. You wouldn't expect to deploy your
Postgres DB to your WildFly server.
On Wed, Mar 7, 2018 at 9:00 AM, Mike Thomsen wrote:
> Most
+1
On Fri, Mar 9, 2018 at 3:11 PM, Joe Witt wrote:
> +1
>
> On Mar 9, 2018 3:10 PM, "Scott Aslan" wrote:
>
> All,
>
> Following a solid discussion for the past couple of weeks [1] regarding the
> establishment of Fluid Design System as a sub-project of Apache NiFi, I'd
> like to
> call a formal
Toivo,
The password property on DBCPConnectionPool is a "sensitive" property
which means it is already encrypted in the flow.xml.gz using
nifi.sensitive.props.key.
Are you saying you are trying to externalize the value outside the
flow and keep it encrypted somewhere else?
-Bryan
On Mon, Mar 1
You may want to consider moving from templates to NiFi Registry for
your deployment approach. The idea of this approach is that your flow
will get saved to registry with no sensitive values, when you import
the flow to the next environment you enter the sensitive values there
the first time and the
Toivo,
I think there needs to be some improvements around variables &
sensitive property handling, but it is a challenging situation.
Some things you could investigate with the current capabilities..
- With the registry scenario, you could define a DBCPConnectionPool at
the root process group of
ave to depend on the nifi-aws-nar.
>
>
> On March 2, 2018 at 13:40:21, Bryan Bende (bbe...@gmail.com) wrote:
>
> Doug,
>
> I think the only solution is what you proposed about fixing the
> nifi-gcp-bundle...
>
> Basically, if a NAR needs a different version of a dependen
What would be the main use case for wanting all the flattened values
in attributes?
If the reason was to keep the original content, we could probably just
add an "original" relationship.
Also, I think FlattenJson supports flattening a flow file where the
root is an array of JSON documents (althou
script) but then
> would be nice to use this standard processor and instead of writing this to a
> flow content write it to attributes.
>
>
> Jorge Machado
>
>
>
>
>
>> On 20 Mar 2018, at 14:47, Bryan Bende wrote:
>>
>> What would be the main us
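To make the flattening concrete, here is a toy Python version of the dot-notation flattening that FlattenJson performs on content (the naming and edge-case handling are illustrative, not NiFi's actual implementation):

```python
def flatten(obj, prefix=""):
    # Recursively flatten nested dicts/lists into dot-notation keys,
    # e.g. {"a": {"b": 1}} -> {"a.b": 1}
    items = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            items.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            items.update(flatten(value, f"{prefix}{index}."))
    else:
        items[prefix[:-1]] = obj
    return items
```

Writing each flattened key/value pair to a flow file attribute, rather than back to the content, would be the behavior being asked for here.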
t;>> def attrs = [:] as Map
>>> session.read(flowFile,
>>>   { inputStream ->
>>>     def text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
>>>     def obj = slurper.parseText(text)
>>>     obj.each { k, v ->
>>>       i
luateJsonPath the problem with
>>> that is that it is hard to build something generic if we need to specify each
>>> property by its name; that's why this idea.
>>>
>>> Should I make a PR for this or is this to business specific ?
>>>
>>>
>>> Jo
7, Jorge Machado (jom...@me.com) wrote:
>>>>
>>>> So that is what we actually are doing with EvaluateJsonPath; the problem
>>>> with that is that it is hard to build something generic if we need to
>>>> specify each property by its name, that's why t
+1 (binding)
- Ran through release helper and everything checked out
- Verified some test flows with the restricted components + keytab CS
On Fri, Mar 23, 2018 at 2:42 PM, Mark Payne wrote:
> +1 (binding)
>
> Was able to verify hashes, build with contrib-check, and start up
> application. Per
-- Mike
>>
>>
>> On Fri, Mar 23, 2018 at 4:02 PM, Scott Aslan
>> wrote:
>>
>> > +1 (binding)
>> >
>> > - Ran through release helper
>> > - Setup secure NiFi and verified a test flow
>> >
>> > On Fri, Mar 23, 2018 a
t;>>>
>>>> I confirm the issue mentioned by Bryan. That's actually what Matt and I
>>>> experienced when trying the PR about the S2S Metrics Reporting task [1]. I
>>>> thought it was due to my change but it appears it's not the case.
>>
Hello,
What version of NiFi are you using?
This should be fixed in 1.5.0:
https://issues.apache.org/jira/browse/NIFI-4639
Thanks,
Bryan
On Sun, Mar 25, 2018 at 6:45 PM, Milan Das wrote:
> Hello Nifi Users,
>
> Apparently, it seems like PublishKafkaRecord_0_10 doesn't embed schema even
> if
Hello,
Passing LDAP credentials in plain-text over http would not be secure.
You'll want to have the SSL connection pass through the load balancer
all the way to the NiFi nodes.
There are several articles on setting up a secure NiFi cluster:
https://pierrevillard.com/2016/11/29/apache-nifi-1-1-
Can you share the code for your AbstractRedisProcessor?
On Mon, Mar 26, 2018 at 9:52 AM, Mike Thomsen wrote:
> Over the weekend I started playing around with a new processor called
> PutRedisHash based on a request from the user list. I set up a really
> simple IT and hit a problem pretty quickl
I can't tell for sure, but the stacktrace looks like your
AbstractRedisProcessor is making a direct call to RedisUtils to create
a connection, rather than using the RedisConnectionPool to obtain a
connection.
On Mon, Mar 26, 2018 at 11:38 AM, Bryan Bende wrote:
> Can you share the code
You might be able to get the nifi-kafka-0-10-nar from 1.5.0 and run it in 1.4.0.
On Mon, Mar 26, 2018 at 11:28 AM, Milan Das wrote:
> Hi Bryan,
> We are using NIFI 1.4.0. Can we backport this fix to NIFI 1.4?
>
> Thanks,
> Milan Das
>
> On 3/26/18, 11:26 AM, "Bryan Be
= redisConnectionPool.getConnection();
On Mon, Mar 26, 2018 at 11:58 AM, Mike Thomsen wrote:
> Yeah, it does. Copied withConnection from the state provider. Looks like
> copy pasta may have struck again...
>
> On Mon, Mar 26, 2018 at 11:44 AM, Bryan Bende wrote:
>
>> I can
Brian,
Is your custom processor using the MongoDBClientService provided by
NiFi's standard services API? Or does your NAR have a parent of
nifi-standard-services-api-nar to use other services?
Looking at where the Mongo JARs are from a build of master...
find work/nar/ -name "*mongo-java*.jar"
wor
't see a direct dependency
>>>>> on nifi-standard-services, and if the nifi-mongodb-client-service-api
>>>>> needs the nifi-standard-services-api as a dependency, it can make it a
>>>>> NAR dependency.
>>>>>
>>>>> W
on are
packaged separately, which is slightly different than what I was suggesting for
the Mongo case.
> On Mar 26, 2018, at 10:06 PM, Bryan Bende wrote:
>
> I’m a +1 for moving the Mongo stuff out of standard services.
>
> Controller service APIs should always be broken out into thei
I'm not sure that would solve the problem because you'd still be
limited to one directory. What most people are asking for is the
ability to use a dynamic directory from an incoming flow file.
I think we might be trying to fit two different use-cases into one
processor which might not make sense.
>
> So...
>
> nifi-foo-service-impl-nar + nifi-foo-processors-nar ---depend on--->
> nifi-foo-service-api-nar
> ---depends on---> nifi-standard-services-api-nar
>
> That's what I should look for in the ES NARs?
>
> On Mon, Mar 26, 2018 at 10:31 PM, Bryan Bende wrote
+1 (binding)
- Verified everything in the release helper
- Verified the fix for the fingerprinting issue
- Successfully ran some test flows with record processors, HDFS
processors, and the keytab CS
On Tue, Mar 27, 2018 at 10:54 AM, Jeff Zemerick wrote:
> +1 non-binding
>
> Built successfully a
Since it sounds like each query is a JSON document, can you create a
JSON array of all your queries and put that as the Custom Text of a
GenerateFlowFile processor?
Then follow it with SplitJson to split the array into a query per flow
file, assuming that is what you want.
Could also use ExecuteS
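The suggested flow can be mimicked in plain Python to show what GenerateFlowFile and SplitJson would each produce (the query documents below are made up):

```python
import json

# Hypothetical query documents
queries = [
    {"query": {"match_all": {}}},
    {"query": {"term": {"user": "kim"}}},
]

# What you would paste into GenerateFlowFile's Custom Text property
custom_text = json.dumps(queries)

# What SplitJson with a JsonPath of $.* would emit: one flow file per element
split_flow_files = [json.dumps(q) for q in json.loads(custom_text)]
```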
> > >> alopresto.apa...@gmail.com
> > >> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69
> > >>
> > >>> On Mar 27, 2018, at 8:33 AM, Andrew Grande
best to output the results into a structured format, such
> as AVRO? Or, maybe it would just be best to output one flowfile per remote
> file found, and include updated time and fully qualified path as attributes?
>
> Scott
>
>
> On 03/29/2018 04:32 AM, Bryan Bende wrote:
>
+1 (binding) Release this package as nifi-1.6.0
- Ran through release helper
- Ran sample flows and tested granular restricted components with
versioned flows
On Wed, Apr 4, 2018 at 4:40 PM, Matt Gilman wrote:
> +1 (binding) Release this package as nifi-1.6.0
>
> - Ran through release help
> - V
Add a RecordField to the RecordSchema where the DataType is a
RecordDataType... a RecordDataType then has a child schema.
May be helpful to look at the code that converts between RecordSchema
and Avro schemas:
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-extension-utils/nifi-r
Hello,
DetectDuplicate uses a DistributedMapCacheClientService which would be
connecting to a DistributedMapCacheServer on one of your nodes.
So all nodes should be connecting to the same cache server which is
where the information about previously seen data is stored.
-Bryan
On Tue, Apr 10, 20
The example processor you showed won’t work because you are calling
getLogger() inline as part of the variable declaration.
The logger is given to the processor in an init method which hasn’t been
called yet at that point, so that is assigning null to the variable.
Generally you should just call
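A toy Python analogue of the lifecycle issue (NiFi's real classes are Java; the names here are only illustrative):

```python
class Processor:
    def __init__(self):
        self._logger = None
        # Field initialized inline at construction time: the framework has
        # not called init() yet, so this permanently captures None
        self.captured_too_early = self.get_logger()

    def get_logger(self):
        return self._logger

    def init(self, logger):
        # The framework injects the logger here, after construction
        self._logger = logger
```

Calling get_logger() inside a method that runs after init has been called returns the real logger, while the field captured at construction time stays None.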
Hello,
I have no idea how AWS works, but most likely what is happening is the
Hadoop client in NiFi asks the name node to write a file, and the name
node then responds with the data nodes to write to, but it is
responding with the private IPs/hostnames of the data nodes which you
can't reach from
I'm not sure if this helps, but you mentioned not being able to use
the variable.registry because it requires a restart.
That is true for the file-based variable registry, however it is not
true for the UI-based variable registry [1].
Keep in mind that neither of the variable registries are reall
Max,
Thanks for reporting this. I've only glanced at the code quickly, but
I see what you are saying about the boolean never getting set to false
when it hits the context.yield().
I would recommend creating a JIRA with all of this info, and then you
could submit your proposed fix as a pull reques
Jorge,
Currently variables are not meant to store sensitive information, the
reason has to do with how users access variables...
The way a user accesses a variable is via expression language, and
since EL is just free-form text entered into a property descriptor, it
is impossible to restrict whic
Hello,
Others who have worked on the DB related services and processors can
correct me if I'm wrong here, but...
In general the idea of a connection pool is that creating connections
is somewhat expensive, and for a high-volume of operations you don't
want to create a connection for each DB opera
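A minimal pool sketch (illustrative only, not the DBCP implementation) shows why this matters: the expensive connect step happens once per pooled connection, not once per operation:

```python
import queue

created = 0

def connect():
    # Stand-in for an expensive operation like a DB handshake
    global created
    created += 1
    return f"conn-{created}"

class ConnectionPool:
    def __init__(self, size, factory):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(2, connect)
for _ in range(100):  # 100 operations, but only 2 connections ever created
    conn = pool.acquire()
    pool.release(conn)
```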
The issue here is more about the service API and not the implementations.
The current API has no way to pass information between the processor and
service.
The options boil down to...
- Make a new API, but then you need all new processors that use the new API
- Modify the current API to have a
e relevant
> one.
>
> Thanks,
> Sivaprasanna
>
> On Wed, 25 Apr 2018 at 6:07 PM, Bryan Bende wrote:
>
>> The issue here is more about the service API and not the implementations.
>>
>> The current API has no way to pass information between the processor and
>
rt to provide a new service implementation that
used the attribute map to somehow manage multiple connection pools, or
create connections on the fly, or whatever the desired behavior is.
On Wed, Apr 25, 2018 at 9:34 AM, Bryan Bende wrote:
> To Otto's question...
>
> For simplicity s
t to
> the router which returns back the connection pool.
>
> On Wed, Apr 25, 2018 at 9:48 AM, Bryan Bende wrote:
>
>> Here is a proposal for how to modify the existing API to support both
>> scenarios:
>>
>> https://issues.apache.org/jira/browse/NIFI-5121
>>
be using
>'UpdateAttribute' to do add that attribute to flowfile?
>2. If we are to use 'UpdateAttribute' to set the value for 'db.id', we
>need to know before hand, right?
>
> -
>
> Sivaprasanna
>
> On Wed, Apr 25, 2018 at 8:38
There is definitely room for improvement here.
Keep in mind that often the sensitive information is specific to a
given environment. For example you build a flow in dev with your
db.password. You don't actually want your dev db password to be
propagated to the next environment, but you do want to
ts to configure nifi without knowing the “value” of
> the secure db password ( for example ), but that doesn’t mean they
> don’t have there rights to reference it.
>
>
>
> On April 25, 2018 at 14:15:16, Bryan Bende (bbe...@gmail.com) wrote:
>
> There is definitely room for impro
hen checked, will consider that as an EL and evaluate from the variable
> registry and when not checked, assume that the entered password is plain
> password and no evaluation needs to happen.
>
> -
>
> Sivaprasanna
>
> On Thu, Apr 26, 2018 at 12:19 AM, Bryan Bende wrote:
In case it helps, Matt B was nice enough to quickly put up a PR for
the API change discussed yesterday:
https://github.com/apache/nifi/pull/2658
Once this is tested and merged, it would make it easier to
implement Charlie's approach without needing a new interface.
On Thu, Apr 26, 2018 a
Anthony,
Each binary artifact that NiFi publishes must account for the binary
artifacts that it includes.
Each NAR is published to Maven central on a release, so each NAR
potentially needs a LICENSE/NOTICE.
The overall NiFi assembly includes all NARs, so the LICENSE/NOTICE in
nifi-assembly is th
These modules were moved when you pulled from master so these are the
leftover traces of things that aren't under version control, they are
all things in target and .iml files so you just have to blow them
away.
On Mon, Apr 30, 2018 at 9:55 AM, Sivaprasanna wrote:
> This is the maven unable to id
Hello,
I think to store off the flow files you would also need to store the
session it came from, but I would probably question whether this is
really the best idea...
What type of data are you expecting to come into your processor?
1) If you can leverage the record reader concept in NiFi this w
Hello,
A ‘QuerySplunk’ processor that allowed incoming flow files probably makes sense.
If you want to work on this feel free to create a JIRA. I don’t see any
existing tickets for Splunk related processors.
Thanks,
Bryan
> On May 2, 2018, at 3:56 AM, Brajendra Mishra
> wrote:
>
>
> Hi Te
I don't know the history of this particular processor, but I think the
purpose of the session.get() with batches is similar to the concept of
@SupportsBatching. Basically both of them should have better
performance because you are handling multiple flow files in a single
session. The supports batch
Hello,
When a node joins the cluster, if the node has an empty flow.xml, no
users, and no authorizations, then the node will inherit all of those
from the cluster, but if any of those are populated then it won't be
able to join.
One common issue that prevents this from working, is if you have an
e the other process groups from the canvas that I do not
> have permissions to?
>
> Thanks
> Anil
>
>
>
> On Mon, May 14, 2018 at 3:10 PM, Anil Rai wrote:
>
>> Thanks for the detailed explanation Bryan.
>>
>> Cheers
>> Anil
>>
>>
>> O
+1 (binding)
On Fri, May 18, 2018 at 10:18 AM, James Wing wrote:
> +1 (binding)
>
>> On May 18, 2018, at 6:30 AM, Rob Moran wrote:
>>
>> Following positive response discussing the name change of nifi-fds [1], I'd
>> like to call a vote to officially change the name of the Apache NiFi Fluid
>> De
The max connection warning is simply based on the property in the
processor called "Max Number of TCP Connections" which defaults to 2.
So in the default case, if a third connection is made while two
connections are open, then the third connection is rejected and you
see this warning.
If you are s
Jeff,
What you described sounds correct. Can you share which dependency
versions now need to be specified? We could at least update the
processor bundle archetype to have these versions specified to make it
easier for new bundles, and maybe add something to the migration notes
for existing bundles
Hello,
If I'm understanding the situation correctly, you want ordering within
a key, but not necessarily total ordering across all your data?
I'm making this assumption since you said you have 9 partitions on
your Kafka topic and you are partitioning by key, so the data for each
key is in order p
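That assumption can be illustrated with a toy partitioner (the hash below is a stand-in, not Kafka's actual murmur2 hashing):

```python
NUM_PARTITIONS = 9

def partition_for(key):
    # Toy stand-in for Kafka's key hashing; same key -> same partition
    return sum(key.encode()) % NUM_PARTITIONS

messages = [("user-1", "a"), ("user-2", "x"), ("user-1", "b"), ("user-1", "c")]

partitions = {}
for key, value in messages:
    partitions.setdefault(partition_for(key), []).append(value)
# All of user-1's messages land on one partition, in publish order
```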
I was looking at EnforceOrder again and I'm not sure that will
actually help here since I don't think it works across a cluster, but
maybe others know more.
I think you can only ever have 1 concurrent task for your PublishKafka
processor. Even if you run everything on primary node, if you have 2
c
Hello,
Processors can't leverage NiFi's internal authentication and
authorization mechanisms.
For HandleHttpRequest it supports two-way TLS for client authentication.
-Bryan
On Mon, Jun 4, 2018 at 2:11 PM, Anil Rai wrote:
> Team, for invoking the nifi API's on a secured cluster, we have to ge
Congrats! and thank you for your contributions to the NiFi community.
On Tue, Jun 5, 2018 at 10:16 AM, Kevin Doran wrote:
> Congrats, Sivaprasanna!
>
> On 6/5/18, 10:09, "Tony Kurc" wrote:
>
> On behalf of the Apache NiFI PMC, I am very pleased to announce that
> Sivaprasanna has accept
Paresh,
Mark can correct me if I'm wrong, but I believe the information
fetched in step 1 is persisted in-memory on each node where the RPG is
running. This information is then periodically refreshed in a
background thread.
When data is flowing through it is distributing the data to the nodes
in
Mark,
The resources end-point returns all of the possible resource identifiers
that can be used to create policies. If you look in authorizations.xml
at the policies and see things like "/flow" or "/controller", those are
examples of resources.
I'm not sure that NiFi itself uses it for anything, but i
dded to all policies
> available through the UI, and /resources does not appear in
> authorizations.xml.
>
> Thanks,
> -Mark
>
> On Thu, Jun 7, 2018 at 8:37 AM, Bryan Bende wrote:
>
>> Mark,
>>
>> The resources end-point returns all of possible r
Using the versioned flow logic seems like a good idea.
Would the authorizer fingerprints still be checked as part of joining
the cluster?
Currently that is appended to the overall fingerprint to ensure each
node has the same users/policies, or at least same config (i.e. LDAP).
Would be nice if a
Peter,
There really shouldn't be any non-source processors scheduled for
primary node only. We may even want to consider preventing that option
when the processor has an incoming connection to avoid creating any
confusion.
As long as you set source processors to primary node only then
everything
ack to a single node whenever necessary
>> which is the case in certain scenarios like fork/distribute/process/join/send
>> and things like distributed receipt then join for merging (like
>> defragmenting data which has been split). To join them together we need
>> affini
Let's make sure this is mentioned in the migration notes wiki for 1.7.0
since it will create a ghost component for existing flows that have
the ES service with the original name.
On Mon, Jun 11, 2018 at 9:16 AM, Mike Thomsen wrote:
> Should mention that I did not check the L&N for the ES 6.X versi
Hello,
There is a counter that is incremented when a connection is opened, and
decremented when it is closed. When this counter exceeds the number
configured in the processor, then it rejects the connection with the
message you are seeing. Since you have the max connections set to 1000,
there must
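The described counter behaves roughly like this sketch (names are illustrative, not the processor's actual code):

```python
class ConnectionLimiter:
    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.open_count = 0

    def try_open(self):
        # Reject once the configured maximum is already open
        if self.open_count >= self.max_connections:
            return False
        self.open_count += 1
        return True

    def close(self):
        self.open_count -= 1
```

Seeing the warning with the maximum set to 1000 suggests that many connections really were open at once, which may point to connections not being closed.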
uggest upgrade the version may solve this issue?
>
>
>
> Thanks and Regards,
>
> *Rajesh Biswas* | +91 9886433461 | www.bridgera.com
>
>
>
> *From:* Bryan Bende [mailto:bbe...@gmail.com]
> *Sent:* Friday, June 15, 2018 12:50 AM
> *To:* us...@nifi.apache.org
I can't help you with the Docker part, but there shouldn't be any
major issues setting up secure NiFi and registry.
Andrew Lim put together some great videos that are linked to from the
registry page of the website...
Setting Up a Secure Apache NiFi Registry
https://youtu.be/qD03ao3R-a4
Setting
+1 (binding) Release this package as nifi-registry-0.2.0
- Ran through everything in the release helper and it looked good, aside
from a few minor things Andy mentioned
- Tested upgrading an existing registry to 0.2.0 to test database migration
- Tested basic event hook logging
- Ran secure NiFi with secure regis
Hello,
In general you probably want to take a look at the "record" processors
which will offer a more efficient way of performing this task without
needing to split to 1 message per flow file.
The flow with the record processors would probably be GetFile ->
ConvertRecord (using CsvReader and Avro
Others may know a better way to do this, but the only way I know to
truly verify the commit id is something like the following:
git clone https://git-wip-us.apache.org/repos/asf/nifi.git
git -C nifi checkout
diff --brief -r
For verifying the RC was branched off the correct git commit id, you
l
and I don't
> think DEPENDENCIES matters either unless I'm missing something)
>
> On Wed, Jun 20, 2018 at 10:11 AM Bryan Bende wrote:
>
>> Others may know a better way to do this, but the only way I know to
>> truly verify the commit id is something like the followin
Mark,
The database directory and flow storage directory are where all the
data are. By default these are created in the root of NiFi Registry,
so depending how you want to set it up you could move those
directories to the new install, or you could set them up to be
external locations so you don't
f using a database location that is external to the
> installation directory, is nifi.registry.db.url the only property that
> needs to be modified?
>
>
> On Wed, Jun 20, 2018 at 11:18 AM Bryan Bende wrote:
>
>> Mark,
>>
>> The database directory and flow storage directo
Hello,
Since the port is configurable and can easily be changed I don't think
we would plan to change it.
There are also lots of people who are not running NiFi Registry on the
same server as Spark History Server, so I don't think changing it just
for that makes sense.
Thanks,
Bryan
On Wed, J
Hello,
Since processors are extensions to the NiFi framework, they don't have
a way to utilize the framework's authentication and authorization.
The only option for HandleHttpRequest is to use 2-way TLS by providing
an SSL Context Service with Client Auth required.
Thanks,
Bryan
On Thu, Jun 2
+1 (binding)
Verified everything in release helper and ran some test flows, thanks
for RM'ing!
On Thu, Jun 21, 2018 at 9:11 AM, Mark Payne wrote:
> +1 (binding).
>
> Thanks for volunteering to handle the RM duties this time around, Andy!
>
> Was able to verify the checksums, build with contrib-c
Hello,
Processors are basically started, stopped, invalid, or disabled.
There aren't really states like failed or completed because a
processor doesn't complete, it just runs until it is told to stop. The
closest thing to a failure is when an error occurs processing a
particular flow file which is
Hello,
I believe this is expected behavior because the special privileges give
read and write to all buckets and take precedence over the bucket
privileges.
Generally not many users/groups would have those special privileges. The
common use case is to give a NiFi server read to all buckets so it
Java assumes there is one krb5.conf file loaded by the JVM. It looks
for the system property java.security.krb5.conf or falls back to
looking in well-known locations, but still only expects one [1].
NiFi requires you to set the location in nifi.properties and uses that
value to set the system prop
Hello,
This is actually expected behavior...
The migration is only set up to migrate the database from 0.1.0 which was
named nifi-registry.mv.db.
If you have an H2 DB named nifi-registry-primary.mv.db this is the name of
the new H2 DB in 0.2.0, so it’s not looking for this because the idea was
yo
> Jagrut
>
> On Sat, Jun 23, 2018 at 2:52 PM, Bryan Bende wrote:
>
> > Hello,
> >
> > This is actually expected behavior...
> >
> > The migration is only setup to migrate the database from 0.1.0 which was
> > named nifi-registry.mv.db.
> >
>
Jagrut,
I believe you should have access to edit the NiFi Registry wiki now.
Let us know if it doesn't work.
Thanks,
Bryan
On Sat, Jun 23, 2018 at 8:28 PM, Jagrut Sharma wrote:
> Thanks! My confluence username is jagrutsharma.
>
> --
> Jagrut
>
> On Sat, Jun 23, 2018
The authorizers.xml supports many different options, such user group
providers for file-based, ldap, or composite, and policy provider for
file-based and ranger.
The concept of inheritance only really applies to the file-based cases
because in the other scenarios like LDAP and Ranger, the users a
I don't think there are any stability issues with the record API, it
is definitely recommended to use the record approach where it makes
sense.
That comment was probably put there on the first release and never
removed, and now it has been 4-5 releases later.
As a general comment to APIs, the rec
Hello,
In the 0.x versions of NiFi there were specific roles such as DFM, but
in 1.x there are no specific roles and everything is based on
fine-grained access policies.
The initial admin is given the ability to get into the UI and to
create policies, which then lets them grant themselves whateve
Looks like all of the links go to the same test failure for
TestCLICompleter, which seems to be failing because it can't find a
file in src/test/resources [1].
The part of the CLI that is doing the file completion is from the JLine
library, so we don't really have control over making that work on
W
This sounds like a good idea to me.
Just to clarify how this would work, in the file-based policy provider
we'd have something like:
admin
cluster-nodes
During start up the "cluster-nodes" group gets granted permission to /proxy.
Then a separate piece of work would be to implement a
UserGroupPr
Adam,
The ClassSize class comes from hbase-common [1] so I'm not sure how
that would relate to the Phoenix client JAR. What version of HBase
was this against?
The only case I know of that needs the Phoenix client jar is when
Phoenix has been installed which then modifies the HBase config files
t
o Phoenix and I do not understand why
> adding the client would help resolve the class.
>
> Would you advise reverting to Java 8 until Java 10 is fully supported?
>
> Thanks,
>
> Adam
>
> On 8/23/18, 11:27 AM, "Bryan Bende" wrote:
>
> Adam,
>
>
error then I'd be stumped.
-Bryan
On Thu, Aug 23, 2018 at 3:16 PM, Martini, Adam wrote:
> Bryan,
>
> Yes, an HBase client upgrade makes sense for the Java 10 upgrade path.
> However, the NoClassDefFoundError is more mysterious and does concern me.
>
> Thanks,
> Ada
Jon,
The RPG URL is treated similar to sensitive properties or parent
controller services, meaning that after deploying your flow you can
issue an update to the RPG to set the URL to the given environment's value
and that change will not be considered a change as far as version
control is concerned and will be ret
n
> option...both seem to have some issues and inconveniences.
>
> Thanks!
> Jon
>
> On Thu, Sep 6, 2018 at 4:01 PM, Bryan Bende wrote:
>
> Jon,
>
> The RPG URL is treated similar to sensitive properties or parent
> controller services, meaning that after deploying