Hi,
I am using Ignite version 2.3.0 and I want to know whether it is possible
to:
(1) store all cache data on disk (no data in memory at all)
(2) store exclusive sets of data in memory and on disk, i.e. data stored in
memory should be available in memory only and data stored on disk should be
Jeff,
All node attributes, including environment variables and the classpath, are
transmitted in discovery messages.
So try to create an environment where your app won't have this irrelevant
information.
To check what is included in the node attributes, you can use the following
code:
ClusterNode
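The snippet is cut off above; a minimal sketch of what it might look like,
using the public Ignite cluster API (class and variable names are my own):

import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;

public class DumpNodeAttributes {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ClusterNode locNode = ignite.cluster().localNode();

            // Node attributes include environment variables, system properties,
            // the classpath and other metadata exchanged via discovery.
            for (Map.Entry<String, Object> attr : locNode.attributes().entrySet())
                System.out.println(attr.getKey() + " = " + attr.getValue());
        }
    }
}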
(1) You need to read it into RAM to access the data. I'm not sure what you
are trying to do, but you can make the cached data set kept in RAM very small
(with an expiry policy; see the sketch below), but if you just keep it on disk
you might as well just use a database and not use the cache at all.
(2) Not sure what y
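For (1), a rough sketch of what I mean, assuming Ignite 2.3 native
persistence (the region name and size are just examples):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class MostlyOnDiskExample {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("mostly-on-disk")
            .setMaxSize(64L * 1024 * 1024)   // keep the RAM footprint small
            .setPersistenceEnabled(true);    // everything is also written to disk

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().active(true); // persistence requires explicit activation
        }
    }
}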
Hi Denis,
As per your comment, I can see pages*pageSize rising as entries are put into
the cache - but this metric doesn't come down e.g. when new nodes are added
to the cluster. I assume that the pages remain allocated but with a lower
fill factor.
So pages*pageSize gives a misleadingly pessimis
Collin,
To be able to see data region metrics, you should enable them either in the
configuration or via an MXBean.
You can find a note about it here:
https://apacheignite.readme.io/v2.3/docs/memory-metrics
Note that memory metrics collection must be enabled per data region.
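For example, something along these lines (a sketch; the region name is just
an example):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionMetricsExample {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("my-region")
            .setMetricsEnabled(true); // metrics are collected per data region

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            DataRegionMetrics m = ignite.dataRegionMetrics("my-region");

            System.out.println("allocated pages = " + m.getTotalAllocatedPages());
            System.out.println("fill factor     = " + m.getPagesFillFactor());
        }
    }
}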
Denis
Fri, Jan 5, 2018
Hi Kavin!
Inserts are supported, but transactions are not yet (they are expected to
land this year).
Make sure that explicit transactions are not started by Talend.
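For instance, a plain insert through the JDBC thin driver would look roughly
like this (the table and host are just examples):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ThinDriverInsert {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {
            // Keep the connection in autocommit mode; explicit transactions
            // (setAutoCommit(false) / commit()) are not supported yet.
            stmt.executeUpdate("INSERT INTO person (id, name) VALUES (1, 'John')");
        }
    }
}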
Denis
Fri, Jan 5, 2018 at 7:27, Kavin :
> Hi,
>
> I am using Talend to insert data from a Netezza database into an Ignite cache. The
Hello,
Yes, it is already supported. Please try the following configuration:
I can confirm that I have metrics enabled for my region - I am able to read
allocatedPages; it's just the fillFactor that always seems to return zero.
Colin.
Hello, I am trying to create a table with a TEXT column. The TEXT data type
is not supported by the ODBC driver, but I can use types like VARCHAR or
LONGVARCHAR.
The problem is that my text is always truncated to 64 characters. Why? How
can I configure Apache Ignite / unixODBC to work with a normal TEXT type?
I tested
Hi Val,
I have looked at the Ignite code, and the way the iterator is implemented
confirms my observation.
In org.apache.ignite.internal.processors.query.h2.opt.GridLuceneIndex#query,
on line 285, you execute
docs = searcher.search(query, Integer.MAX_VALUE) instead of docs =
searcher.search(query,
Hi Rajarshi,
Yes, I confirm that the root cause of the issue is QueryParallelism.
Anyway, I would suggest using the sqlline tool [1] (which uses the Apache
Ignite JDBC Thin driver [2]) or Web Console [3] instead of the H2 Debug
Console.
The H2 Debug Console is a legacy tool and does not support all capabilities
pr
Please ignore my above comment; I am now able to retrieve the factor.
As such, is the following correct?
memoryUsed = pages * pageSize * pagesFillFactor
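In code I am reading it roughly like this (a sketch; `ignite` is my running
instance, the region name is just an example, and the page size is the 4 KB
default from my DataStorageConfiguration):

DataRegionMetrics m = ignite.dataRegionMetrics("default");

// Page size as configured in DataStorageConfiguration (4096 bytes by default).
long pageSize = 4096;

// Approximate number of bytes actually occupied by data in the region.
double memoryUsed = m.getTotalAllocatedPages() * pageSize * m.getPagesFillFactor();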
I have given this (pages * pageSize * pagesFillFactor) a go now, but it
doesn't seem to be returning the values I'm expecting. In particular, the
value can drop significantly even when data is being inserted into the
cache.
Am I using pagesFillFactor incorrectly?
Hello, guys.
Currently `getPreferredLocations` is implemented in
`IgniteRDD -> IgniteAbstractRDD`.
But the DataFrame implementation uses
`IgniteSQLDataFrameRDD -> IgniteSqlRDD -> IgniteAbstractRDD`,
where `->` denotes extension.
So, for now, `getPreferredLocations` is not implemented for an
IgniteDataFrame.
I will try to see how I can get Talend to do a plain insert instead of
starting a transaction.
Are updates supported?
I was trying to insert a record in an empty table. I got an error that
"Updates are not supported"
We are thinking about moving our main databases from MySQL Cluster /
Aerospike / Elasticsearch to Ignite, and only two things prevent us from
doing so:
- upgrading to new versions without downtime (rolling upgrades)
- creating backups ('snapshots')
I know we can get paid options for this (and yes
The suggested solution is to have ignite-log4j as your dependency, and I
believe java.util.logging is then only used as a last resort if no log4j.xml
is found at IGNITE_HOME.
It seems that IgniteJDBCDriver always calls JavaLogger by default, which
looks at the java.util.logging.properties file.
privat
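On the node side, one way to avoid the lookup altogether is to set the logger
explicitly in the configuration, roughly like this (a sketch; the path to the
log4j config file is a placeholder, and the ignite-log4j module must be on
the classpath):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.logger.log4j.Log4JLogger;

public class ExplicitLoggerExample {
    public static void main(String[] args) throws Exception {
        IgniteConfiguration cfg = new IgniteConfiguration()
            // Point Ignite at a concrete log4j configuration file.
            .setGridLogger(new Log4JLogger("config/ignite-log4j.xml"));

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.log().info("Logging through log4j now.");
        }
    }
}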
Thanks Denis.
Can you please tell me how try-with-resources works in Ignite?
On Thu, Jan 4, 2018 at 8:57 PM, Denis Mekhanikov
wrote:
> Hi Rajarshi!
>
> Well, it must be that you are actually closing the cache.
> Check that you didn't put the cache creation into a try-with-resources.
> When *Ign
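To illustrate the point about closing the cache: IgniteCache implements
AutoCloseable, so declaring it in a try-with-resources closes the cache proxy
when the block exits (a sketch; the cache name is just an example):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class TryWithResourcesExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Usual pattern: obtain the cache normally and let it live for the
            // lifetime of the node.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
            cache.put(1, "one");

            // Pitfall: this cache proxy is closed as soon as the block exits,
            // so further operations on it will fail.
            try (IgniteCache<Integer, String> sameCache = ignite.getOrCreateCache("myCache")) {
                sameCache.put(2, "two");
            }
        }
    }
}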
Updates are supported. You're probably using an older version of Ignite.
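For example, an update like this works through the JDBC thin driver on a
recent version (the table and host are just examples):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ThinDriverUpdate {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {
            // UPDATE is plain DML; no explicit transaction is needed.
            stmt.executeUpdate("UPDATE person SET name = 'John' WHERE id = 1");
        }
    }
}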
-Val
Hi,
When using IGFS with a secondary file system, with write-behind configured
by using the DUAL_ASYNC IgfsMode, is there any way to force the flush of the
data from the Ignite caches into the secondary file system? A possible
scenario here might be a temporary cluster with Ignite installed, that uses
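For reference, the setup I am describing looks roughly like this (a sketch;
the secondary file system URI is a placeholder and the ignite-hadoop module
is assumed to be on the classpath):

import org.apache.ignite.configuration.FileSystemConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem;
import org.apache.ignite.igfs.IgfsMode;

public class IgfsDualAsyncConfig {
    public static IgniteConfiguration configuration() {
        FileSystemConfiguration fsCfg = new FileSystemConfiguration();

        fsCfg.setName("igfs");
        // Writes go to the Ignite caches first and are propagated to the
        // secondary file system asynchronously (write-behind).
        fsCfg.setDefaultMode(IgfsMode.DUAL_ASYNC);
        fsCfg.setSecondaryFileSystem(
            new IgniteHadoopIgfsSecondaryFileSystem("hdfs://namenode:9000"));

        return new IgniteConfiguration().setFileSystemConfigurations(fsCfg);
    }
}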
Hello Paulus,
It's true that Ignite lacks these capabilities at the moment, and there is no
movement on the community side, nor sponsors/committers who would like to
bring these features to the project.
Perhaps the situation will change in the future, so let's wait and see how
it goes
Hi Illya,
I have compiled the source for 2.3 after applying that patch, and now I can
use IGFS configured with values like and , and I have been able to run some
benchmarks without exceptions or errors.
Thanks a lot for your help.
Greetings,
Juan
On Mon, Dec 18, 2017 at 5:02 AM, Ilya Kasna