malte-f19 commented on issue #12639:
URL: https://github.com/apache/ignite/issues/12639#issuecomment-3771808455

   Thank you all for your input, much appreciated!
   
   The workaround you suggested, @ptupitsyn, is something we tried as well. But 
that introduced new issues when reading the data. We got exceptions like this:
   ```
   org.apache.ignite.sql.SqlException: Failed to acquire the intention table 
lock due to a conflict [locker=019ba41c-adae-0000-a5b0-32ae00000001, 
holder=019ba41c-ada0-0000-a5b0-32ad00000001, abandoned=false]
   ```
   or
   ```
   org.apache.ignite.sql.SqlException: Failed to acquire a lock due to a 
possible deadlock [locker=019bb69e-a1c1-0000-a5b0-32ae00000001, 
holder=019bb69e-a150-0000-a5b0-32ad00000001]
   ```
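
   The reads that hit these exceptions are plain single-statement SELECTs by job-id. A rough sketch of what we do, written against the Ignite 3 Java client (table, column, and address names are invented for the example):

   ```java
   import org.apache.ignite.client.IgniteClient;
   import org.apache.ignite.sql.ResultSet;
   import org.apache.ignite.sql.SqlRow;

   public class ReadJobData {
       public static void main(String[] args) throws Exception {
           try (IgniteClient client = IgniteClient.builder()
                   .addresses("ignite-node:10800") // hypothetical address
                   .build()) {

               String jobId = "job-123"; // hypothetical job-id received via REST

               // Implicit (single-statement) transaction; this read is where the
               // "Failed to acquire ... lock" exceptions show up for us while
               // another service is still writing rows for the same job-id.
               try (ResultSet<SqlRow> rows = client.sql().execute(null,
                       "SELECT PAYLOAD FROM JOB_INPUT_DATA WHERE JOB_ID = ?", jobId)) {
                   rows.forEachRemaining(row -> System.out.println(row.stringValue("PAYLOAD")));
               }
           }
       }
   }
   ```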
   
   Let me try to describe our use case in more detail. We have several services running in a Kubernetes cluster, executing calculation jobs. One service is responsible for orchestrating a job: it generates a unique job-id, triggers the other services as needed, and takes care of persistent data storage. The other services either collect data from external sources or run calculations on that data. The data is collected into a context that we used to pass between the services as JSON via REST calls. Since these contexts can get huge and not every service needs *all* of the data to do its work, we wanted to move the data into some kind of cache, and Ignite seems to be the perfect system for this use case. We split the context into several pieces by type (input data, calculated data, etc.) and then into even smaller chunks using additional data points. So we ended up with several tables that use the job-id plus one or two other columns as the primary key.
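
   To make that concrete, the tables all have roughly this shape; the names are invented for the example and the real tables have more columns, but this is the kind of primary key we use:

   ```java
   import org.apache.ignite.client.IgniteClient;

   public class CreateJobTables {
       public static void main(String[] args) throws Exception {
           try (IgniteClient client = IgniteClient.builder()
                   .addresses("ignite-node:10800") // hypothetical address
                   .build()) {

               // job-id plus one more column as a composite primary key.
               try (var ignored = client.sql().execute(null,
                       "CREATE TABLE IF NOT EXISTS JOB_INPUT_DATA ("
                       + "  JOB_ID   VARCHAR,"
                       + "  CHUNK_ID INT,"
                       + "  PAYLOAD  VARCHAR,"
                       + "  PRIMARY KEY (JOB_ID, CHUNK_ID))")) {
                   // DDL statement, nothing to read from the result set.
               }
           }
       }
   }
   ```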
   
   A typical workflow is as follows:
   - A job gets triggered, so the orchestrator service generates a job-id and sends it to the first data-collection service.
   - The collection service fetches the necessary data and stores it in the appropriate Ignite tables under that job-id (see the sketch after this list).
   - Once the data is stored, it sends a REST request with the job-id back to the orchestrator.
   - The orchestrator service reads the data, persists it as needed, and calls the next service (again via REST with the job-id).
   - These steps are repeated until the calculation is done.
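
   The write path in a collection service then looks roughly like this sketch (same invented names as above):

   ```java
   import org.apache.ignite.client.IgniteClient;

   public class StoreCollectedData {
       public static void main(String[] args) throws Exception {
           try (IgniteClient client = IgniteClient.builder()
                   .addresses("ignite-node:10800") // hypothetical address
                   .build()) {

               String jobId = "job-123"; // received from the orchestrator via REST

               // One row per collected chunk, keyed by the job-id; each statement
               // runs in its own implicit transaction here.
               for (int chunk = 0; chunk < 3; chunk++) {
                   client.sql().execute(null,
                           "INSERT INTO JOB_INPUT_DATA (JOB_ID, CHUNK_ID, PAYLOAD) VALUES (?, ?, ?)",
                           jobId, chunk, "collected-data-" + chunk).close();
               }

               // Once everything is stored, the service notifies the orchestrator
               // via REST with the job-id (not shown here).
           }
       }
   }
   ```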
   
   Since we really want to use Ignite as an in-memory cache and don't care about persisting the data, we're using `aimem` as the storage engine.
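
   For completeness, the zone setup looks roughly like the sketch below. `in_memory` is assumed here to be a storage profile declared with the `aimem` engine in the node configuration, and the exact zone/profile DDL may differ between Ignite 3 versions, so treat this as a sketch of intent rather than our exact setup:

   ```java
   import org.apache.ignite.client.IgniteClient;

   public class CreateInMemoryZone {
       public static void main(String[] args) throws Exception {
           try (IgniteClient client = IgniteClient.builder()
                   .addresses("ignite-node:10800") // hypothetical address
                   .build()) {

               // Assumes a storage profile named 'in_memory' backed by the aimem
               // engine; exact DDL may vary between Ignite 3 versions.
               try (var ignored = client.sql().execute(null,
                       "CREATE ZONE IF NOT EXISTS JOB_CACHE WITH STORAGE_PROFILES='in_memory'")) {
                   // The per-type tables are then assigned to this zone, e.g.
                   // CREATE TABLE ... ZONE JOB_CACHE
               }
           }
       }
   }
   ```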
   
   I hope this makes it clear what we want to achieve. If not, or if you need additional information, feel free to ask; I will provide what I can.
   

