Thank you all! 

> On 31 May 2022, at 5:13 PM, Sergio <lapostadiser...@gmail.com> wrote:
> 
> 
> However, if I were you I would avoid that... I would rather store a URL to 
> S3 or GFS in Cassandra
> 
> Best,
> 
> Sergio
> 
>> On Tue, May 31, 2022, 4:10 PM Sergio <lapostadiser...@gmail.com> wrote:
>> You have to split it yourself
>> Best,
>> Sergio
>> 
>>> On Tue, May 31, 2022, 3:56 PM Andria Trigeorgis <an.trigeo...@gmail.com> 
>>> wrote:
>>> Thank you for your prompt reply! 
>>> So, do I have to split the blob into chunks myself, or is there any 
>>> fragmentation mechanism in Cassandra? 
>>> 
>>> 
>>>> On 31 May 2022, at 4:44 PM, Dor Laor <d...@scylladb.com> wrote:
>>>> 
>>>>> On Tue, May 31, 2022 at 4:40 PM Andria Trigeorgi <an.trigeo...@gmail.com> 
>>>>> wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> I want to write large blobs in Cassandra. However, when I tried to write 
>>>>> a blob larger than 256MB, I got the message:
>>>>> "Error from server: code=2200 [Invalid query] message=\"Request is too 
>>>>> big: length 268435580 exceeds maximum allowed length 268435456.\"".
>>>>> 
>>>>> I tried changing the settings "max_value_size_in_mb" and 
>>>>> "native_transport_max_frame_size_in_mb" in the file 
>>>>> "/etc/cassandra/cassandra.yaml" to 512, but I got a 
>>>>> ConnectionRefusedError. What am I doing wrong?
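
[Editor's note: for reference, the two settings named above live in /etc/cassandra/cassandra.yaml and would look like this. Raising them is discouraged, as the reply explains; note also that the node must be restarted after editing the file, and a ConnectionRefusedError usually just means the node was down or failed to come back up, so check the Cassandra logs.]

```yaml
# /etc/cassandra/cassandra.yaml (excerpt; both values are in megabytes)
max_value_size_in_mb: 512
native_transport_max_frame_size_in_mb: 512
```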
>>>> 
>>>> You sent a large blob ;)
>>>> 
>>>> This limitation exists to protect you as a user. 
>>>> The DB can store such blobs, but they incur large and unexpected 
>>>> latency, not just for the query but also for under-the-hood 
>>>> operations like backup and repair. 
>>>> 
>>>> Best is either not to store such large blobs in Cassandra at all, or 
>>>> to chop them into smaller units, say 10MB pieces, and re-assemble 
>>>> them in the app. 
>>>>  
>>>>> 
>>>>> Thank you in advance,
>>>>> 
>>>>> Andria
>>> 
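
[Editor's note: a minimal sketch of the chunking approach suggested above. The table layout (blob_id, chunk_index, data) and the 10MB chunk size are illustrative assumptions, not an official Cassandra scheme.]

```python
# Sketch: split a large blob into <=10MB chunks and re-assemble in the app.
# Each chunk would be written as its own row, e.g. with a table like
#   CREATE TABLE blobs (blob_id text, chunk_index int, data blob,
#                       PRIMARY KEY (blob_id, chunk_index));
# so reading the blob back is a single-partition query ordered by chunk_index.

CHUNK_SIZE = 10 * 1024 * 1024  # 10MB, as suggested in the thread


def split_blob(payload: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split payload into chunks of at most chunk_size bytes, in order."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]


def reassemble(chunks: list[bytes]) -> bytes:
    """Concatenate chunks (ordered by chunk_index) back into the original blob."""
    return b"".join(chunks)
```

Each element of `split_blob(...)` would then be inserted as a separate row keyed by `(blob_id, chunk_index)`, keeping every write well under the request-size limit.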
