the extra overhead for threads come from ?
--
Regards
Srinivas Devaki
Junior (3rd yr) student at Indian School of Mines,(IIT Dhanbad)
Computer Science and Engineering Department
ph: +91 9491 383 249
telegram_id: @eightnoteight
--
collections.Counter is a little bit slower when
compared to defaultdict for this kind of purpose.
--
timeit list(myngrams(range(1000), n=100))
1000 loops, best of 3: 1.46 ms per loop
In [12]:
---
On Thu, Nov 10, 2016 at 12:43 PM, srinivas devaki wrote:
> complexity wise it's O(N), but space complexity is O(N**2) to execute
> this function,
I'm sorry, that is a mistake.
I just skimmed through itertoolsmodule.c, and it seems like the space
complexity is just O(N); there is no extra advantage over that, as
with n=1 tee just returns a wrapper around the iterable.
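
For reference, a minimal sketch of what an n-gram generator built on
itertools.tee could look like; the actual myngrams() being timed above is
not shown in these excerpts, so the name and signature here are assumptions:

from itertools import tee

def myngrams(iterable, n):
    # make n independent copies and advance the i-th copy by i elements,
    # then zip them back together to produce the sliding windows
    iterators = tee(iterable, n)
    for shift, it in enumerate(iterators):
        for _ in range(shift):
            next(it, None)
    return zip(*iterators)

print(list(myngrams(range(6), n=3)))
# [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]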
--
ahh, this is the beginning of a conspiracy to waste my time.
PS: just for laughs, not to offend anyone.
--
On Mar 30
https://stackoverflow.com/questions/3407505/writing-binary-data-to-middle-of-a-sparse-file
but it only works if you are constructing the file's data from scratch,
and aria2c can resume a download too, i.e. not from scratch.
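
For illustration, a minimal sketch of the seek-and-write idea from that
Stack Overflow answer (the file name and sizes here are made up):

with open('download.part', 'wb') as f:
    f.truncate(1 << 30)            # reserve 1 GiB; most filesystems keep this sparse

with open('download.part', 'r+b') as f:
    f.seek(100 * 1024 * 1024)      # jump to the 100 MiB offset
    f.write(b'a chunk received out of order')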
--
I'm so sorry, forgot to lock my phone.
On May 9, 2016 9:01 AM, "srinivas devaki" wrote:
> f be gfdnbh be b GB GB BH GB vbjfhjb GB bffbbubbv GB hbu hbu
> fjbjfbbbufhbvh VB have fqbgvfb NB bb GB GB GB GB bbu GB vu GB vu GB GB
> b GB fbufjnb BH GB GB bvvfbubfff
On May 9, 2016 5:31 AM, "Tim Chase" wrote:
>
> then that's a bad code-smell (you get quadratic behavior as the
> strings are constantly resized), usually better replaced with
>
I just want to point out that in Python, s += str in a loop does not give
quadratic behavior. I don't know why, but it runs fast.
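
A rough way to check this yourself (a hedged sketch; timings vary by build,
and CPython's in-place resize of a uniquely referenced str is an
implementation detail, not a language guarantee):

import timeit

def concat(n):
    s = ''
    for _ in range(n):
        s += 'x'
    return s

for n in (10000, 100000, 1000000):
    print(n, round(timeit.timeit(lambda: concat(n), number=1), 4))
# the times grow roughly linearly with n rather than quadratically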
On Dec 9, 2015 4:45 PM, "Steven D'Aprano" wrote:
>
> Maildir is also *much* safer too. With mbox, a single error when writing
> email to the mailbox will likely corrupt *all* emails from that point on,
> so potentially every email in the mailbox. With maildir, a single error
> when writing will, a
On Dec 9, 2015 3:07 PM, "Anmol Dalmia" wrote:
>
>
> I wish to use the native LZMA library of Python 3.4 for faster performance
> than any other third-party packages. Is it possible to do so?
>
you can check the source of the lzma module; the main compression and
decompression algorithms are written in C.
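
A minimal example of the standard-library interface (just the convenience
functions; see the lzma docs for the incremental compressor classes):

import lzma

payload = b'some bytes worth compressing ' * 100
compressed = lzma.compress(payload)
restored = lzma.decompress(compressed)
assert restored == payload
print(len(payload), '->', len(compressed))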
Hi
I'm coming from this link (
https://groups.google.com/forum/#!topic/python-ideas/cBFvxq1LQHM), which
proposes to use long_to_decimal_string(), int_to_decimal_string() functions
for printing integers in different bases.
Now is there any way I can use such internal functions from pure Python
code,
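
For context, a pure-Python sketch of the kind of conversion those internal
helpers perform (printing an integer in an arbitrary base); this is only an
illustration, not the internal C routine itself:

DIGITS = '0123456789abcdefghijklmnopqrstuvwxyz'

def to_base(n, base):
    # build the digit string from least significant digit upwards
    if n == 0:
        return '0'
    sign = '-' if n < 0 else ''
    n = abs(n)
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return sign + ''.join(reversed(out))

print(to_base(255, 16))   # ff
print(to_base(255, 2))    # 11111111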
Thank you Chris,
later I decided that this would be cheating and I have to think about
another algorithmic approach.
Most of the competitive programming platforms give Python a time limit of
5 times the C/C++ time limit, but in many cases, like when the algorithms
are recursive (like segment
let's put an end to this.
from math import log

# simple one to understand. complexity: O(n*log(n))
def countzeros_va(n):
    count = 0
    for x in xrange(1, n + 1):
        while x % 5 == 0:
            count += 1
            x //= 5
    return count

# better approach. complexity: O(log(n))
# (reconstructed sketch: count the multiples of 5, 25, 125, ... up to n,
# since each power of 5 contributes one more factor of 5)
def countzeros_vb(n):
    count = 0
    p = 5
    while p <= n:
        count += n // p
        p *= 5
    return count
You can create a single heap with the primary key as the timestamp and the
secondary key as the priority, i.e. by creating a tuple and inserting the
elements into the heap as
(timestamp, priority)
If there is any underlying reason for creating 2 heaps, please mention it.
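
A minimal sketch of that suggestion with heapq (the task payloads below are
made up):

import heapq

heap = []
# tuples compare element by element, so the earliest timestamp wins,
# and priority breaks ties between equal timestamps
heapq.heappush(heap, (105.0, 2, 'later deadline'))
heapq.heappush(heap, (100.0, 5, 'lower priority, same deadline'))
heapq.heappush(heap, (100.0, 1, 'high priority, same deadline'))

while heap:
    timestamp, priority, task = heapq.heappop(heap)
    print(timestamp, priority, task)
# pops (100.0, 1, ...), then (100.0, 5, ...), then (105.0, 2, ...)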
On Fri, Jan 8, 2016 at 4:22 AM, Sven R. Kunze wrote:
suggestion.
>
> On 08.01.2016 14:21, srinivas devaki wrote:
>>
>> You can create a single heap with primary key as timestamp and
>> secondary key as priority, i.e by creating a tuple
>> insert the elements into the heap as
>> (timestamp, priority)
>
> I think I can
On Jan 11, 2016 12:18 AM, "Sven R. Kunze" wrote:
> Indeed. I already do the sweep method as you suggested. ;)
>
> Additionally, you provided me with a reasonable condition for when to do the
> sweep in order to achieve O(log n). Thanks much for that. I currently used
> a time-based approach (sweep each
On Jan 10, 2016 12:05 AM, "Paul Rubin" wrote:
>
> You could look up "timing wheels" for a simpler practical approach that
> the Linux kernel scheduler used to use (I think it changed a few years
> ago).
this is not related to the OP's topic.
I googled "timing wheels" and "Linux kernel scheduler
On Wed, Jan 13, 2016 at 4:50 PM, Cem Karan wrote:
>
> Is that so? I'll be honest, I never tested its asymptotic performance, I
> just assumed that he had a dict coupled with a heap somehow, but I never
> looked into the code.
>
I have just tested the code; the asymptotic performance is O(log(n))
@Sven
Actually you are not sweeping at all. As I remember from my last post,
what I meant by sweeping is periodically deleting the elements which
were marked as popped items.
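
A minimal sketch of that sweeping idea (lazy deletion on top of heapq; the
threshold of half the heap size is my own choice, not something from the
thread):

import heapq

class SweepingHeap:
    def __init__(self):
        self._heap = []
        self._removed = set()

    def push(self, item):
        heapq.heappush(self._heap, item)

    def mark_popped(self, item):
        # only mark; the physical delete happens in _sweep()
        self._removed.add(item)
        if len(self._removed) > len(self._heap) // 2:
            self._sweep()

    def pop(self):
        # skip over entries that were marked as popped
        while True:
            item = heapq.heappop(self._heap)
            if item in self._removed:
                self._removed.discard(item)
            else:
                return item

    def _sweep(self):
        self._heap = [x for x in self._heap if x not in self._removed]
        heapq.heapify(self._heap)
        self._removed.clear()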
Kudos on that __setitem__ technique; instead of using references to the
items like in HeapDict, it is brilliant of you to si
On Feb 1, 2016 10:54 PM, "Sven R. Kunze" wrote:
>
> Maybe I didn't express myself well. Would you prefer the sweeping
> approach in terms of efficiency over how I implemented xheap currently?
>
Complexity-wise, your approach is the best one of all that I have seen till
now.
> Without running some be
is so, is it just to make the code look simple???
--
On Feb 5, 2016 5:45 AM, "Steven D'Aprano" wrote:
>
> On Fri, 5 Feb 2016 07:50 am, srinivas devaki wrote:
>
> > _siftdown function breaks out of the loop when the current pos has a
> > valid parent.
> >
> > but _siftup function is not implemented in
On Fri, Feb 5, 2016 at 8:12 PM, Sven R. Kunze wrote:
> On 05.02.2016 02:26, srinivas devaki wrote:
> What do you think about our use-case?
>
Oh, the logic is sound: every element that we have inserted has to be popped.
We are spending some *extra* time in rearranging the elements only t
same.
I'm attaching the files.
do you have any idea why this happened?
On Fri, Feb 5, 2016 at 9:57 PM, Sven R. Kunze wrote:
>
> Can we do better here?
>
I don't know, I have to read the TAOCP (Knuth) article.
--
st level,
the optimization is occurring in that place.
Which means that the reason behind the heapq module's choice of _siftup
code is not related to this cause at all.
PS:
please copy the table to some text editor, for better visualization.
On Fri, Feb 5, 2016 at 11:12 PM, srinivas devaki wrote:
> wow
ily subclass with just using self._counts dict in your subclass, but still
I think it is good to introduce it as a feature in the library.
--
On Feb 8, 2016 5:17 PM, "Cem Karan" wrote:
>
> On Feb 7, 2016, at 10:15 PM, srinivas devaki wrote:
> > On Feb 8, 2016 7:07 AM, "Cem Karan" wrote:
> > > I know that there are methods of handling this from the client-side
> > > (tuples with unique counters come
er you've removed the element.
>
If you can do it with C pointers, then you can do it with Python's
references/mutable objects. :)
In the case of immutable objects, use a light mutable wrapper, or better,
use a list for performance.
--
)-(\d{2})-(\d{2})-(\d{2})-(\d{2})-(\d{4})',
'myfile-2015-02-09-19-08-45-4223')
In [37]: mat
Out[37]: <_sre.SRE_Match object; span=(0, 31), match='myfile-2015-02-09-19-08-45-4223'>

In [38]: mat.groups()
Out[38]: ('myfile', '2015', '02',
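
For completeness, a self-contained version of that session; the start of the
pattern is cut off above, so this reconstruction is an assumption, chosen to
match the span and groups shown:

import re

mat = re.match(r'(\w+)-(\d{4})-(\d{2})-(\d{2})-(\d{2})-(\d{2})-(\d{2})-(\d{4})',
               'myfile-2015-02-09-19-08-45-4223')
print(mat.span())     # (0, 31)
print(mat.groups())   # ('myfile', '2015', '02', '09', '19', '08', '45', '4223')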
On Feb 10, 2016 7:23 AM, "srinivas devaki" wrote:
>
>
> On Feb 10, 2016 6:56 AM, "Anthony Papillion" wrote:
> >
> > Hello Everyone,
> >
> > I am using datetime.now
ldren; each node can be one of two types, a file or a folder.
If you come to think about it, this is the most intuitive way to represent
the file structure in your program.
You can extract the directory name from a file object by traversing its
parents.
I hope this helps.
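
A minimal sketch of that node idea (the names and layout are illustrative,
not code from the thread):

class Node:
    def __init__(self, name, is_dir, parent=None):
        self.name = name
        self.is_dir = is_dir
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def path(self):
        # walk the parents to rebuild the directory part of the path
        parts = []
        node = self
        while node is not None:
            parts.append(node.name)
            node = node.parent
        return '/'.join(reversed(parts))

root = Node('', True)
docs = Node('docs', True, root)
readme = Node('readme.txt', False, docs)
print(readme.path())   # /docs/readme.txt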
--
__getitem__ 6
__setitem__ 6 6
But the output that i expected is
__setitem__ 4 6
__getitem__ 4
__getitem__ 6
__setitem__ 4 6
So isn't it counter-intuitive compared to all other Python operations,
like how we teach how Python performs a swap operation?
I just want to get a better idea around this.
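
A small sketch to observe the order yourself (a hypothetical Tracker
wrapper, not the OP's class): the whole right-hand side is evaluated first,
and then the assignment targets are bound left to right.

class Tracker(list):
    def __getitem__(self, index):
        print('__getitem__', index)
        return super().__getitem__(index)

    def __setitem__(self, index, value):
        print('__setitem__', index, value)
        super().__setitem__(index, value)

a = Tracker(range(10))
a[4], a[6] = a[6], a[4]
# prints, in order:
#   __getitem__ 6
#   __getitem__ 4
#   __setitem__ 4 6
#   __setitem__ 6 4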
es
or names and then put the value of the rhs in them.
As `a` is a name, the rhs reference is copied to `a`; `roots[a]` is a
reference to an object, so it is initialized with the reference from the
rhs.
Anyway, I got it, and all my further doubts are cleared from that compiled
code. I tried some oth
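
A tiny example of why the left-to-right target order matters when one
target is a name that is used in another target's subscript (the roots dict
here is made up, not code from the thread):

roots = {1: 2, 2: 2}
a = 1

# the right-hand side is evaluated first: (roots[1], 1) == (2, 1);
# then the targets are bound left to right, so a becomes 2 *before*
# roots[a] is assigned, and roots[2] (not roots[1]) gets overwritten
a, roots[a] = roots[a], a
print(a, roots)        # 2 {1: 2, 2: 1}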
> 'this is terrible'
> print \
> 'but still not incorrect
>
> Still terrible. But not quite as useless as a knee-jerk reaction
> might suggest.
>
> I actually hacked together a binary-diff something like this,
> emitting every hex-formatted b
So as the results are not much affected apart from __init__, I think you
should consider this.
Note: I'm not using collections.Counter because it is written in
Python, and from my previous experience it is slower than using
defaultdict for this kind of purpose.
PS: there are two er
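
A rough comparison sketch (a hedged micro-benchmark; absolute numbers
depend on the Python version, and newer CPython builds accelerate parts of
Counter in C, so the gap mostly shows up when incrementing element by
element):

import timeit
from collections import Counter, defaultdict

data = list(range(1000)) * 100

def with_counter():
    c = Counter()
    for x in data:
        c[x] += 1
    return c

def with_defaultdict():
    c = defaultdict(int)
    for x in data:
        c[x] += 1
    return c

print('Counter    ', timeit.timeit(with_counter, number=10))
print('defaultdict', timeit.timeit(with_defaultdict, number=10))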
You can just use the else case, which will work for all cases, but if your
npArray2 has such a pattern then the above code will perform better.
--
y/) {
> $tn = $1;
> }
> elsif (/release_req/) {
> print "$tn\n";
> }
> }
>
> Look at those numbers:
> 1 minute for python without precompiled REs
> 1/2 minute with precompiled REs
> 5 seconds with perl.
--
On Mon, Nov 2, 2015 at 1:22 PM, Steven D'Aprano wrote:
>
> So how come Python 3 has line buffered stderr? And more importantly, how can
> I turn buffering off?
>
> I don't want to use the -u unbuffered command line switch, because that
> effects stdout as well. I'm happy for stdout to remain buffe
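
One possible workaround (a sketch, not an answer from this thread): wrap
sys.stderr in a small proxy that flushes after every write, leaving stdout
untouched.

import sys

class Unbuffered(object):
    def __init__(self, stream):
        self.stream = stream

    def write(self, data):
        n = self.stream.write(data)
        self.stream.flush()
        return n

    def writelines(self, lines):
        self.stream.writelines(lines)
        self.stream.flush()

    def __getattr__(self, name):
        # delegate everything else (encoding, fileno, isatty, ...) to the real stream
        return getattr(self.stream, name)

sys.stderr = Unbuffered(sys.stderr)
print('this is flushed immediately', file=sys.stderr)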
On Fri, Nov 20, 2015 at 6:39 PM, Chris Angelico wrote:
> My crystal ball suggests that defaultdict(list) might be useful here.
>
> ChrisA
I used something similar to this for some problem on HackerRank;
anyway, I think this is what you want.
class defaultlist(object):
def __init__(self, facto
On Fri, Nov 20, 2015 at 11:58 PM, srinivas devaki wrote:
> def __str__(self):
>     if len(self.list) == 0:
>         return '(' + str(self.data) + ')[...]'
>     return ''.join(['(', str(self.data), ')[
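
The class in the quoted messages is cut off; as a rough illustration of the
general idea (not a reconstruction of that code), here is a list-like
container that grows on demand using a factory, in the spirit of
defaultdict(list):

class defaultlist(object):
    def __init__(self, factory):
        self.factory = factory
        self.data = []

    def _grow(self, index):
        # extend the backing list until the index exists
        while len(self.data) <= index:
            self.data.append(self.factory())

    def __getitem__(self, index):
        self._grow(index)
        return self.data[index]

    def __setitem__(self, index, value):
        self._grow(index)
        self.data[index] = value

    def __len__(self):
        return len(self.data)

d = defaultlist(list)
d[3].append('x')      # slots 0..3 are created automatically
print(len(d), d[3])   # 4 ['x']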