Hi,
I was wondering what is the nicest way to do filter-not in Clojure. Here
are 3 expressions:
user=> (filter #(apply = %) '([1 2] [1 1]))
([1 1])
user=> (filter #(apply not= %) '([1 2] [1 1]))
([1 2])
user=> (filter #(not (apply = %)) '([1 2] [1 1]))
([1 2])
The first one is just basic filtering
Hi,
Thanks a lot for hints. "remove" was what I needed.
This
(filter (complement #(apply = %)) '([1 2] [1 1]))
also looks very clean now.
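For the archives, a quick REPL check of both variants (evaluated in a plain REPL):

```clojure
;; remove is the built-in complement of filter:
(remove #(apply = %) '([1 2] [1 1]))
;; => ([1 2])

;; the explicit complement version gives the same result:
(filter (complement #(apply = %)) '([1 2] [1 1]))
;; => ([1 2])
```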
Regards,
Andy
On Thu, Aug 21, 2014 at 1:07 PM, Daniel Solano Gómez
wrote:
> On Thu Aug 21 13:01 2014, Andy C wrote:
Hi,
I have a short question: why does map build up a LazySeq instead of the input
collection, as seen below:
user=> (type (map #(mod % 3) #{3 6}))
clojure.lang.LazySeq
user=> (type (map #(mod % 3) '(3 6)))
clojure.lang.LazySeq
user=> (type (map #(mod % 3) [3 6]))
clojure.lang.LazySeq
user=> (type (map
On Fri, Feb 7, 2014 at 7:53 PM, Atamert Ölçgen wrote:
> Why should it build a concrete result?
>
I realize the benefits of using LazySeq and do not have a strong opinion,
besides that things ought to be consistent. Putting practical advantages and
a good default behaviour aside, I was wondering i
user=> (map #(mod % 3) #{3 6})
(0 0)
user=> (set (map #(mod % 3) #{3 6}))
#{0}
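A small addendum: if the concrete collection type matters, the lazy seq can be poured back into the source type with into (a sketch, same REPL session assumed):

```clojure
;; map always returns a lazy seq; use into to get a concrete collection back:
(into #{} (map #(mod % 3) #{3 6}))
;; => #{0}

(into [] (map #(mod % 3) [3 6]))
;; => [0 0]
```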
--
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient
I do perceive sets, lists, and vectors as atoms which are indivisible (well,
this is not true, but this is the popular meaning) from a semantics standpoint.
Therefore map is just a function which processes them as a whole, again from
a semantics point of view. Implementation and laziness should not matter
really a
I actually like the laziness by default, but as you suggest, I wish there were
a way to switch it on/off for blocks of code (rather than a compiler
option). The Scala guys did some research, and in most practical cases Lists
are very short, hence they are not lazy and are evaluated at once. Just an
interesting
> Following your intuition, what would you expect from the following?
> > (map + [1 3 5] '(2 4 6))
> # => ?
>
It only gets worse, as the result below should be undefined (using the
classic set definition):
user=> (map + #{0 1} #{0 1})
(0 2)
user=> (map + #{1 0} #{1 0})
(0 2)
On Sat, Feb 8, 2014 at 12:06 AM, Sean Corfield wrote:
> But you're misunderstanding what map does: it converts its collection
> arguments to _sequences_ and then it processes those sequences. Map
> doesn't operate on sets, or vectors, or maps, only on sequences.
>
Your assertion that I "am misunderstanding"
> Every persistent collection in Clojure supports conversion to the sequence
> of items. This is clearly documented in the official docs and there is no
> surprise here.
Would you mind pointing me to the piece of documentation that describes which
order seq chooses when converting a set? (I honestly tried to
First, thanks everybody for explanations of design decision behind map and
collections. I should in fact change subject to seq semantics ;-).
For me the bottom line is that while I do not care about order so much, I
can still count on the seq function producing consistent sequences. Or
wait a
On Sat, Feb 8, 2014 at 1:46 PM, Jozef Wagner wrote:
> Two collections equivalent by their values may easily have a different
> order of their items.
>
It all boils down to this:
is it possible to have two clojure.lang.PersistentHashSet with identical
values (in mathematical sense) but producing
> user> (= s1 s2)
> true
> user> (= (seq s1) (seq s2))
> false
>
Thx. If a=b then f(a) must = f(b). Something is broken here.
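One way to reproduce this in a fresh REPL is to compare a hash set with a sorted set holding the same values (a sketch; the seq order of a hash set is an implementation detail, so the exact hash order may vary between Clojure versions):

```clojure
(def s1 #{1 2 3})           ;; hash set
(def s2 (sorted-set 1 2 3)) ;; sorted set, same values

(= s1 s2)
;; => true, sets are equal by value

(seq s2)
;; => (1 2 3), sorted order is guaranteed

(seq s1)
;; hash order, not guaranteed to match (seq s2)
```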
I can assure all of you that it is very uncomfortable for a newcomer with a
goofy nick to just come in and say things are broken LOL. So at that point
I have two choices:
1) as suggested, find another programming language, but that would mean that
I would have to erase my Clojure tattoo (very painful
On Sun, Feb 9, 2014 at 4:46 AM, Michał Marczyk wrote:
The Contrib library algo.generic provides a function fmap which does
> preserve the type of its input.
>
Thanks for the pointer.
> So, these are some of the available conceptual arguments. There is
> also a rather convincing practical argume
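For reference, a minimal sketch of fmap usage, assuming org.clojure/algo.generic is on the classpath:

```clojure
(require '[clojure.algo.generic.functor :refer [fmap]])

;; unlike map, fmap preserves the collection type of its input:
(fmap inc #{1 2 3})
;; => #{2 3 4}

(fmap inc [1 2 3])
;; => [2 3 4]
```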
Hi,
There are many performance benchmarks showing that compiled CLISP is almost
as fast as C++, much as Clojure is almost as fast as Java.
Being a dynamically typed language though, I wonder how it is possible. Is
it because the compiler is able to sort out actually used types and
assemble appropriate byte code or the c
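My (possibly incomplete) understanding is that on the Clojure side this is where type hints come in: they let the compiler emit a direct method call instead of going through reflection, e.g.:

```clojure
;; ask the compiler to warn whenever it falls back to reflection:
(set! *warn-on-reflection* true)

;; ^String hints the argument, so (.length s) compiles to a direct
;; String.length() call instead of a reflective lookup:
(defn len [^String s]
  (.length s))

(len "hello")
;; => 5
```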
Thanks for the insight and link to http://benchmarksgame.alioth.debian.org.
WRT dynamically typed languages, I have some 5 years of experience with Python,
circa the 2.4ish timeframe. I remember that practical raw speed was not that
bad, but still on average about 10 times slower than C++. Good enough
On Tue, Feb 18, 2014 at 11:38 PM, Devin Walters wrote:
> You need to use the lein plugin for no.disassemble, not the dependency.
> The README explains how.
>
Thanks - now I can see the disassembled code - quite neat. I misread "do not use
this way" as referring to the "following" as opposed to the "above" (being not a
> The OP almost certainly intended "CLISP" to mean "Common Lisp".
>
I recall it now - it was Allegro CL which somebody demoed to me almost ten
years ago. I wish I had started learning Lisp back then, yet I cannot believe
that Clojure, which I am learning now (and Scala, which I am actively using),
did not exist at the time.
Hi,
So the other day I came across this
presentation: http://www.infoq.com/presentations/top-10-performance-myths
The guy seems smart and to know what he talks about; however, when at
0:22:35 he touches on the performance (or lack thereof) of persistent data
structures on multi-core machines, I feel
On Thu, Mar 13, 2014 at 11:24 AM, Timothy Baldridge wrote:
> I talked to Martin after a CodeMesh, and had a wonderful discussion with
> him about performance from his side of the issue. From "his side" I mean
> super high performance.
>
[...]
Hi Tim,
Thanks for explaining the context of Martin'
Maybe one day this idea http://en.wikipedia.org/wiki/Lisp_machine will come
back, I mean in a new form ..
In any case, I think that it would be great to see some performance
benchmarks for STM
A.
From what I understand, a single core can easily saturate a memory bus. At
the same time, L2 and L3 caches are tiny compared to the GBs of memory in
today's systems, yet growing them does not necessarily help either, due to
larger latencies. It all limits the number of practical applications which
could real
> Today, data is scattered
> everywhere in a huge memory,
> its location changes frequently,
> code can be dynamically compiled,
> get loaded by small chunks,
>
It describes my case actually - I am loading about 2-8GB of "stuff" into
memory and doing tons of mapping, grouping by and filtering
> In my personal experience I cannot get within 10X the throughput, or
> latency, of mutable data models when using persistent data models.
>
Hi Martin,
Thanks for finding this thread :-). Let me ask a reversed question. Given
you come from a persistent data model where code remains reasonably sim
Hi,
trying to convert some Scala code I have into Clojure and would like to ask
for feedback. The following code is supposed to scan a sequence and combine
min, max, sum and count:
user=> (defn combine [x y] (let [[mi ma su co] x] [(min mi y) (max ma y) (+ su y) (+ 1 co)]))
#'user/combine
user=> (defn m
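Since my main function got cut off above, here is a sketch of how combine can be driven by reduce; the seed values are my own choice (the first element for min/max) rather than the original code:

```clojure
(defn combine [x y]
  (let [[mi ma su co] x]
    [(min mi y) (max ma y) (+ su y) (+ 1 co)]))

;; seed the accumulator from the first element, then fold in the rest:
(defn stats [coll]
  (reduce combine
          [(first coll) (first coll) (first coll) 1]
          (rest coll)))

(stats [3 1 4 1 5])
;; => [1 5 14 5]  (min, max, sum, count)
```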
Thx for hints.
> As for the main function, my tendency would be to avoid taking cases on
> the number of args
>
Is that due to performance implications, I mean that it takes longer to
check the cases every time? Or is it just style? BTW, I followed
https://github.com/clojure/clojure/blob/master/src/clj/
> As a co-author of the reactive manifesto I'd like to point out that
> "reactive" can be considered a superset of "async". Good reactive
> applications are event driven and non-blocking. They are also responsive,
> resilient, and scalable which async can help with but does not prescribe.
>
> What
>
>
> I've never heard of "imperative model". I'm aware of imperative
> programming. Can you expand on what you mean?
>
>
I meant "mutable data model". Sorry for mixing up terms.
>
> http://blog.codinghorror.com/separating-programming-sheep-from-non-programming-goats/
>
> Hope this helps clarify
On Tue, Mar 18, 2014 at 11:06 AM, Raoul Duke wrote:
> > some sort of FSM. Perhaps concurrency could be modeled using FSMs, but I
> do
> > not believe it is always a simple transition.
>
> http://www.cis.upenn.edu/~stevez/papers/LZ06b.pdf
>
I like FSMs, but they do not compose well.
A.
So, the following test puzzles me. Not because it takes virtually the same
time (I know that Fork/Join is not cheap and memory is probably the biggest
bottleneck here), but because I do not get why map (as opposed to r/map)
uses all 8 cores on my MacBook Pro. All of them seem to be running
according
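For context, this is the shape of the comparison I ran (a sketch, not the exact benchmark; r/fold parallelises only over vectors and hash maps, and the numbers obviously depend on hardware):

```clojure
(require '[clojure.core.reducers :as r])

(def data (vec (range 1000000)))

;; sequential, single-threaded:
(reduce + (map #(mod % 3) data))

;; Fork/Join based, may use all cores (vector input is required):
(r/fold + (r/map #(mod % 3) data))
;; both return the same sum
```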
On Wed, Mar 19, 2014 at 11:14 AM, Raoul Duke wrote:
> > I like FSMs, but they do not compose well.
>
> some have argued for generative grammars that generate the fsm,
> because it is generally easier to compose grammars, and then generate
> the final fsm. iiuc.
>
I thought about it too but compo
Hi,
So this is a follow-up. I claimed that 1 CPU core can saturate the memory
bus, but it turns out I was wrong, at least to some extent. Driven by
curiosity, I decided to do some measurements and test my somewhat older MBP
with a 2.2GHz Intel Core i7. While it obviously all depends on the hardware,
I thought
This is a slightly different result, as this time I measure elapsed time
(see appendix, and excuse the not so nice code) as opposed to CPU time.
Results are similar (unless you have more processes than cores). I am
planning to release the code to github soon.
+--++---+
| # of
> Memory access patterns make a huge difference to memory throughput. I've
> explored this in some detail in the following blog.
>
>
> http://mechanical-sympathy.blogspot.co.uk/2012/08/memory-access-patterns-are-important.html
>
>
Thanks for sharing. From the Clojure perspective and using reducers,
On Wed, Jan 22, 2014 at 5:29 PM, John Chijioke wrote:
> Not true. More RAM, more power.
>
Why?