On 9 Dec 2013, at 14:51, Igor Elland wrote:
> Are you taking into account that 见, ≠, and 見 are composed character sequences,
> not individual unichars?
>
This method:
- (void)printString: (NSString *)line
{
    NSLog(@"%s \"%@\" has characters:", __FUNCTION__, line);
    // body reconstructed from context: log each UTF-16 code unit of the string
    for (NSUInteger i = 0; i < line.length; i++)
    {
        NSLog(@"  0x%04x", [line characterAtIndex: i]);
    }
}
On Dec 8, 2013, at 23:46 , Gerriet M. Denkmann wrote:
> NSString *b = @"见≠見"; // 0x89c1 0x2260 0x898b
So what are the results with:
> NSString *b = @"见";
> NSString *b = @"≠";
> NSString *b = @"見";
?
And what’s the current locale? Does specifying an explicit locale make any
difference?
On 9 Dec 2013, at 15:05, Quincey Morris
wrote:
> On Dec 8, 2013, at 23:46 , Gerriet M. Denkmann wrote:
>
>> NSString *b = @"见≠見";// 0x89c1 0x2260 0x898b
>
> So what are the results with:
>
>> NSString *b = @"见";
>> NSString *b = @"≠";
>> NSString *b = @"見";
> ?
>
> A
On Dec 9, 2013, at 00:22 , Gerriet M. Denkmann wrote:
> but I have great difficulties imagining a place on this world where = is the
> same as ≠.
> NSDiacriticInsensitiveSearch → "见≠見" (3 shorts) occurs in "见=見見" (4
> shorts) at {0, 3}
The latter suggests that the bar across the equals sign is being treated as a
diacritic: ≠ (U+2260) decomposes canonically to = (U+003D) followed by the
combining mark U+0338, so a diacritic-insensitive search ignores the bar and
matches plain =.
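The decomposition is easy to check outside Cocoa. Here is a small Python sketch (the stdlib `unicodedata` module standing in for Foundation's normalization; `strip_marks` is an illustrative helper, not a Cocoa API) showing why a diacritic-insensitive search treats ≠ like =:

```python
import unicodedata

# U+2260 NOT EQUAL TO decomposes canonically to "=" plus a combining mark.
nfd = unicodedata.normalize("NFD", "\u2260")
print([hex(ord(c)) for c in nfd])        # ['0x3d', '0x338']

# A diacritic-insensitive comparison effectively strips combining marks
# (general category Mn), which collapses "≠" to "=".
def strip_marks(s):
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if unicodedata.category(c) != "Mn")

# The strings from the thread: once the mark is ignored, the 3-unit
# needle is found inside the 4-unit haystack.
print(strip_marks("见≠見") in strip_marks("见=見見"))   # True
```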
I don't get the same result. 10.9.0, Xcode 5.0.2. I created an empty
command line utility, copied the code, and I get NSNotFound.
2013-12-09 02:50:19.822 Test[73850:303] main "见≠見" (3 shorts) occurs in
"见=見見" (4 shorts) at {9223372036854775807, 0}
On Mon, Dec 9, 2013 at 2:43 AM, Gerriet M. Denkmann wrote:
On 9 Dec 2013, at 16:00, Stephen J. Butler wrote:
> I don't get the same result. 10.9.0, Xcode 5.0.2. I created an empty command
> line utility, copied the code, and I get NSNotFound.
>
> 2013-12-09 02:50:19.822 Test[73850:303] main "见≠見" (3 shorts) occurs in
> "见=見見" (4 shorts) at {9223372036854775807, 0}
OK, you are right. Copy+paste didn't preserve the compatibility character.
Does look like a bug of sorts, or at least something a unicode expert
should explain.
On Mon, Dec 9, 2013 at 3:20 AM, Gerriet M. Denkmann wrote:
>
> On 9 Dec 2013, at 16:00, Stephen J. Butler
> wrote:
>
> > I don't get the same result.
Would converting each string to NFD (decomposedStringWithCanonicalMapping)
be an acceptable work around in this case?
On Mon, Dec 9, 2013 at 3:43 AM, Stephen J. Butler
wrote:
> OK, you are right. Copy+paste didn't preserve the compatibility character.
> Does look like a bug of sorts, or at least something a unicode expert
> should explain.
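As a side note on what NFD would actually do here, a quick check in Python (`unicodedata.normalize` standing in for `decomposedStringWithCanonicalMapping`) shows that decomposing does not make the combining bar go away; it only makes it explicit:

```python
import unicodedata

a = "见≠見"                            # U+89C1 U+2260 U+898B
nfd = unicodedata.normalize("NFD", a)
print(len(a), len(nfd))                # 3 4 -- "≠" splits into "=" + U+0338
print(nfd == "见=\u0338見")            # True

# U+0338 is still a combining mark after decomposition, so a
# diacritic-insensitive search would continue to equate "≠" with "=".
print(unicodedata.category("\u0338"))  # Mn
```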
Hi All,
I am taking video in my iPhone app at 1280x720, which turns out at about
40 MB/min. What I want is to drop the bit rate, not the size, using
AVAssetWriter/AVAssetReader. Is this possible, or even the right way of doing
this?
I am able to get what I want by dropping the bit rate in an exter
On 9 Dec 2013, at 16:53, Stephen J. Butler wrote:
> Would converting each string to NFD (decomposedStringWithCanonicalMapping) be
> an acceptable work around in this case?
No, it would not. I am changing all my rangeOfString calls to use
NSLiteralSearch, which does not have these strange effects.
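For comparison, Python's built-in string search is literal at the code-point level, which makes it a rough stand-in for NSLiteralSearch; with exact code-point matching the surprising hit goes away:

```python
haystack = "见=見見"    # U+89C1 U+003D U+898B U+898B
needle   = "见≠見"      # U+89C1 U+2260 U+898B

# Literal (code-point) search: U+2260 is simply not U+003D.
print(needle in haystack)     # False
print(haystack.find(needle))  # -1, the analogue of NSNotFound
```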
On 6 Dec 2013, at 5:46 pm, Graham Cox wrote:
> OK, I’ve now tried this approach, and it’s much cleaner in that it works with
> scrollers, with and without “responsive” scrolling (which appears to buffer
> its contents) and also zooming. Code follows. In this case, drawing overall
> is slower
On 9 Dec 2013, at 5:01 pm, Mike Abdullah wrote:
> Maybe a dumb question: How about using CATiledLayer?
Well, I haven’t explored it very much, and certainly not in this context, but
it seems to me that it’s solving a different problem. It sounds similar, but
it’s not actually useful for buffering.
I've been impressed with the thought you've put into this.
Probably a question you don't want at this point, because by now you're looking
for closure, but did you try different blend modes when calling DrawImage,
specifically the copy blend mode? I'm wondering if that might be faster as
hopefully
On Dec 9, 2013, at 7:47 AM, Graham Cox wrote:
> This last step is where it all falls down, because this one call, to
> CGContextDrawImage, takes a whopping 67% of the overall time for drawRect: to
> run, and normal drawing doesn’t need this call (this is testing in a ‘light’
> view, but never
On 9 Dec 2013, at 5:36 pm, Jens Alfke wrote:
> So if you can avoid it, you shouldn’t be doing your own rendering into
> images. I haven’t been following the details of this thread, but my guess is
> you’ll get better performance by drawing the tiles directly to the view, just
> setting a clip
On 9 Dec 2013, at 5:45 pm, David Duncan wrote:
> If you have a buffer to draw into, then you can easily slice that buffer to
> use between multiple graphics contexts, but you will fundamentally have to
> draw them all into the source context at the end.
I wasn’t able to figure out how to do
On Dec 9, 2013, at 9:50 AM, Graham Cox wrote:
> By “slice the buffer”, I assume you mean set up a context on some region of
> that buffer, but when I tried to do that, CGBitmapContextCreate[WithData]
> would not accept my bytesPerRow value because it was inconsistent with the
> ‘width’ value,
On Dec 9, 2013, at 8:45 AM, David Duncan wrote:
> One major impediment to this is that you cannot use the same graphics context
> between multiple threads, and as such using the graphics context that AppKit
> gives you forces you into a single threaded model.
Ah, interesting.
What’s the slow
> On Dec 9, 2013, at 8:50 AM, Graham Cox wrote:
>
>
>> On 9 Dec 2013, at 5:45 pm, David Duncan wrote:
>>
>> If you have a buffer to draw into, then you can easily slice that buffer to
>> use between multiple graphics contexts, but you will fundamentally have to
>> draw them all into the source context at the end.
> I think I’ve explored this as far as I can go. Here’s my wrap-up, for what
> it’s worth to anyone. Not a lot, I expect.
>
> The conclusion is, I don’t think it can be done with the current graphics
> APIs with any worthwhile performance. Here’s my summary of why that is…
> … This last step
On 9 Dec 2013, at 7:03 pm, Seth Willits wrote:
> If all the drawRect is doing is making a single call to CGContextDrawImage
> then it should rightly be 100% of the time, so that measure isn’t interesting
> on its own. :)
Yes, that’s true. It’s hard to be totally objective, because running
I
>> The single CGContextDrawImage in drawRect: should end up essentially being a
>> memcpy which will be ridiculously fast
The bottleneck for image blitting is copying the pixels from CPU RAM to GPU
texture RAM. This is often a bottleneck in high-speed image drawing, and I know
that Quartz goes
I have a program that solves problems that are very computationally intense. I
divide up the work and create an NSOperation for each segment. Then I put the
operations in NSOperationQueue, and start the queue. Expecting the job to take
three or four hours, I go to dinner.
When I return, and
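The fan-out pattern described here is language-neutral; as a rough sketch (Python's `concurrent.futures` standing in for NSOperationQueue, with illustrative segment data and function names):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_segment(segment):
    # stand-in for the computationally intense work on one segment
    return sum(segment)

segments = [range(0, 1000), range(1000, 2000), range(2000, 3000)]

with ThreadPoolExecutor() as queue:              # ~ NSOperationQueue
    futures = [queue.submit(solve_segment, s) for s in segments]
    results = [f.result() for f in futures]      # block until all finish

print(sum(results))   # 4498500
```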
On Dec 9, 2013, at 3:17 PM, Jim Elliott wrote:
> I have a program that solves problems that are very computationally intense.
> I divide up the work and create an NSOperation for each segment. Then I put
> the operations in NSOperationQueue, and start the queue. Expecting the job
> to take three or four hours, I go to dinner.
There are also APIs to disable the new “app nap” power-saving feature in OS X
10.9 — look at NSProcessInfo.
—Jens
___
Cocoa-dev mailing list (Cocoa-dev@lists.apple.com)
Please do not post admin requests or moderator comments to the list.
Contact the m
>> The single CGContextDrawImage in drawRect: should end up essentially being a
>> memcpy which will be ridiculously fast, as long as your contexts/backing all
>> use the same color space and bitmap layout as the view’s context’s backing.
>> Definitely make sure they’re using the same color space.
On 9 Dec 2013, at 23:32, Jens Alfke wrote:
> There are also APIs to disable the new “app nap” power-saving feature in OS X
> 10.9 — look at NSProcessInfo.
My understanding is this is basically a wrapper around the lower level Power
Assertion APIs, which have been extended to give App Nap a li
On Dec 9, 2013, at 2:27 AM, Damien Cooke wrote:
> I am taking video on in my iPhone app at 1280x720 this turns out at about
> 40Mb/min What I want is to drop the bit rate not the size using
> AVAssetWriter/AVAssetReader, is this possible or even the right way of doing
> this?
Yes. You can sp
I occasionally get the following message when I use the Address Book API to add
a record to my sharedAddressBook, which then causes the add to fail:
is being ignored because main executable
(/System/Library/Frameworks/AddressBook.framework/Resources/AddressBookSync.app/Contents/MacOS/Addr