As before, I don't follow all of this. Computer science uses math, but nothing 
like this, and I didn't need anything past Calculus 1. Still, I'd like to see 
the API, and (most importantly) to know whether I can use all this in an iOS or 
Mac app written in Swift or Objective-C. If everything is condensed into a C or 
C++ library, I don't see why such integration couldn't happen, but I have no 
idea whether the languages you're using would be compatible. I realize you're 
focused on web-based apps at the moment, so we might be on two different 
wavelengths here.
> On Jan 11, 2015, at 8:52 AM, Yuma Antoine Decaux <[email protected]> wrote:
> 
> Hi Alex,
> 
> I have the JS Web Audio API classes ready, but after reading the openal.audio 
> Python module, I think I can save a lot of processing and do everything in one 
> language (save XML and Lua for the World of Warcraft interface).
> 
> http://pythonhosted.org/PyAL/audio.html#module-openal.audio
> 
> 
> To answer your question about the buffer channels:
> 
> The buffer maximum is 16 MB. A lot of sounds can be shrunk, blended, and 
> refactored using Fourier transforms. We can also apply buffer-queuing 
> algorithms so that active sound sources and their positional information 
> are truncated to the right byte size, considering that most or all of our 
> computers are Intel x86, and many of us run 64-bit. With 16 buffer 
> channels, we have approximately 256 MB of sound clips and waveforms 
> generated on the fly that can be queued using parallel algorithms. I was 
> thinking of using Python's select() module for this purpose: it listens and 
> automatically fills a queue, which can then be dispatched to each individual 
> buffer.
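As a rough illustration of the dispatch idea, here is a minimal Python sketch. The Clip and BufferPool names are hypothetical, and a real engine would hand each occupied slot to an OpenAL buffer rather than keep it in a Python list:

```python
import queue
from dataclasses import dataclass

MAX_BUFFER_BYTES = 16 * 1024 * 1024  # 16 MB per buffer, as discussed
NUM_BUFFERS = 16                     # OpenAL-style channel limit

@dataclass
class Clip:
    """Raw sample data plus the positional/volume metadata it travels with."""
    data: bytes
    position: tuple  # (x, y, z) relative to the listener
    gain: float

class BufferPool:
    """Dispatch queued clips to the first free buffer slot."""
    def __init__(self):
        self.pending = queue.Queue()
        self.buffers = [None] * NUM_BUFFERS  # None means the slot is free

    def enqueue(self, clip):
        if len(clip.data) > MAX_BUFFER_BYTES:
            raise ValueError("clip exceeds the 16 MB buffer limit")
        self.pending.put(clip)

    def dispatch(self):
        """Fill free slots from the queue; call once per audio tick."""
        for i, slot in enumerate(self.buffers):
            if slot is None and not self.pending.empty():
                self.buffers[i] = self.pending.get()

    def release(self, i):
        self.buffers[i] = None  # playback finished, slot is free again
```

A select()-based version would do the same thing, except that dispatch() would wake up on socket readiness instead of being polled every tick.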
> 
> By quick calculation, this is how I see it: each observer (a character in the 
> game) has three ranges (long, medium, short). Anything long-range dithers in 
> the perceptive field anyway, so those sources can be blended through the queue 
> and played back as a single long-range pass, or pre-recorded. Making a 
> simulation first and then recording it can also work. Mid range has more 
> definition, but its channel count is restricted to 5 sources. The remaining 
> 10 channels can carry various sources in the player's proximity. I can even 
> hypothesise a cheat that filters the types of sounds we want to hear.
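The zone budget described above can be sketched as a simple distance-based allocator. The distances and names below are invented for illustration; only the channel counts (10 near, 5 mid, one blended long-range pass, totalling 16) come from the text:

```python
import math

# Hypothetical zone table: (name, max distance, channel budget).
# Anything beyond the last zone, or overflowing a budget, is collected
# for the single blended long-range pass.
ZONES = [("short", 15.0, 10), ("mid", 60.0, 5)]

def allocate(sources, listener):
    """Assign each source (a dict with a 'pos' tuple) to the nearest
    zone with room; leftovers go into one blended long-range pass."""
    def dist(src):
        return math.dist(src["pos"], listener)

    assigned = {name: [] for name, _, _ in ZONES}
    long_range = []
    for src in sorted(sources, key=dist):  # nearest sources claim slots first
        d = dist(src)
        for name, limit, channels in ZONES:
            if d <= limit and len(assigned[name]) < channels:
                assigned[name].append(src)
                break
        else:
            long_range.append(src)  # blended into one pass later
    return assigned, long_range
```

Sorting by distance first means the perceptually closest sources always win a dedicated channel, which matches the idea that long-range detail dithers anyway.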
> 
> In regard to emulating higher channel counts, I think it will again have to 
> be math-based. Say you have a willow tree in front of you. There are about 
> 35-odd branches, each with smaller branches and their leaves. Clumps of 
> leaves with small rustle signatures (this is just function generation 
> into the buffer) can be blended before being sent to the buffer: a kind of 
> premix before it gets out into the world. Again, bijectivity is super 
> important, so we can trace back and edit the raw data as it comes. Using the 
> select module allows automatic dispatch to the first available buffer, since 
> each buffer block is the raw data plus its positional, volume, and other 
> information.
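The premix step might look like this. premix() is a hypothetical helper, not part of any library mentioned here; it averages equal-length 16-bit mono clips so the blend cannot clip:

```python
import array

def premix(clips, scale=None):
    """Blend equal-length 16-bit mono clips (array.array('h')) into one
    clip by averaging samples, so many rustles occupy one buffer slot."""
    n = len(clips)
    if scale is None:
        scale = 1.0 / n  # average to keep the sum inside 16-bit range
    length = len(clips[0])
    mixed = array.array("h", [0] * length)
    for i in range(length):
        total = sum(clip[i] for clip in clips)
        mixed[i] = int(total * scale)
    return mixed
```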
> 
> I don't think this will be much of a problem, though it does show a 
> technical restriction.
> 
> 
> 
> 
> Yuma Antoine Decaux
> "Light has no value without darkness"
> Mob: +612102277190
> Skype: Shainobi1
> twitter: http://www.twitter.com/triple7
> 
> 
> 
> 
>> On 11/01/2015, at 2:37 pm, Alex Hall <[email protected]> wrote:
>> 
>> I won't pretend to understand all of this. My degree is in computer science, 
>> not higher mathematics or engineering. Still, I'm intrigued, and would love 
>> to hear a practical example. To keep things on topic, would this library be 
>> usable from a Swift or Objective-C app for iOS or OS X? If so, can you give 
>> a real-world example of how? I understand representing things as sounds, but 
>> how would it handle in a real app? That is, what about loading/managing 
>> sound buffers (you can only have 16 at a time in OpenAL), handling stereo 
>> sound samples, generating sounds on the fly instead of relying on recorded 
>> audio, applying real-time filters or effects, managing occlusions and 
>> distance roll-offs, that kind of thing? Is there a mapping engine, where the 
>> programmer can lay out the "world" in some kind of XML or JSON format? Have 
>> I missed the point entirely?
>>> On Jan 10, 2015, at 10:31 PM, Yuma Antoine Decaux <[email protected]> wrote:
>>> 
>>> I’ll get into more detail on the 3D sound part.
>>> 
>>> It uses a node system, as mentioned earlier, to plug, unplug, blend, or 
>>> ratio-fit one or more nodes, which can be filters, user-set parameters, or 
>>> daisy-chained hierarchies of sound buffers. So imagine you call a tree 
>>> instance from my library. It uses phi and pi to generate the fractal links 
>>> down to the leaf nodes. Each leaf node has physical properties that follow 
>>> its parent nodes with a coefficient, or a scalar value spread along the 
>>> entire tree. Each node is a sound buffer or a set of sound buffers. 
>>> Collision detection is done via matrix identification and eigen matrices. 
>>> Now set a wind particle object (full of bounding boxes) traversing the 
>>> tree. Each collision triggers the sound of a rustle, in real 3D position 
>>> relative to the user's position.
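A hedged sketch of the node-and-coefficient idea, with invented names rather than the library's actual API: each node carries a local scalar, and its effective value is the product of the coefficients on the path up to the root, so a value set at the trunk spreads down the whole tree.

```python
class SoundNode:
    """Tree node holding a scalar coefficient inherited multiplicatively
    from its parents; in the real engine each node would also reference
    one or more sound buffers."""
    def __init__(self, name, coefficient=1.0, parent=None):
        self.name = name
        self.parent = parent
        self.local = coefficient
        self.children = []
        if parent:
            parent.children.append(self)

    @property
    def coefficient(self):
        """Effective scalar: product of local coefficients up to the root."""
        c, node = 1.0, self
        while node:
            c *= node.local
            node = node.parent
        return c

    def on_collision(self):
        """E.g. a wind particle's bounding box hit this node: emit a
        rustle scaled by the node's effective coefficient."""
        return ("rustle", self.name, self.coefficient)
```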
>>> 
>>> Now take these tree structures, use a spherical shape (revolving the 
>>> nGon I mentioned earlier around its y axis), and pass it through a deformer 
>>> (which changes the scalar values of the vectors within the sphere). This 
>>> deformer can use a set of physics class objects such as inertia, parabolic 
>>> deviations, swirls: you name the geometric shape, there's a math formula 
>>> for it. Consider that each vector or vertex is a bird in a flock. 
>>> Apply an index to it, and use this other swarm algorithm I studied to 
>>> create an array of bees, birds, fish, whatever. Each, when colliding with 
>>> another, will have a behavior generator using, again, scalar values. I 
>>> can't stress enough the utility of matrices and transformations for things 
>>> that go beyond just shapes.
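Under the assumption that a deformer maps each vertex to a scalar multiple of itself, the sphere-plus-deformer pipeline might be sketched like this; sphere_vertices(), deform(), and swirl() are invented names, not the author's code:

```python
import math

def sphere_vertices(rings, segments, radius=1.0):
    """Generate vertices by revolving a half-circle around the y axis."""
    verts = []
    for i in range(1, rings):
        theta = math.pi * i / rings           # polar angle
        for j in range(segments):
            phi = 2 * math.pi * j / segments  # azimuth
            verts.append((radius * math.sin(theta) * math.cos(phi),
                          radius * math.cos(theta),
                          radius * math.sin(theta) * math.sin(phi)))
    return verts

def deform(verts, scalar_fn):
    """Scale each vertex by the per-vertex scalar the deformer returns."""
    return [tuple(c * scalar_fn(v) for c in v) for v in verts]

def swirl(v):
    """Example deformer: bulge vertices outward based on their height."""
    return 1.0 + 0.3 * v[1]
```

Swapping swirl() for an inertia or parabolic function changes the shape without touching the rest of the pipeline, which is the point of keeping the deformer a plain scalar function.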
>>> 
>>> So I’ve gone way past my initial goal, and think this can be very useful.
>>> 
>>> I want some help with some of the scripts, to complete them. I’m fine 
>>> paying for it, but the person needs to not only like the idea, but actually 
>>> believe in it.
>>> 
>>> Anyway, here’s my two cents 
>>> 
>>> 
>>> 
>>> Yuma Antoine Decaux
>>> "Light has no value without darkness"
>>> Mob: +612102277190
>>> Skype: Shainobi1
>>> twitter: http://www.twitter.com/triple7
>>> 
>>> 
>>> 
>>> 
>>>> On 10/01/2015, at 11:18 pm, Alex Hall <[email protected]> wrote:
>>>> 
>>>> Can you explain a bit more what this library is doing and how it might be 
>>>> used? When you said 3d sound, I at first thought you meant something to 
>>>> supplement or replace OpenAL, but that's clearly not the case. I'm not 
>>>> clear on just what this does. Thanks.
>>>>> On Jan 10, 2015, at 2:34 AM, Yuma Antoine Decaux <[email protected]> wrote:
>>>>> 
>>>>> Hi All,
>>>>> 
>>>>> I am currently working on a 3D sound engine. I have so far done the 
>>>>> following:
>>>>> 1-A node structure for extracting tag and Lua function calls and creating 
>>>>> a hierarchy of nodes whose parent node is the UI
>>>>> 2-A 3D sound library connecting to the JS Web Audio API, using the node 
>>>>> system
>>>>> 3-A parser toolset to create arrays of configurations between scripts and 
>>>>> languages
>>>>> 4-A geometric 3D volume matrix with the node hierarchy class used as a 
>>>>> secondary process
>>>>> 5-A parallel processing class to send socket information between 
>>>>> nodes
>>>>> 6-A socket-distribution (select()) daisy-chain communication layer
>>>>> 7-A 3D prototype of an SSD-based sound-processing CPU that stores all the 
>>>>> information on the SSD as static memory. I have been 3D prototyping for 
>>>>> about 15 years. I demand elegance and functionality in design, as much as 
>>>>> efficient memory management of blocks and sectors. I am a programmer.
>>>>> 
>>>>> All the scripts are doing exactly what they are supposed to, except for 
>>>>> the 3D matrix layer, which I am currently working on. However, I have done 
>>>>> all primitives, transforms, and rotations using matrices. I'm about to 
>>>>> get back to completing the nGon class.
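As one small illustration of rotations done with matrices (a generic example, not the author's actual classes): a quarter turn about the y axis sends +x to -z.

```python
import math

def rot_y(angle):
    """3x3 rotation matrix about the y axis, angle in radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c,   0.0, s],
            [0.0, 1.0, 0.0],
            [-s,  0.0, c]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]
```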
>>>>> 
>>>>> This project started as a spark when I saw a tweet about a blind player 
>>>>> on World of Warcraft.
>>>>> 
>>>>> Now it has turned out to be much bigger.
>>>>> 
>>>>> Everything is written using standard APIs such as Python and JS modules. 
>>>>> I am trying to complete this accessible World of Warcraft layer, which I 
>>>>> will release as a GNU-licensed platform that does not itself use World of 
>>>>> Warcraft. I don't understand why Blizzard hasn't done this. But it has 
>>>>> given me the opportunity to see exactly what is happening in the system 
>>>>> architecture, and to be an architect again, though I had lost that 
>>>>> capacity once I lost my vision.
>>>>> 
>>>>> Will anyone be so cool as to send me a reply with “#vipWOW” as subject?
>>>>> 
>>>>> I really hope that this ideal I have been carrying for the past 6 years, 
>>>>> dedicated to programming and mathematics that I did not use to apply so 
>>>>> frequently, can grow into a larger community, with the help of an 
>>>>> independent hire I hope someone will accept to be. I cannot afford 
>>>>> thousands per month, but I have laid down the architecture and the 
>>>>> working subsystems, and I am working through each all the way to the 
>>>>> main class.
>>>>> 
>>>>> This effort, I have come to realise, demands far more hands than my blind 
>>>>> work on the computer can handle, though I handle Vim quite well and 
>>>>> efficiently. But at some point it also needs to be accessible to the 
>>>>> level I want.
>>>>> 
>>>>> If you are ready to experience something seriously cool (network 
>>>>> connectivity, a private test server, wiki, calendars and contacts, VNC 
>>>>> access, SSH, FTP; redundancy is not there yet, but we're working on an 
>>>>> Arch Linux installation), with an extra dimension (tactile), please do 
>>>>> contact me. Let's build an order of classes that will standardise many 
>>>>> aspects of our experience on the computer as blind coders, and be 
>>>>> programmers for programmers in facilitating our own experience.
>>>>> 
>>>>> Sincerely,
>>>>> 
>>>>> Antoine Decaux
>>>>> twitter: triple7
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> You received this message because you are subscribed to the Google Groups 
>>>>> "MacVisionaries" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>>>> email to [email protected].
>>>>> To post to this group, send email to [email protected].
>>>>> Visit this group at http://groups.google.com/group/macvisionaries.
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>> 
>>>> 
>>>> --
>>>> Have a great day,
>>>> Alex Hall
>>>> [email protected]
>>>> 
>>> 
>>> 
>> 
>> 
>> --
>> Have a great day,
>> Alex Hall
>> [email protected]
>> 
> 
> 


--
Have a great day,
Alex Hall
[email protected]

