drmatt wrote: > But really, streaming audio is trivially easy. The difference between a > gen 1 i5 aka nehalem and a g4 i5 aka haswell in this context is > essentially zero. A pi B+ will do this job. Hell, even a SBT processor > or an embedded nas processor is capable. > > The focus is on windows thread locking semantics i think, so multi core > is suspect. And i5s don't support hyperthreads, so these are all real > independent cores.
Some observations:

- The problem happens on Windows.
- The problem only happens when live HLS/DASH AAC has to be transcoded (i.e. it works OK on Touch, Radio and Squeezelite but not on Classic, Boom or Squeezeplay/Joggler).
- The problem only happens with live chunked HTTP - Listen Again is OK, and normal http/aac (e.g. SomaFM) transcoded is OK.
- A similar problem (is it the same?) can be "cured" on slower (single/dual core) Windows systems by "live delay", which lets more data be prefetched.
- It happens for both UK (320kbps) and non-UK (96kbps) users, AFAICT.

There are three issues to consider:

1. Windows requiring socketwrapper, compared to the simpler Linux setup with better pipe handling.
2. Multicore and faster processors - the problem has only been reported by users with newish 4-core processors.
3. Live chunked HTTP - the problem occurs on both HLS and DASH, so it is not a protocol-specific issue.

I think the issue is timing. In the past, LMS set up socketwrapper, which in turn set up the threads, processes and interconnecting pipes (Linux: 2 processes and 1 pipe; Windows: 2 processes, 3 threads and 3 pipes). With a single core these setup steps happened one after the other. In the meantime LMS had already issued a "normal" HTTP GET, and data was streaming in non-stop over one TCP connection, being put into memory by the Ethernet controller/interrupt handler. So by the time all the processes and threads were ready to go, there was loads of data to send to the waiting transcoding processes. With multicore the setup steps happen in parallel, but data was still arriving in the background, so even though setup was quicker, data was available once all the processes were ready.

Chunked HTTP means each GET returns 6 secs of audio. With HTTP/1.1 a single TCP connection is reused, and each GET's response time is about 100-120ms, so requesting 30 secs of audio takes at least 500-600ms plus additional processing. With a live chunked stream, data is only available in real time. With Listen Again it is possible to fetch minutes of data by sending multiple HTTP GETs (each for 6 secs) in quick succession. With live audio, once you are at the leading edge you can only get 6 secs of audio every 6 secs, so if you start at or near the live edge there is no chance of building a buffer without a long delay at the start.

The "Live delay" solution was essentially fetching the audio from the previous X mins to enable a buffer to be built up. This solution does not seem to work on the multicore processors - why? Is it a different problem?
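To put rough numbers on the chunked-HTTP timing above, here is a minimal back-of-the-envelope sketch in Python (not LMS code - the 6-sec segment length, the 100-120ms GET round trip and the 30-sec target buffer are just the figures quoted in this post):

# Back-of-the-envelope arithmetic for buffering a live chunked stream.
# Assumed figures (from the post above): 6 s of audio per segment,
# ~100-120 ms per HTTP/1.1 GET on a reused TCP connection.

CHUNK_SECS = 6          # audio per HLS/DASH segment
GET_RTT = 0.12          # seconds per GET (top of the 100-120 ms range)
TARGET_BUFFER = 30      # seconds of audio wanted before playback is safe

chunks = TARGET_BUFFER // CHUNK_SECS   # 5 segments

# Listen Again / "live delay" start: all 5 segments already exist on the
# server, so they can be requested back to back.
backfill = chunks * GET_RTT            # ~0.6 s to buffer 30 s of audio

# Start at the live edge: a new segment only appears every 6 s, so the
# buffer can never fill faster than real time.
live_edge = (chunks - 1) * CHUNK_SECS + GET_RTT   # ~24 s before 30 s is buffered

print("backfill start: %.1f s to buffer %d s" % (backfill, TARGET_BUFFER))
print("live-edge start: ~%.0f s to buffer %d s" % (live_edge, TARGET_BUFFER))

In other words, at the live edge the buffer can only fill at 1x real time, so the only way to start with any headroom is to start some way behind live - which is what "live delay" does.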
