It might be faster, and it will not compete with the rest of your Python code for the single global interpreter lock (GIL).
Important point: you usually do *not* develop your software directly on the embedded platform (i.e. your Pi); you'd normally start on a PC, develop, figure out how fast it runs there, identify the bottlenecks, and then move it to the Raspberry Pi once you're somewhat confident that it will still work on a machine slower than your PC.

However: 100 MS/s is so far beyond what a processor like the Raspberry Pi's can do that you don't even have to try, sorry. Also, the Raspberry Pi has no interface fast enough to even transport 100 MS/s, i.e. there's not even a USRP doing 100 MS/s with an interface that would allow you to connect it to the Raspberry Pi.

You might want to explain what you're trying to build here! Maybe we have some recommendations on how to get you closer to what you need.

Best regards,
Marcus

On 09.07.21 18:33, Huang Wei wrote:
> Thank you for the quick reply, so if I write the block in C++ or C, it may
> work at a higher rate?
>
> Regards,
> Wei
>
> Marcus D. Leech <patchvonbr...@gmail.com> wrote on Friday, July 9, 2021 at 5:29 PM:
>
> On 07/09/2021 12:05 PM, Huang Wei wrote:
>> Sorry, I mean it's the underrun problem
>>
>> Huang Wei <weizar...@gmail.com> wrote on Friday, July 9, 2021 at 5:02 PM:
>>
>> Hello everyone,
>>
>> I am using the embedded Python block in GRC to realize some simple functions.
>> All works fine in the GRC local set-up. However, if I connect a USRP sink to
>> the flowgraph which includes that Python block, and set the sample rate to more
>> than 20 MHz (I need 100 MHz in my case), it will keep outputting "UUUUUUUU....",
>> and then GRC will stop working.
>> I tried the default Python block, where the output is simply multiplied by a
>> factor k, and the same overrun problem happened. GRC works fine with the USRP
>> sink if the Python block is not used.
>>
>> Does anyone know what could be the problem?
>> Thank you!
>>
>> Regards,
>> Wei
>>
> Because there's absolutely no way a Python-based work function is ever
> going to keep up at multi-MSPS rates.