The following test program:

import time
import marshal   # not used in this test
try:
    # Python 2: use the faster cPickle, keys/values as unicode
    import cPickle as pickle
    str_ = unicode
except ImportError:
    # Python 3: plain pickle, str is already unicode
    import pickle
    str_ = str
def TestPrc(rd):
    # Case 1 child: recv() unpickles the dict itself.
    d = rd.recv()
    print("OK")

def TestPrc2(rd):
    # Case 2 child: recv() returns a bytes object that is unpickled by hand.
    d = pickle.loads(rd.recv())
    print("OK")

if __name__ == "__main__":
    from multiprocessing import freeze_support, Process, Pipe
    freeze_support()

    # Test payload: a dict with one million unicode key/value pairs.
    d = {str_(x): u"Hello World" + str_(x + 1) for x in range(1000000)}
    wr, rd = Pipe()

    # Case 1: let Connection.send()/recv() do the pickling.
    p = Process(target=TestPrc, args=(rd,))
    p.start()
    t1 = time.time()
    wr.send(d)
    print("#1", time.time() - t1)
    p.join()
    print("#2", time.time() - t1)

    # Case 2: pickle/unpickle manually, send the resulting bytes.
    p = Process(target=TestPrc2, args=(rd,))
    p.start()
    t1 = time.time()
    wr.send(pickle.dumps(d, pickle.HIGHEST_PROTOCOL))
    print("#3", time.time() - t1)
    p.join()
    print("#4", time.time() - t1)

I get the following results:

Python 2.7:
('#1', 0.33500003814697266)
OK
('#2', 0.7890000343322754)
('#3', 0.36300015449523926)
OK
('#4', 0.8059999942779541)

Python 3.4:
#1 0.7770781517028809
OK
#2 1.4451451301574707
#3 0.7410738468170166
OK
#4 1.3691368103027344

Python 3.6:
#1 0.681999921798706
OK
#2 1.1500000953674316
#3 0.6549999713897705
OK
#4 1.1089999675750732

The results show that in Python 3 it is faster to do the (un-)pickling in Python code. I would have expected no real performance difference, or at least that the more straightforward way of sending Python objects directly would be a bit faster, as it is in Python 2.

Some other interesting results from this example:

- Python 2 is much faster
- At least Python 3.6 is much faster than Python 3.4

Regards,
Martin
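P.S. One variation I have not timed yet, so this is only a sketch and not a measurement: in case 2 the pre-pickled bytes object is itself pickled once more by send(). Connection objects also offer send_bytes()/recv_bytes(), which transmit the buffer as-is, so that second pickling pass could be skipped. The name TestPrc3 and the "#5"/"#6" labels are just mine, and this variant drops the Python 2 compatibility shim for brevity:

import pickle
import time
from multiprocessing import Pipe, Process, freeze_support

def TestPrc3(rd):
    # recv_bytes() hands back the raw buffer; unpickle it manually.
    d = pickle.loads(rd.recv_bytes())
    print("OK")

if __name__ == "__main__":
    freeze_support()
    d = {str(x): u"Hello World" + str(x + 1) for x in range(1000000)}
    wr, rd = Pipe()
    p = Process(target=TestPrc3, args=(rd,))
    p.start()
    t1 = time.time()
    # send_bytes() ships the already pickled buffer without pickling it again.
    wr.send_bytes(pickle.dumps(d, pickle.HIGHEST_PROTOCOL))
    print("#5", time.time() - t1)
    p.join()
    print("#6", time.time() - t1)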