summary:
Google discourages Python for new projects?
[they encourage go.lang;
Python is said to need more resources?]
I'm sure the problem is backwards compatibility .
. in the paper where they talked about Unladen Swallow
they mentioned wanting to remove the GIL
(the global interpreter lock for thread safety)
and this really surprised me coming from google staff,
because Python's project lead (another google staffer)
has argued convincingly
that, due to the language's implementation [11.17:
ie, CPython being designed for interfacing with C
(Jython and IronPython have no GIL)],
removing the GIL from CPython was impractical .
[11.17: at PyCon 2012, he adds:
Threading is for parallel IO,
Multiprocessing is for parallel computation.
The GIL does not hinder any of that.
... just because process creation in Windows
used to be slow as a dog, ...]
. Python has that old-style OOP,
which doesn't do much for encapsulation,
and therefore doesn't do much for thread safety
except to single-thread the interpreter .
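. a minimal sketch of that division of labor,
using the stdlib threading and multiprocessing modules
(Python 3; the url and workloads are illustrative):

import math
import multiprocessing
import threading
import urllib.request

def fetch(url):
    # IO-bound: CPython releases the GIL while blocked on the network,
    # so plain threads overlap these downloads just fine
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def crunch(n):
    # CPU-bound: under the GIL only one thread runs bytecode at a time,
    # so real parallelism needs separate processes
    return sum(math.sqrt(i) for i in range(n))

if __name__ == '__main__':
    # threading for parallel IO
    threads = [threading.Thread(target=fetch, args=('http://example.com',))
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    # multiprocessing for parallel computation
    with multiprocessing.Pool(processes=4) as pool:
        print(pool.map(crunch, [10**6] * 4))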
. if you want something much like python
that also has good concurrency,
then google has given it to us in go.lang;
but perhaps Python's situation is like
Intel's CISC architecture:
CISC was fundamentally slower than RISC,
but Intel virtualized it:
x86 machine instructions are decoded internally
into RISC-like micro-ops .
. that's what big money could do for Python:
automatically find the inherent concurrency
and translate it to a threaded virtual machine . [11.17:
... but still interface flawlessly with
all C code on all platforms?
I'm no longer confident about that .]
[11.17: web:
. to overcome the GIL limitation,
the Parallel Python (pp) SMP module
runs python code in parallel
on both multicores and multiprocessors .
Doug Hellmann reviews it in 2007:
. you need to install your code only once:
the code and data are both auto'distributed
from the central server to the worker nodes .
Jobs are started asynchronously,
and run in parallel on an available node.
The callable object that is
returned when the job is submitted
blocks until the response is ready,
so response sets can be computed
asynchronously, then merged synchronously.
Load distribution is transparent,
making it excellent for clustered environments.
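. a sketch of that submission pattern,
assuming the classic pp API
(pip's pp package; the ppft fork for Python 3);
the workload here is illustrative:

import pp

def partial_sum(lo, hi):
    return sum(x * x for x in range(lo, hi))

# autodetects local cores; listing remote ppserver nodes in
# ppservers would distribute jobs across a cluster as well
job_server = pp.Server(ppservers=())

# jobs start asynchronously on available nodes
jobs = [job_server.submit(partial_sum, (i * 10**6, (i + 1) * 10**6))
        for i in range(4)]

# each returned object is callable and blocks until its result
# is ready, so async responses get merged synchronously here
total = sum(job() for job in jobs)
print(total)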
Whereas Parallel Python is designed around
a “push” style distribution model,
the Processing package is set up to
create producer/consumer-style systems
where worker processes pull jobs from a queue .
Since the Processing package is almost a
drop-in replacement for the
standard library’s threading module,
many of your existing multi-threaded applications
can be converted to use processes
simply by changing a few import statements .
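. the Processing package survives today as
the stdlib's multiprocessing module;
a minimal sketch of the import-swap conversion it describes
(the worker logic is illustrative):

# the thread version was:
#   from threading import Thread as Worker
#   from queue import Queue
from multiprocessing import Process as Worker, Queue

def consume(q):
    # workers pull jobs from the shared queue (producer/consumer style)
    while True:
        job = q.get()
        if job is None:  # sentinel: no more work
            break
        print('processed', job * job)

if __name__ == '__main__':
    q = Queue()
    workers = [Worker(target=consume, args=(q,)) for _ in range(2)]
    for w in workers: w.start()
    for job in range(10): q.put(job)
    for _ in workers: q.put(None)
    for w in workers: w.join()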
see wiki.python.org/moin/Concurrency/
for the latest on Pythonic concurrency .]
8.22: news:
2009 Nov 6 18:49, ... Nov 9, Collin Winter @google.com:
Well, simple common sense is going to
limit Python's applicability when operating at Google's scale:
it's not as fast as Java or C++, threading sucks,
memory usage is higher, etc.
One of the design constraints we face when
designing any new system
is, "what happens when the load goes up by 10x or 100x?
What happens if the whole planet
thinks your new service is awesome?"
Any technology that makes satisfying that constraint harder
-- and I think Python falls into this category --
*should* be discouraged if it doesn't have a
very strong case made in its favor on other merits.
You have to balance Python's strengths with its weaknesses:
your engineers may be more productive using Python,
but if they have to work around more
platform-level performance/scaling limitations
as volume increases, do you come out ahead? etc.
[. so why use Python in the first place? :]
In a language like C or C++,
the more developers you add,
the more fragile your binary becomes:
it only takes one segfault to kill a running binary
(and hence lose those pending requests),
and the probability of introducing that segfault
goes up with the number of
developers/subsystems/integration points/etc.
Dynamic languages, on the other hand,
are much easier to sandbox in this regard.
If you want to isolate failures in one particular
component in your Python system,
you can just throw a try/except around it
and you're basically good to go.
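[. a sketch of that fault-isolation pattern
(the component and fallback here are hypothetical):

import logging

def recommend_widgets(request):
    # hypothetical subsystem: a bug here raises an exception
    # instead of segfaulting the whole binary
    return 'widgets for %s' % request

def fallback_page():
    return 'recommendations unavailable'

def handle_request(request):
    # one try/except at the component boundary loses at most
    # this request, not the running server
    try:
        return recommend_widgets(request)
    except Exception:
        logging.exception('recommendation component failed')
        return fallback_page()

print(handle_request('user42'))
.]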
Unladen Swallow aims to shift the
balancing points in that tradeoff
to make it possible to use Python in more places
where it would currently be unsuitable,
but it's not going to be a panacea.
Python will still be slower than C and Java,
still use more memory and have inferior threading
until someone decides to invest resources into Python
comparable to what, say, Sun has invested in their JVM.
I hope a focus on Python performance by the developers
will start a snowballing effect:
more companies are interested,
more resources can be devoted,
more grad students will work on Python
(and actually commit their work), etc.
We've come up with some optimizations already
that would simply be too difficult to implement in CPython,
and so we had to discard them.
Being a volunteer-run open-source project,
CPython requires somewhat different priorities than V8:
CPython places a heavy emphasis on simplicity,
the idea being that a simple, slower core
will be easier for people to maintain in their free time
than a more complicated, faster core.
I have high hopes for one of the other
Python implementations
to provide a longer-term performance solution
designed without the shackles
of C-level backwards compatibility.
We're thinking
"what can we accomplish in the short term
to make things better *now*?",
while a project like PyPy
is approaching performance from the perspective of
"what can we accomplish if we spend a decade
really getting it right?"