@freemo Maybe they fix it in the next version

I don't know Python that much, but I know that when we talk about languages, it takes time to improve

@nate_river I don't think they have any intent to. They missed their chance in the move from Python 2 to Python 3, which broke backwards compatibility, and hopefully they have no intention of doing that a second time, so they're kind of damned if they do, damned if they don't at this point.

With that said, it was very doable; they just chose not to do it. Even before they wrote Python 3 there were unofficial Python interpreters that had already fixed this problem and were drop-in compatible with the official Python. The truth is they were just too lazy to do it.

@freemo Are these things you mentioned really things they need to fix?

Languages are created for different purposes.

That's why we use C in avionics and we don't use PHP there, for example.

I don't know Python well enough to comment, but maybe the behavior you're complaining about is intentional. I know that Python is used in machine learning, so maybe this multiprocessing plays a key role in it.


@nate_river I guess if you have no wish for people to develop CPU-limited apps in your language, then no. As you suggest, and as @pganssle suggests, you can (and perhaps should) write such things in C and just wrap them in Python. That is a matter of opinion, I suppose.

In my eyes the limitation here is not one that in any way improves Python. It's not a case where they made Python better in some ways by having the GIL and took that negative impact as an accepted consequence, so in that regard I do not find it excusable. It could and should have been fixed with Python 3, and if it had been, the language would have been better (easier to write things that use multiple CPUs) without any consequences. So to me, and in my opinion, yes, this is a black mark on the language. Fixing it would not have detracted from the intended purpose of Python or made it any less suited to what it's good at; it just would have made it suited to a greater range of problems.

@freemo @nate_river By the way I did not necessarily recommend C. I've given several talks about writing Python extensions in Rust that are relevant here: ganssle.io/talks/#python-backe

For your cases, I would guess Cython would be sufficient, and Cython is very nearly as easy to read and write as Python is.

One of Python's great strengths is that it is super easy to wrap bindings to other languages, so your API can be written in Python (easy to read and write) and your backend can be written in C or Rust or some other more complicated language.
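As a small illustration of that wrapping pattern, the standard library's ctypes module can load a compiled library and expose one of its functions behind a plain Python API. This is a minimal sketch: here the C math library's sqrt() stands in for a hypothetical compiled backend, and the wrapper name fast_sqrt is illustrative, not anyone's actual code.

```python
# Sketch: wrap a compiled C routine so callers only ever see plain Python.
# libm's sqrt() is a stand-in for your own compiled backend.
import ctypes
import ctypes.util

# Locate and load the C math library (name resolution is platform-specific;
# on Linux this finds something like libm.so.6).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

def fast_sqrt(x: float) -> float:
    """Python-facing wrapper: callers never see the ctypes plumbing."""
    return libm.sqrt(x)

print(fast_sqrt(9.0))  # → 3.0
```

Cython, cffi, or PyO3 (for Rust) give you the same shape with less manual signature bookkeeping; the point is just that the seam between the languages can be a single ordinary Python function.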

I find there's often some 80/20 rule where 80% of the work is done by 20% of the code. Speeding up that 20% by writing it in Cython is trivial compared to writing the other 80% of the code in C or, to a lesser extent, Rust.

@pganssle @freemo I'm not used to using two languages in the same project.

But maybe it is a good idea, depending on the project.

@nate_river @freemo It's fairly common with Python and I think other interpreted languages.

Cython in particular is barely a different language.

Consider this fairly-optimized Cython compared to Python: gist.github.com/pganssle/d0dab

The Cython code compiles to C++ and the resulting Python function is something like 10x faster. Hand-crafted Rust or C can be much faster, but isn't always quite as straightforward to embed.

@pganssle @freemo oh wait... I do use more than one in a project, but not for the same purpose

I was thinking about it in a different way lmao

@pganssle

Actually, I came up with a rather simple solution early on that does work... It costs me a little in performance, but nothing too major, and it keeps the code elegance I would like. I'm mostly just grumpy that it was needed and that I had to take that performance hit, minor as it is.

My solution was to just write an array-caching class that wraps any array-like object and caches the results, so calling the same index a second time doesn't hit the underlying object. I needed some sense of caching anyway, even without multiprocessing, because the whole architecture is basically a stack of wrapped array-like objects, each level doing additional operations. For example, one layer, when wrapped around an array, provides a "view" of the array as its moving average; without itself containing any internal variables, it calculates on the fly and passes the result through. So even without multiprocessing there was a need for a type of wrapper that caches at specific layers, because the layer above might re-read several values many times, and I don't want to recalculate every time.
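The layered design described above might look something like the following sketch. The class names (CachedArray, MovingAverageView) and details are illustrative assumptions, not the actual code from the project.

```python
# Sketch of the described architecture: a stack of array-like wrappers,
# with a caching layer that memoizes per-index reads.
class CachedArray:
    """Wraps any array-like object; repeated reads of the same index
    are served from the cache instead of hitting the layer below."""

    def __init__(self, source):
        self._source = source   # any object with __getitem__/__len__
        self._cache = {}        # index -> cached value

    def __len__(self):
        return len(self._source)

    def __getitem__(self, i):
        if i not in self._cache:            # first read: hit the layer below
            self._cache[i] = self._source[i]
        return self._cache[i]               # repeats: served from the cache


class MovingAverageView:
    """A stateless 'view' layer: presents its source as a trailing
    moving average, computed on the fly with no internal storage."""

    def __init__(self, source, window):
        self._source = source
        self._window = window

    def __len__(self):
        return len(self._source)

    def __getitem__(self, i):
        lo = max(0, i - self._window + 1)
        vals = [self._source[j] for j in range(lo, i + 1)]
        return sum(vals) / len(vals)


data = [1.0, 2.0, 3.0, 4.0]
view = CachedArray(MovingAverageView(data, window=2))
view[3]  # computed once: (3.0 + 4.0) / 2 = 3.5
view[3]  # second read comes straight from the cache
```

Because every layer exposes the same `__getitem__`/`__len__` interface, caching can be slotted in at whichever levels get re-read heavily, exactly as the post describes.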

All I did was make it so these caches have a drop-in equivalent that uses shared-memory arrays rather than plain arrays when in multiprocessing mode, and now the whole object can pass across process boundaries without needing any special nonsense. It works well enough for now.
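A drop-in shared-memory variant of that cache could be sketched with the standard library's multiprocessing.shared_memory, storing each slot as a validity byte plus a float64. SharedCachedArray and its layout are assumptions for illustration, not the original implementation; a fuller version would reattach to the block by name inside worker processes.

```python
# Sketch: same caching interface as before, but cached values live in a
# shared-memory block so they are visible across process boundaries.
import struct
from multiprocessing import shared_memory

class SharedCachedArray:
    """Drop-in sibling of the plain cache: one (valid, value) slot
    per index, packed into a SharedMemory buffer."""

    _SLOT = struct.Struct("=Bd")  # 1 validity byte + 1 float64, no padding

    def __init__(self, source):
        self._source = source
        n = len(source)
        nbytes = n * self._SLOT.size
        self._shm = shared_memory.SharedMemory(create=True, size=nbytes)
        # Mark every slot empty (the OS may hand back a larger buffer,
        # so only zero the prefix we actually use).
        self._shm.buf[:nbytes] = bytes(nbytes)

    def __len__(self):
        return len(self._source)

    def __getitem__(self, i):
        off = i * self._SLOT.size
        valid, value = self._SLOT.unpack_from(self._shm.buf, off)
        if not valid:                        # first read: fill the shared slot
            value = float(self._source[i])
            self._SLOT.pack_into(self._shm.buf, off, 1, value)
        return value

    def close(self):
        self._shm.close()
        self._shm.unlink()


arr = SharedCachedArray([10, 20, 30])
arr[1]   # 20.0, now cached in shared memory
arr.close()
```

The win is exactly the one the post claims: the object's public interface doesn't change at all; only the backing storage for the cache is swapped.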

@nate_river
