Why Python over any other high level language?

This is what I mean by hating the structure.

A heart on a license plate? Is that a thing in CA???

Rust has i8, i16, i32, i64, i128, u8, u16, u32, u64, and u128. Go has uint8, uint16, uint32, uint64, int8, int16, int32, and int64. They’re certainly still around in languages where it is expected to matter.

Maybe I’m hanging around with a different crowd these days, but it seemed like in 2005 there were a lot of people excited by Python’s not requiring you to specify types (see xkcd: Python), which was a barrier to entry for C/Java learners. But in 2015 there were a lot of people excited by TypeScript/Dart/&c, which added a type system back to dynamic languages for better maintainability. I.e., they were excited to get back what the 2005 peeps were excited to ditch.

Hearts, hands, stars, I think a plus sign but I don’t remember seeing one

On a particular flavor of personalized plate (the “Kids Plate”), you can choose one of four special characters to appear in the number. (Heart, Star, Handprint, or Plus).

DMV link

The linked page doesn’t specifically say, but I believe I read that the special character is ignored for the purpose of uniqueness (meaning the pictured vehicle has a license number of “PTHON” for official purposes).

(Although upon closer inspection, the plate in the photo doesn’t look like a Kids Plate. Hmmmm…)

Correct

Yes, and CA has four possible symbols: heart, hand, star, or plus sign.

(ETA — already answered above)

The horror! The horror! The terrible memories…

I used to joke that C# and Java would steal each other’s cool new features: lambdas, generics, etc.

But I currently code in Kotlin, which has stolen all the cool shit from C#, Ruby, Python, and what it could get from Java (snerk). It is a great modern language, though I’m not so sure about the JVM.

Hard on for you? I thought they’d reject that. Funnily enough, they state they only allow 69 if you drive a 1969 vehicle.

VB.NET or VB6? Some masochists loooove the latter. I’ve done .NET on and off since the beginning and feel like I have to use embarrassed hushed tones, but it’s not that bad or anything.

Ruby is Python for weebs

It’s not that bad; in fact it’s great. The whole .NET system w/ Visual Studio and NuGet is the best implementation of the respective functions there is. It is even cross-platform now, which probably never was an actual complaint but a brand people could rally around to pick nits. In fact, I feel like the ones steering the ship on it are following the wrong path into something more complicated than it needs to be; the weird shit VS is pulling with the Program.cs without namespace or classes is depressing. Who wanted that?

Yeah. VB.NET w/ Visual Studio. And a ton of JavaScript in there too. Still like it better than Python.

The issue is that the execution time may be 100x longer. Multiple times now I’ve found myself rewriting a script (mostly in Perl, not Python, but same diff) simply because it would take hours to execute the script version on the dataset I care about.

And personally, I find that the productivity advantage of scripting languages is only there at a small scale. Past a certain point, the advantages of types and other compile-time constraints, not to mention the superiority of IDEs and debugging tools, tilts the advantage back in favor of compiled languages (C++ and C# being my preferred ones).

Even for small projects, I’ve sometimes found myself writing something in C++ simply because I know I can be algorithmically lazy in C++ and get away with it, whereas in a scripting language I might have to try really hard to come up with a technique that fits in the performance and memory constraints.

Python is slow, but many apps use Python as a wrapper around native code. You end up with a native app that uses scripting as the top layer.

In machine learning and scientific computing, Python is moving from a fancy configuration language to a DSL. The Python code that you write doesn’t do the work; it configures a pipeline that executes on the GPU to do the work. Recent versions of TensorFlow inspect the Python source itself to automatically convert it to pipeline code.

This type of code is well suited for Python because the line between compile time and run time is fuzzier. If your program is going to run the same loop 100,000 times, running the first loop is usually enough to catch the bugs.

When you run code in SciPy (or Matlab or whatever; there is also Julia which is supposed to have certain advantages in terms of compilation), the script may run 100x more slowly than native code, but the actual library (e.g., LAPACK) will be written in Fortran (or C++ or whatever), so the solution has always been to let the optimized library do the work.

I’m aware of that. Python is excellent as glue code. It’s possible that the projects I’ve done could be “rephrased” in Numpy or some such, using optimized C/C++ or even GPU code. Still, if I’m just trying to bang something out quickly, I can usually write it directly in code more quickly than I can figure out how to translate it to a matrix multiply or something, or search the documentation for the function that does what I need.

I wasn’t aware that TensorFlow could now compile Python directly into GPU code. I’m curious how well it works, though. GPU code in particular is very sensitive to things like load/store coherency, and serious work requires detailed knowledge of things like shared memory, atomics, etc. But maybe the automatic stuff is good enough for a usable speedup over CPU code.

Yes: writing “base” Python-style code can be very slow. But if you use the power of Numpy, you can run things much faster, and this is without even thinking about anything as heavyweight as GPU use. Generally in Numpy, Matlab, R, and probably Julia, you can write most code without ever personally writing loops; the exceptions are cases like not knowing in advance how many iterations you’ll need.
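
As a concrete sketch of “never personally writing loops” (my own toy example, not from the thread), here is a Monte Carlo estimate of pi where all the samples are drawn at once and reduced with vectorized operations:

```python
import numpy as np

# Estimate pi by Monte Carlo with no Python-level loop:
# draw all the samples at once, then reduce elementwise.
rng = np.random.default_rng(0)
n = 100_000
x = rng.random(n)
y = rng.random(n)
inside = (x * x + y * y) < 1.0  # boolean array, computed in compiled code
pi_est = 4.0 * inside.mean()
```

The only “loop” here is implicit in the array operations, which is exactly where Numpy does its work in C.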

I just ran two quick Monte Carlo–type simulations, once in base Python code and once in Numpy-style code: create a 1000 x 1000 array of random numbers, and check the timing over multiple attempts using %timeit.

1000 repetitions, Python code:
413 ms ± 3.47 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1000 repetitions, Numpy code:
3.9 ms ± 47.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

10,000 repetitions, Python code:
42 s ± 87 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
10,000 repetitions, Numpy code:
379 ms ± 1.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

So the code runs about 100x faster if you do some very basic optimization.
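
The original benchmark code isn’t shown above, but a plausible minimal version of the comparison (using the standard-library timeit instead of IPython’s %timeit, and a smaller array so it runs quickly) might look like:

```python
import random
import timeit

import numpy as np

N = 300  # smaller than the 1000 x 1000 in the post, just to keep this fast

def python_version():
    # pure Python: one interpreter-level call per element
    return [[random.random() for _ in range(N)] for _ in range(N)]

def numpy_version():
    # Numpy: a single vectorized call; the loop runs in compiled code
    return np.random.rand(N, N)

t_py = timeit.timeit(python_version, number=3)
t_np = timeit.timeit(numpy_version, number=3)
# t_np is typically one to two orders of magnitude smaller than t_py
```

Exact ratios depend on the machine and array size, but the vectorized version should win by a wide margin, consistent with the ~100x figure above.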

I’ve never worked on production Python that wasn’t compiled into .pyc for actual use.
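
For what it’s worth, CPython byte-compiles modules to .pyc automatically on first import; the cached .pyc skips the parse/compile step on later runs but doesn’t make the code itself execute faster. A minimal sketch of doing the compilation explicitly (hypothetical throwaway module, written to a temp directory):

```python
import os
import py_compile
import tempfile

# Write a tiny module, then byte-compile it explicitly.
src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as f:
    f.write("GREETING = 'hi'\n")

pyc_path = py_compile.compile(src)  # returns the path of the .pyc it wrote
```

In normal deployments you rarely need this by hand; `python -m compileall` does the same thing for a whole tree.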