An unconstrained ball joint with what sort of servos actually moving the telescope? The servos can’t be “software-based”.
Reaction control thrusters, clearly…
“3-DOF Outer Rotor Electromagnetic Spherical Actuator”
That is very silly. The exhaust products would pose a contamination hazard for delicate optics. Obviously I would use reactionless thrusters. Now, I just have to figure out how to apply a torque to the entire rest of the universe…
Stranger
Now I’m infinitely thankful that I learned the trade (in the mid-1970s) at a “Technical College” where the emphasis was on application programming and systems analysis. If I’d had to deal with what’s been posted the last couple of days I would have flunked out within a week; as it was, I made it into a 42-year career.
(When I told people what I did, a common response was, “Wow, you must be good at math!” No, I suck at math, though I’m good with actual numbers — math is what the computer is for.)
Yes. Thanks, that was enlightening, though it brought back traumatic memories. I’m too lazy at this point to dig up my Computational Mathematics textbook from 40-plus years ago, and I obviously threw out my lecture notes … 40-plus years ago.
You learn all this stuff in university and the most you do in real life is take percentages for payrolls.
I disagree. Computers are made for executing algorithms. Some of them may involve math. In 57 years of programming I don’t recall many times I did any math that was remotely interesting, especially not arithmetic. The only one I really remember was for my Numerical Analysis class.
If you count graph theory as math, perhaps a bit more.
Computers wrangle all sorts of things.
For most people they perform arithmetic. Anything from spreadsheets to graphics. But they can implement symbolic manipulation systems, and that gets you to actual mathematics. Mathematica, Maple, Macsyma. And lots of other less well known ones.
Parsers, compilers and their ilk are another class of symbolic manipulation systems. They of course have strongly defined mathematical roots.
The interplay between huge tensor-based number crunching and LLMs blurs things somewhat.
I found it very interesting (but obvious in retrospect) that multiplication of two numbers is mathematically equivalent to a convolution of their digit sequences. And that convolution can be implemented in O(n log n) time instead of the naive O(n^2) by using the FFT.
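To make that concrete, here is a toy sketch in Python/NumPy (my own illustration, not code from any real bignum library) that multiplies two integers by FFT-convolving their digit sequences and then propagating carries:

```
import numpy as np

def digits(n, base=10):
    """Return the digits of n, least significant first."""
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(d)
    return out or [0]

def multiply_via_fft(a, b, base=10):
    """Multiply two non-negative integers by convolving their digit
    sequences with the FFT, then propagating carries."""
    da, db = digits(a, base), digits(b, base)
    size = len(da) + len(db) - 1
    n = 1 << max(size - 1, 1).bit_length()   # FFT length: next power of two
    # Pointwise product in the frequency domain == convolution of the digits.
    conv = np.rint(np.fft.irfft(np.fft.rfft(da, n) * np.fft.rfft(db, n), n))
    result, carry = 0, 0
    for i, c in enumerate(int(x) for x in conv[:size]):
        carry, digit = divmod(c + carry, base)
        result += digit * base**i
    return result + carry * base**size

assert multiply_via_fft(123456789, 987654321) == 123456789 * 987654321
```

Real arbitrary-precision libraries use integer-exact transforms (number-theoretic transforms, Schönhage-Strassen and friends) rather than a floating-point FFT, but the underlying idea is the same.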
The IBM 704 was based on an adder and some address registers. The latest NVIDIA chip is an adder supported by cache and threading registers. Both allow programmers to solve problems with algorithms that can be implemented by testing and/or storing the result of an add. It appears the possibilities are endless. Still, the hardware component of a digital computer is just an adding machine. Hence the magazine reference to additions.
Technically, all algorithms implemented in a digital system are ‘math’ in the most general of terms. The actual mathematics are obscured by the compiler or interpreter, which translates and decomposes higher-level logical statements into operating system kernel instructions or hardware-level signals that directly control the processor, memory, and device controllers. The typical programmer, of course, will do little math beyond basic arithmetic and some trivial discrete mathematics unless they are doing something like developing rendering or complex visualization systems, working on scientific and engineering simulation or control of non-linear systems, implementing economic and fiscal models, or doing cryptography, because most of what programmers do is application development, not fundamental computational science.
Multiplication of whole numbers is the most trivial implementation of convolution. If you think of multiplication in terms of set theory that becomes immediately and intuitively obvious. In fact, essentially all operations upon data sets, and the interactions between models, are some form of convolution by definition.
Stranger
Are you saying that at the most basic level decoding and executing instructions is math? That’s not true, even slightly. My PhD is in computer architecture, specifically microprogramming, so I lived at this level, and there is even less math there than in typical assembly language. I think one can make an argument that instruction scheduling hardware does math - not arithmetic, but scheduling and dependency analysis - but the machines I used back then didn’t have it.
I think language development counts as fundamental computer science, and that doesn’t have a lot of math in it either. A lot of the EDA development I did and managed wasn’t applications development - we hoped someone would use it, but no one had done it before, so it was not the nth version of something - and it used math in the sense of graph theory, not arithmetic.
Back in the pre-laser-printer days, math was important. I know someone who had to run his entire dissertation through an IBM Selectric typewriter tied to the computer twice to get all the subscripts. I deliberately avoided all equations in my dissertation that could not be done in one pass. Fortunately, my stuff didn’t use many.
This is relevant, though not directly: a chart of memory prices from 1957 to relatively recently. Alas, the original source at https://jcmit.net/memoryprice.htm is no longer available; the domain expired in July of this year. His last update to his home page is at Dr John C McCallum and suggests that he’s done.
Ah, but this page is a newer version of the memory list, up to 2024:
Very cool. It’s scary (in the sense that it makes me feel old) that the DEC 8K-word x 12-bit memory circa 1973 was in “my” era; that is, I actually worked with that stuff. The machine I used for a number of years was a PDP-12, an unusual machine with a dual instruction set. It was essentially a glorified 12-bit PDP-8 with lots of real-time data acquisition peripherals built in. It could also run the LINC instruction set, LINC (an acronym for Laboratory INstrument Computer) being a minicomputer designed at MIT. But we used the PDP-12 exclusively in PDP-8 mode because of the wealth of software available for that architecture.
In terms of memory, a basic PDP-8 came with 4K 12-bit words of memory. Its memory reference instructions could directly address only 256 words at a time: the 128-word page zero plus the current 128-word page. Addressing other parts of the 4K memory space required indirect addressing, where an instruction with the “indirect” bit set referenced another 12-bit word that contained the address of the target word within the 4K address space. The PDP-8 architecture could actually accommodate a maximum of 32K words of physical memory, accomplished with memory management instructions that let you switch to any desired 4K bank at a time.
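For anyone who never had to do it, here is a rough Python sketch of that addressing scheme, reconstructed from memory (3-bit opcode, indirect bit, current-page bit, 7-bit offset); the autoindex locations and the instruction/data field distinction are omitted, and the names are mine, not DEC’s:

```
FIELD_SIZE = 0o10000     # 4096 twelve-bit words per field, eight fields max (32K)

def effective_address(instruction, pc, field, memory):
    """Return the absolute address (0..32K-1) that a PDP-8 memory-reference
    instruction operates on, in this simplified model."""
    offset = instruction & 0o0177            # 7-bit offset within a page
    use_current_page = instruction & 0o0200  # 0 = page zero, 1 = current page
    indirect = instruction & 0o0400          # one level of indirection

    page_bits = (pc & 0o7600) if use_current_page else 0
    addr = page_bits | offset                # 12-bit address within the field
    if indirect:
        # The addressed word holds a pointer to anywhere in the 4K field.
        addr = memory[field * FIELD_SIZE + addr] & 0o7777
    return field * FIELD_SIZE + addr
```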
As memory prices fell, I still remember the excitement when we got the funding to buy a third-party memory system that expanded the PDP-12 to a full 32K! It was still hand-built core memory, though, and IIRC it cost tens of thousands of dollars. Physically it was a rack-mounted box about the size of a modern compact data center server.
Ah, the good old days! Today kids writing software think nothing of wasting a few hundred megabytes just for convenience!
Indeed. I grew up on IBM mainframes, 24-bit addressing with 4-byte words. We made a LOT of use of that top byte as a flag byte to save that expensive memory!
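Something like this, if memory serves (the masks are the standard 24-bit ones; the rest is just an illustration, not anybody’s real code):

```
ADDR_MASK = 0x00FFFFFF   # low 24 bits: the actual address
FLAG_SHIFT = 24          # the top byte is "free" on a 24-bit machine

def pack(address, flags):
    """Stash a flag byte in the otherwise-unused top byte of an address word."""
    return ((flags & 0xFF) << FLAG_SHIFT) | (address & ADDR_MASK)

def unpack(word):
    """Recover the 24-bit address and the flag byte."""
    return word & ADDR_MASK, (word >> FLAG_SHIFT) & 0xFF

word = pack(0x01A2B4, 0x80)      # e.g., an address plus an "end of list" flag
addr, flags = unpack(word)
assert addr == 0x01A2B4 and flags == 0x80
```

That trick, of course, is famously part of why the later move to 31-bit addressing broke so much old code.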
First machines I worked on were a 360/75 and a /44. I forget what the /44 (the APL machine!) had, but the /75 had 3MB: one of core and two of solid state. I have a couple of core frames around here somewhere, just for fun.
I suspect you’ve had the same problem I have, of referring to a machine or a flash drive in MB and having to correct yourself to GB…