If you want to watch games on your tractor, please use your own iPad
One of my friends worked for a company that develops technology to help farmers manage their crops. The software runs on an iPad mounted on the dashboard of the farm equipment (think tractor or combine). It uses GPS and other sensors to track the tractor’s precise location, determine the location of each plant, and then calculate and deliver the optimum amount of farm stuff to farmify each plant, minimizing costs while maximizing farmness, all while reporting the amount of farmitude back to the operator and providing guidance on the best path to take.
She told me that the software pushes the iPad to its limits, and the slightest hiccup would result in suboptimal farmization because the correct amount of farm stuff was not delivered in time.
(You can tell that I’m an expert on farming.)
The farmers would often complain that when they returned from the field, the system would report a ton of errors. “Why am I paying all this money for your flaky system?”
The software team studied the data coming in from the field and found that the software was failing to meet its real-time targets due to CPU starvation. They added code to identify what was sucking away the CPU time, and they quickly found the culprit.
They told the farmers, “The system would work much better if you stopped using it to watch baseball games.”
Today is Opening Day of Major League Baseball, the top level of professional baseball in the United States and Canada. If you want to watch games on your tractor, please use your own iPad.
Granted, I don’t think this needs true real-time processing, but I would think that if you cannot afford for your process to be scheduled out, you should use dedicated hardware.
I think the R&D, manufacturing, and reliability costs for “Use an iPad” are much better than “Design and manufacture your own hardware.”
Well, maybe they should have chosen a Windows- or Linux-based platform instead? At least there you can actually instruct the OS to run your app at the highest priority.
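For the curious, here is a minimal sketch of what "instruct the OS to run your app at the highest priority" can look like on Linux, using Python's `os.sched_*` wrappers (Linux-only). The priority value of 50 is an arbitrary choice for illustration; switching to a real-time policy normally requires root or the `CAP_SYS_NICE` capability, so the sketch attempts it and reports the outcome rather than assuming it succeeds.

```python
import os

def try_realtime(priority: int = 50) -> bool:
    """Attempt to switch this process to the SCHED_FIFO real-time policy.

    Returns True on success, False if the OS refused (usually EPERM
    because we lack root / CAP_SYS_NICE).
    """
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False

# The valid real-time priority range for SCHED_FIFO (typically 1..99).
lo = os.sched_get_priority_min(os.SCHED_FIFO)
hi = os.sched_get_priority_max(os.SCHED_FIFO)
print(f"SCHED_FIFO priority range: {lo}..{hi}")
print("real-time scheduling granted:", try_realtime())
```

Note that even a real-time scheduling class only protects you from other processes on the same box; it does nothing about a baseball stream saturating the CPU on a device you don't control.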
Wait, did you mean an actual Apple iPad? That’s hilarious. I thought you just meant it in the sense that there was a screen in the dash instead of gauges. Seems jank.
Ahhh, that takes me back. But the farmification software I wrote ran on Windows CE on a Compaq C-series HPC.
Which, uh, had a LOT less grunt than an iPad. I can’t imagine what sort of farmy stuff they’re doing that needs that much processing.
This is just Wirth’s law, which is usually phrased in terms of Moore’s law, but I prefer the following formulation: Software expands to fill the available hardware. If your CPU gets twice as fast, the software written for it will be twice as slow (as compared to software written for the previous CPU that wasn’t twice as fast). You can see this as a good thing or as a bad thing:
Optimistic take: This is good because it means software engineers can devote more resources to bug fixes and new features, and the market values those things more than performance (within reason). Nobody has to bother with exotic data structures or algorithms because simpler and more easily understood code will still be good enough. This is also a function of using higher-level (slower) languages like Python instead of C++.
Pessimistic take: This is bad because, quite often, the new software doesn’t actually have significant new features or bug fixes, or because those things don’t justify the waste of computing resources. In many cases, this has nothing to do with exotic data structures or algorithms, and is instead a function of breaking very basic rules like “don’t block the UI thread” or “don’t assume the network is fast.”
Which of these takes is more accurate will depend on the specific software and hardware involved.