My Mom was a FORTRAN programmer for a defense contractor from the late 50s to the mid 90s. She spent her career doing things like that F16 simulator. The last thing I remember her talking about was one of the B1 or B2 bombers.
When I was elementary school age in the late 70s, I'd go into her office with her sometimes on school holidays that were not work holidays. She'd let me play Colossal Cave Adventure on a terminal in her office. I remember looking at printouts of code she was working on, and seeing a wireframe diagram of some futuristic plane. I think it must have been one of the first Stealth fighters. I wonder how many security protocols she broke by having me there... :)
She passed away many years ago, but I think she would have been excited to see work like this.
FWIW she probably wasn’t breaking any security protocols - basically as long as the kids can’t read well they are fine to be escorted without the red light.
I used to take my daughter into the SCIF when she was about 2, when needed, and there was a week where I took my son into work at the Pentagon.
I was working for Snake Clark at the time, and people kept bringing him candies and treats. I think he was watching cartoons on Nick in the deputy’s office most of the day. He gets a kick out of that story now.
I do a double-take if I see a dog! I've seen babies, but I'm pretty sure anyone that can walk or talk would be a no-go. My assumption is OP wasn't in a facility like that.
Correct... It was an office in a suburban office park. I remember there were some areas that I was not allowed in, but I don't know if that was for security reasons or just due to general non-kid-safe areas. I don't remember guards, but it was almost 50 years ago and I really can't recall.
I interviewed at LANL early in my career, and I was struck by how much security there was. My mom's office was nowhere close to that level of security.
Ultimately it’s up to the SSO/SSM/facility manager to make the risk call, and that’s largely based on commander guidance and DSCID inspections from the HQ SSO.
Plenty of shenanigans in JSOC scifs that wouldn’t fly on NSA campus.
When I was growing up computers were getting faster by leaps and bounds, and I remember a video game that was so 'sophisticated' that it needed a math (FP) co-processor (80387) to play some parts of it:
> The program requires a minimum of a 12MHz 80286, 1MB RAM, DOS 5.0 or DR DOS, one 1.2MB 5 1⁄4" or 1.44MB 3 1⁄2" disk drive, a hard drive with 11MB of free disk space, and VGA graphics. In addition, Falcon 3.0 supports a joystick, a joystick with a throttle, dual joysticks, rudder pedals or the ThrustMaster controls. The game also supports a mouse and various sound cards, including the Ad Lib, Sound Blaster and Roland. The 80x87 math coprocessor is supported for the High Fidelity flight model.
> Optimal system requirements are a 20MHz 80386 system or faster, 80x87 math coprocessor, 4MB RAM with EMS (expanded memory), DOS 5.0 or DR DOS, one 1.2MB 5 1⁄4" or 1.44MB 3 1⁄2" disk drive, hard drive with 11MB of free space, a 16-bit VGA card, a mouse and a joystick.
I have fond memories of playing the original Falcon on my Amiga 500. It felt like magic after years of playing F15 Strike Eagle on the Apple IIc. Hearing real sound effects kicking in the afterburner and getting too close to the ground ("Pull Up! Pull Up!") were all so satisfying.
I remember being so excited when Falcon 3.0 came out. But it just felt like a let down. The graphics were amazing for the time and it seemed so realistic, but for me the realism is what killed all the fun. As a kid, I didn't actually want to BE an expert F16 pilot. I just wanted to feel like I was. I didn't want to have to learn all the systems and controls.
I think that magic is now gone. Back then, playing games was a bit like reading a book: we had to use our imagination to compensate for the lousy graphics, especially if we could compare them to what the arcades offered.
Then suddenly having computers at home with similar arcade like graphics felt like the future.
Now we get real time rendering without useful gameplay, in many AAA games.
I felt a similar way when I moved from Chuck Yeager's Air Combat to (I think) F15 Strike Eagle 3. Yeager was the perfect mix for me. I remember rarely even seeing the enemy planes I was firing at in newer games.
Having been the kid who loved to geek out over flight sims, and then the adult who was fortunate enough to have flown military jets, I find the trend for uber-realistic military sims like DCS and such kind of sad in a way.
I mean, the software itself is impressive. But the idea of grown adults geeking out over old versions of the NATOPS and trying to develop tactics and such is frankly cringe. You're never going to get it right, because the actual thing is classified. And from the outside looking in, it's like watching a kid put on Dad or Mom's suit jacket to play "office."
>I find the trend for uber-realistic military sims like DCS and such kind of sad in a way.
But things like Ace Combat, "H.A.W.X", War Thunder, Project Wingman, Nuclear Option, all those games are incredibly popular. The arcade combat genre is alive and well.
I do wish games like VTOL VR, DCS, and other more serious sims had a "maybe don't make me read 700 pages of manual to lock and fire a missile" option. Even something as simple as VTOL VR telling me exactly what machine my targeting pod is locked onto would be nice. Just give me a name and basic specs. It's a damn game, I shouldn't need to memorize target silhouettes unless I want to.
IL-2 Sturmovik: Great Battles actually does this well, with an entire page of "simplify things please" options.
Long-time Navy jet jock finds it "cringe" when people try to get a little break from the stresses of their life by attempting in a very small way to emulate what he achieved.
I get your point but come on man, ease up. At least remember that some of those DCS-playing wage slaves helped fund your adventures.
Falcon 4.0 was one of the worst bugfests I ever encountered, practically unplayable; the modding community did a tremendous job to fix and improve that mess. At the time of its release it was even worse than Privateer 2: The Darkening, a title whose structural imperfections were only matched by its utterly bizarre Britkraft sci-fi setting.
I remember the opposite also. We used to play a lot of Robotron on my roommate's 8086 PC clone. Then he was so excited to upgrade to a '286, and Robotron became unplayable. My guess is that it was coded with a lot of busy loops, because on the '286 it was so insanely fast that a human simply didn't have time to respond.
I guess they probably came up with a new version, dunno.
This is actually the reason we had "turbo" buttons on PCs back then. It wasn't to overclock. Instead, it was to underclock to some backward-compatible CPU speed that would allow legacy software like that to hopefully work OK.
However, I'm not sure how many different compatibility models were being targeted.
Granted, I probably would've had an easier border crossing if I hadn't had a flight map for Kuwaiti airspace, complete with markings for anti-air defenses, in the backseat of my car that one time, but it was _still_ worth it :)
Awesome read, except I have one nit. Instead of writing translation functions to convert everything from US Customary units to Unity, metric, and the rest of sanity, you should have converted everything to metric in the formulas. Rewritten the math. It's not that hard, it saves you all those function calls, improves performance, and simplifies the math a whole bunch.
Air density can become pliable, controllable. Thrust velocities match projectile spec velocities (m/s) making calculating intersect a breeze. Trust me, you want to convert the FORTRAN math into standard metric. They didn't believe in Base10 back then so do the math.
I wanted to keep the data tables exactly the same in the C# translation. While formulas can be converted to metric easily, the data tables do not actually have units defined for any of them, so translating them to metric is non-trivial. And the way data is mixed from multiple tables is very complex. So verifying that the calculations have equivalent results in metric is well beyond my ability.
Or rather, the best way to verify it would be to write a flight model that uses customary units (what I describe in the article), and then use that to verify that the metric flight model is equivalent.
At the end of the day, only the inputs and outputs of the flight model need to be converted, which is just a handful of multiplication operations. The flight model is very cheap to run, even with conversions at runtime.
This is exactly what unit tests are for. You did the first (hard) part: dissecting the Fortran ark for goodies. You built a Unity equivalent (slightly less hard than deciphering ancient text but still high on difficulty). Now, write tests that test those inputs and outputs (translation calls included). Great, now that all the tests pass, change to metric until the tests pass again.
I know it sounds daunting. Making a painting does too to someone who doesn’t paint. However, there are steps you can follow that make it super easy to create masterpieces and all the greats follow the same process.
So where you translate ft to m or slug/ft to slug/m or however - the surrounding math is perfect for a unit test. Keeping you from having to build another flight model and do analog mark-2 eyeball testing.
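For example (a minimal Python sketch of the idea; the function names are mine, and the real project is C#/Unity), the boundary conversion factors are exact by definition, so the tests can be tight:

```python
# Exact-by-definition conversion factors
FT_TO_M = 0.3048          # international foot in metres
KT_TO_MPS = 1852 / 3600   # knot (one nautical mile per hour) in m/s

def ft_to_m(ft):
    return ft * FT_TO_M

def kt_to_mps(kt):
    return kt * KT_TO_MPS

# Known-value tests for the boundary conversions
assert abs(ft_to_m(35000.0) - 10668.0) < 1e-6   # 35,000 ft is exactly 10,668 m
assert abs(kt_to_mps(1.0) - 0.51444444) < 1e-6  # 1 kt is about 0.5144 m/s
print("conversion tests pass")
```

Once those hold, the same harness can compare the customary-unit flight model against a metric rewrite input-for-input.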
You could just leave it - someone will pick it up and do the work.
*EDIT*
I ran some of it through my models and converted AirDataComputer to Kelvin, kg/m³, and m/s:
public class AirDataComputer {
    /// <summary>
    /// Density at sea level in kg/m³ (standard atmosphere)
    /// </summary>
    public const float SeaLevelDensity = 1.225f;

    /// <summary>
    /// Max altitude in meters
    /// </summary>
    public const float MaxAltitude = 10668.0f; // ~35,000 ft in meters

    /// <summary>
    /// Calculates air data based on velocity and altitude using SI units
    /// </summary>
    /// <param name="velocity">Velocity in m/s</param>
    /// <param name="altitude">Altitude in meters</param>
    /// <returns>Air data</returns>
    public AirData CalculateAirData(float velocity, float altitude) {
        const float baseTemperature = 288.15f;     // Sea level temp in K (~15°C)
        const float minTemperature = 216.65f;      // Tropopause temp in K (~ -56.5°C)
        const float temperatureGradient = 0.0065f; // Lapse rate in K/m
        const float gamma = 1.4f;                  // Ratio of specific heats
        const float gasConstant = 287.05f;         // J/(kg·K), specific gas constant for air
        const float densityPower = 4.14f;

        altitude = Mathf.Clamp(altitude, 0f, MaxAltitude);

        // Calculate temperature in Kelvin using linear lapse rate model
        float temperatureFactor = 1.0f - (temperatureGradient * altitude / baseTemperature);
        float T = Mathf.Max(minTemperature, baseTemperature * temperatureFactor);

        // Speed of sound in m/s
        float speedOfSound = Mathf.Sqrt(gamma * gasConstant * T);
        float altitudeMach = velocity / speedOfSound;

        // Air density using barometric approximation
        float rho = SeaLevelDensity * Mathf.Pow(temperatureFactor, densityPower);

        // Dynamic pressure in Pascals (N/m²)
        float qBar = 0.5f * rho * velocity * velocity;

        return new AirData() {
            altitudeMach = altitudeMach,
            qBar = qBar
        };
    }
}

public struct AirData {
    public float altitudeMach; // Mach number at current velocity and altitude
    public float qBar;         // Dynamic pressure in Pa
}
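For what it's worth, the ISA constants in that snippet are easy to sanity-check independently; the sea-level speed of sound should come out near 340.3 m/s (quick check in Python):

```python
import math

# Same ISA sea-level constants as the C# snippet above
gamma = 1.4   # ratio of specific heats for air
R = 287.05    # specific gas constant for air, J/(kg*K)
T0 = 288.15   # sea-level temperature, K

a0 = math.sqrt(gamma * R * T0)  # speed of sound at sea level
print(round(a0, 1))  # 340.3
```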
A nautical mile is ~6,076 feet or exactly 1,852 meters (???).
That is actually defined by distances on Earth (which is of course an approximation, but still...). So, 1 nautical mile equals to one minute in the 90 degrees hemisphere arc. It's approximately 10,000 km from the equator to the pole, so 10,000 km / 90 / 60 = 1.852 km.
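That back-of-the-envelope arithmetic is easy to check (using the original "10,000 km from equator to pole" definition of the metre):

```python
pole_to_equator_m = 10_000_000  # original metre definition: 10,000 km, equator to pole
arc_minutes = 90 * 60           # minutes of arc in a quarter circle

nm = pole_to_equator_m / arc_minutes
print(round(nm, 2))  # 1851.85 -- close to, but not exactly, the modern 1852 m
```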
> 1 nautical mile equals to one minute in the 90 degrees hemisphere arc
The nautical mile is not an SI unit, so it is not defined by a single organization. Your definition used to be the common one, but it seems the relevant organizations have since updated the definition to be exactly 1852 m. If the original definition of the metre applied, it would have been 1851.85 m, i.e. 15 cm shorter, but with newer measurements of the Earth, it would be more like 1855 m.
> The nautical mile is not an SI unit, so it is not defined by a single organization
"In 1929 the International Hydrographic Bureau obtained an agreement from a large number of countries to adopt a value of 1852 metres for the nautical mile, the unit thus defined to be called the International Nautical Mile."
But there was no treaty or anything with a fancy ceremony, just a 'handshake', and so it was up to each country to adopt it with a domestic law or regulation, which (e.g.) the US did in 1954:
Previously in the US it was 1853.25 m (because the US is actually metric "officially": all of its customary units (ft, oz) are defined in terms of metric equivalents).
> Note that Fortran supports arrays with an arbitrary starting index, in this case -2. So this table supports indices in the range [-2, 9].
That is such a useful feature! Surprised I haven’t seen that more often. So much fiddly code exists that’s just fixing offsets to conform to 0 (or 1 for Lua) -based indexing!
.NET supports this because [Visual] Basic supports it. This can be used from C# - and other languages - but there is no nice syntax supporting it.
// This also supports multidimensional arrays, that is why the parameters are arrays.
var array = Array.CreateInstance(elementType: typeof(Int32), lengths: [ 5 ], lowerBounds: [ -2 ]);
// This does not compile, the type is Int32[*], not Int32[].
// Console.WriteLine(array[0]);
array.SetValue(value: 42, index: -2);
Console.WriteLine(array.GetValue(-2));
Unfortunately, Fortran's implementation of this has some inconsistencies. Doing certain operations will convert from the custom indexing back to 1-based indexing.
Worse than the pitfalls that can arise with a correct compiler is the fact that most Fortran compilers have bugs with non-default lower bounds -- and they're not always the same bugs, so portability is a real problem. The feature is fine as it stood with Fortran '77 dummy arrays.
Ada too, which is of course Pascal-based. Like many programming language features, it feels like it was lost simply because C didn't have it and everyone wanted to copy C.
Yes. I do this a lot when writing linear algebra stuff. All the math texts write things in 1-based notation for matrices, and the closer I can make the code match the paper I'm implementing, the easier life gets. Of course there's a big comment at the beginning of the function when I modify the pointer to explain why I'm doing it.
Technically no, a pointer pointing outside of its array (or similar) at any point is undefined behaviour. More importantly for this discussion, without support from the language it's not very ergonomic to work with. What happens when you need to call strlen, memcpy, or free?
It works in this case where you want to move the zero index forward a few cells to a valid offset. It is only UB for the general case where the offset may land outside valid memory. C has always supported negative indices, so moving index zero forward into the middle of the array is fine.
It does and doesn't. You can have any arbitrary index, but that changes the table from being an array to being both an array and a dictionary, some real weird Frankenstein stuff
> but that changes the table from being an array to being both an array and a dictionary
You're confusing the definition of the language with the implementation. In implementation you're right: most runtimes will treat arrays starting at 1 as a special case and optimize that access. The language itself doesn't make that distinction, though. Here an array is simply any table indexed by integers. The documentation states it thus:
You can start an array at index 0, 1, or any other value
The feature worked fine and was portable in Fortran ‘77, but its interactions with modern features are full of shocking pitfalls even when the compiler implements them correctly, which only two do, so it’s not really portable either.
I'd agree with 1-based indexing problems, but not 0-based, which seems very natural. And if you have -2-based, I'd argue that perhaps you don't want an array.
I think it's down to personal preferences/how you think. I haven't actually used any languages that didn't have 0 based indexing, but I remember it being very painful and super unintuitive to learn, it just didn't make sense at all (still doesn't, but it's not a problem for me anymore). I always thought 1 based would make a lot more sense and be way easier to learn.
None, in my experience. 1-based is far, far more likely to introduce errors, as you have to keep adjusting the index to the algorithm and/or what is actually going on underneath.
> as you have to keep adjusting the index to the algorithm and/or what is actually going on underneath.
Yeah, that's mixing both of them. Wouldn't it work as well if they all used 1-based indexing or 0-based indexing? Sounds like the issue was that algorithms/stuff underneath wasn't 1-based.
I would say it is sensible. Given that an array index is an unsigned integer, what are you going to do with an index of zero?
Perhaps I've been influenced by writing a lot of code in assembler, way back when, but zero-based has always seemed completely natural to me, to the extent that I find it very hard to understand algorithms expressed in non-zero based code.
An array index can be signed with no problem. If you're worried about the address calculations, well, the address of an array doesn't have to be the address to its initial element, does it? It can be the address of the element with index 0 (even if an array does not have such an index at all):
arr: array[-2..10] of integer;
pointer(@arr) == pointer(@arr[0])
Or you can use descriptors (dope vectors), but that involves quite an overhead. The books on compilers from the 70s (e.g. Gries's "Compiler construction for digital computers") have rather extensive discussions on both approaches.
Yes, I'm familiar with Pascal indexes, but I don't find them very natural for the kind of programs I write. I want functions/classes to do any sort of translation.
> The books on compilers from the 70s (e.g. Gries's "Compiler construction for digital computers")
Yellow cover, I think? My then GF bought it for Xmas at about 1984 or so. Not the best book on the topic, IMHO.
I think encountering C arrays and pointers rewrote my brain so that 0 based indexing made more sense even though up to that point (~40 years ago) I'd only used languages that used 1 based indexing (Basic and Pascal).
0-based is much simpler for any mathematical calculation done with the indexes. The only cases I can think of where you need to handle it are getting the last index from the length of the array, or interacting with the user. With 1-based you'll need to subtract or add 1 all over the place when doing almost anything.
Your issue is you are trying to work out how to use this feature as a zero-based array.
EDIT (for more explanation): I have an input value from -10 to 100. I want to use this value to look up something in a table. In a zero-indexed world I have to know what the lowest value is and subtract that from the input value to get to zero (so "another parameter to pass to the function").
With an arbitrary start index the array is just indexed from the lowest value (-10). There is nothing more needing to be passed in.
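In languages without the feature you end up hand-rolling exactly that offset; here's a toy Python sketch of a Fortran-style lower-bound array (the class name and API are mine, just to illustrate):

```python
class OffsetArray:
    """A list with an arbitrary lower bound, Fortran-style."""

    def __init__(self, lo, hi, fill=None):
        self.lo = lo
        self.data = [fill] * (hi - lo + 1)

    def __getitem__(self, i):
        return self.data[i - self.lo]  # the offset lives here, once

    def __setitem__(self, i, value):
        self.data[i - self.lo] = value

# Lookup table indexed directly by an input in [-10, 100]
table = OffsetArray(-10, 100)
table[-10] = "lowest"
table[0] = "zero"
print(table[-10], table[0])
```

The caller just indexes with the raw input value; nothing extra gets passed around.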
You might want the element at "position" 0 though (which with the origin at -2 would be the 3rd element). E.g. treat the array index as a coordinate in a 1D coordinate system with user-defined origin.
The fact that this person put a snarky (???) next to the concept of measuring airspeed in knots makes me wonder just how familiar they are with aerospace in general. A knot is a standard measurement in part because one nautical mile is equal to one minute of latitude. Which is useful until the day we decide to start measuring lat/long in radians.
I know it's fashionable in some circles to dump on everything non-metric, but these things are the way they are for reasons, and airspeed is generally measured in either knots or Mach.
Hi author here, I have worked in the aerospace industry on flight control systems. I'm very familiar with knots as a unit.
I'm just annoyed that so many educational resources and even flight code still use customary units for everything. I believe that metric should be used for all internal calculations and knots should only exist at the UI layer for the pilot.
When I play flight games though, I only have intuition for speed in terms of knots. Like 150 kts is takeoff speed and 400 kts is the corner speed.
I just think that the pilot letting out a spool of knotted rope from their plane is a very silly practice to defend :)
As someone who works in flight simulation, I was also taken aback by this. We are internally metric all the way, but even so, the knot is not a weird unit in this space at all. I also never heard about those coordinate conventions.
>airspeed is generally measured in either knots or Mach
boatspeed is also measured in knots... but only very rarely in Mach (and not even primarily because the speed of sound in water is substantially higher than in air)
Historically, I'm fairly certain airspeed was measured in knots because of boatspeed. In software terms, the problem of navigating over the Earth's surface was forked from boats to planes, and why re-invent the wheel?
It also tends to be more of a standard in professional aviation, whereas some small civilian bugsmashers in the US use MPH, presumably because it's easier for the doctors and lawyers of the world to understand after driving their cars, and if you're VFR, who really cares?
I saw slugs and I immediately had PTSD from my days in aero engineering.
Imagine trying to keep track of pounds in a single set of equations using subscripts m and f to differentiate between the units’ use for both mass and force.
This seems like a really reasonable implementation looking at the actual physics loop. From a game design perspective, lookup tables can be a very powerful way for artists to interact with the physics. For example, it makes balancing 2 different vehicles a lot easier in conversation. You could say things like "I think we need to reduce the performance of XYZ between 1000 and 8000 feet to balance with ABC". Inserting a new band into the table is an obvious exercise in Microsoft Excel.
Awesome achievement but I have to say the units stressed me out. I hope that in the real world there's some kind of dimensional analysis code linter that verifies no one is comparing slugs to feet or something, and that altitude doesn't go below zero.
In practice, unit checking is almost never done on actual code, though it should be done. From what I recall, some Fortran folks have been trying to get unit checking into the Fortran standard itself since the 1970s without success.
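Short of language support, you can get a crude version of this with a tagged-value type; a toy Python sketch (not any real aerospace tooling, just the idea):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    unit: str  # e.g. "ft", "m", "slug"

    def __add__(self, other):
        # Adding mismatched units is the classic bug this catches
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} and {other.unit}")
        return Quantity(self.value + other.value, self.unit)

altitude = Quantity(35000.0, "ft")
mass = Quantity(1.0, "slug")
try:
    altitude + mass  # comparing slugs to feet
except TypeError as e:
    print("caught:", e)
```

A real system would also track unit exponents through multiplication and division, which is where it gets hard enough that people want it in the compiler.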
An approach based on static analysis would not have the downsides I listed, but I personally would prefer being unable to compile the code at all if it had an error that could be detected.
In my experience in both the aerospace industry and the video game industry, there are no tools like this in use. In aerospace specifically, errors like that are caught by manual human review, including third-party validation companies. Unit tests are used to sanity check every calculation. And finally, everything is run in a flight simulator before ever going onto a real aircraft.
In video games however, it's the wild west. Math errors there are funny, not deadly.
You actually need the altimeter to be capable of reading below 0 in the real world.
Aircraft altimeters are based on air pressure. The reference pressure is variable based on atmospheric conditions and is decided by controllers, or in the case of uncontrolled airspace, the matching the altimeter of the known altitude that you took off from.
All that is to say, on my home (uncontrolled) airport which is ~10 feet above sea level, I occasionally land at a negative altimeter value since the weather conditions have changed while I am in the air.
Assuming that an altimeter reading must be positive is a bad assumption.
"A Sami measurement of distance; the distance a reindeer can travel before needing to stop to urinate. Today used to describe something that is at a very obscure distance away" (approximately 7.5 km)
The whole imperial/British side has tons of ugly and weird units of measure. Reading old engineering documentation will give you headaches and nightmares from all the inconsistent systems of units.
My property is very irregularly shaped with the longest straight dimension being 6" shy of a quarter mile. The really scary thing is that I recognized that immediately the first time I looked at the property survey map.
Completely NSFW, but I worked at a company where we frequently needed to measure distance in approximate meters. I don't know the full origin, but at some point, after a night of drinking, one of the managers discovered that an elephant penis was approximately one meter.
From then on, any estimated distance was done in EDs.
In keeping with the discussion here a few days ago, this article demonstrates exactly my point about developer types not being the target audience for computational codes. Their interpolation routine already looks extremely verbose; I cannot imagine them attempting to solve or compute any formula more complex than a mere 2D interpolator while writing that verbosely. The result would be unreadable, to my eyes.
Modern fortran has a bit more modern flavour to it, but formulae being so verbose would make the "formula" aspect of a code disappear.
Math or engineering people do not have problems with short symbol names; that's it. Also, our formulas are complex. If you wrote out many of our formulas with long descriptive variable names, the structure and relations between the variables would no longer be readable, and there would be many more mistakes.
We were educated doing these things on paper with pages and pages of math with these symbols.
20 years later I can still read just this fortran code and guess most of what's going on with no context needed.
I have problems reading "developer code" that boil down to "where the hell is the part of the code that actually does something?!" being perpetually lost in layer among layer of abstraction with the relevant connecting pieces far far apart.
Sometimes I dream about a language with a “blessed” editor that just had a rich 2D formula and matrix notation editor. Having context in the form of a formula is sometimes helpful in remembering what a pesky one-off variable name might mean…
The confusion between force and mass tends not to get appreciated by many folks outside of engineering. Particularly in aerospace when you slip the surly bonds of earth and your mass stops being so bound to the force your feet impart on the floor.
I am pretty firmly in the Z-up camp. I mean, I understand the Y-up camp: take a normal picture, or perhaps a side-scroller game, where Y is up and down, and the logical extension to three dimensions is to push Z as depth. But I can't do it. I see X and Y as in a map, where the obvious Z direction is up and down. Yes, Minecraft triggers me so hard, don't even get me started.
I’d heard at least one FBW system was written in Lisp, or an interface for it was. I don’t remember which aircraft, but I think it was one of the F-15s. It seems like it was used at that time for some flight system development.
I'm somewhat surprised the physical units matter in a computer simulation. As long as the simulation is internally consistent (which I would assume the original code is) it seems odd to me that the nominal units are important at all.
Is that video game character walking 100 yards? Or maybe the character is 500 feet tall and he's walking 50 leagues?
The author is at least partially reusing the Unity physics engine instead of the physics integrator from the original simulation. It is not a mechanical 1:1 translation of Fortran code to C#. Once you start mixing the two sims, units and other constants matter. Also, the Unity physics engine is fine-tuned for operating at a certain scale with certain units. You could operate it in different units, but that would potentially require readjusting not just major physics constants but also a bunch of poorly documented magic parameters designed to prevent the Unity sim from exploding or wasting resources.
That makes sense, then. He's porting between two different physics engines, not just putting a visualization layer on top of an existing simulation translated from Fortran to C#.
Gravity doesn't scale this way. Earth's gravitational pull (~10 m/s²) is a constant that ties distance to time, so if you want to observe the same gravitational effects at a small dimensional scale, you need to adjust the time scale to compensate.
This is why scale model sets for practical VFX in cinema were always recorded at higher frame rates and then played back slower.
That's only a problem if you're dealing with real gravity shooting real models in a real physical world. In a simulation, you can set G to whatever it needs to be.
The acceleration defined to simulate gravity in this Fortran code is presumably already proportional to everything else defined in the simulation.
Physical units help modeling and review, it's much easier to sanity check values from just reading, and comparing constants. I've heard that for rendering/raytracing, a lot of people move to model light intensities by actual physical values which makes it much easier to author and compare - such as for example "oh I want a light here that is as bright as this old lightbulb I like so much, but there's sunlight also shining through the window". That's much easier than estimating it in arbitrary units, in particular as here human perception is non-linear. I assume this would be similar for other simulations.
They don't matter as long as the relative values and the relationships hold. For video games where you're making something arcadey you're often implementing the physical behavior in terms of regular old physics but the values used don't have to be real as long as you get the right results. It's just when you're simulating something real it's useful to have units that make intuitive sense and that's often real world units as we've built up a lot of understanding by using them already. In particular with a flight model based on look up tables from real world data it's already in a set of units.
In the late 90's, I worked on porting a FORTRAN reactor simulator to Windows 95/NT/XP. On the order of 30 MLoC, for an industry/research joint project that was around 40 years old at the time. It took a long time to compile, but even longer to run. It turned out that disabling swap on Windows was the single biggest runtime performance improvement on machines with around 128-256 MiB of RAM. Windows would cheerfully page out the working set when there was no other non-system source of memory pressure.
PS: The time for SMRs was 40 years ago. Renewables radically transformed the economics such that only fusion reactors, geothermal, and tidal are part of a short list of viable alternatives because fossil fuels and fission need to go the way of whale oil and hunting passenger pigeons.
My Mom was a FORTRAN programmer for a defense contractor from the late 50s to the mid 90s. She spent her career doing things like that F16 simulator. The last thing I remember her talking about was one of the B1 or B2 bombers.
When I was elementary school age in the late 70s, I'd go into her office with her sometimes on school holidays that were not work holidays. She'd let me play Colossal Cave Adventure on a terminal in her office. I remember looking at printouts of code she was working on, and seeing a wireframe diagram of some futuristic plane. I think it must have been one of the first Stealth fighters. I wonder how many security protocols she broke by having me there... :)
She passed away many years ago, but I think she would have been excited to see work like this.
FWIW she probably wasn’t breaking any security protocols - basically as long as the kids can’t read well they are fine to be escorted without the red light.
I used to take my daughter into the SCIF when she was about 2 when needed, and there was a week when I took my son into work at the Pentagon.
I was working for Snake Clark at the time and people kept bringing him candies and treats. I think he was watching cartoons on Nick in the deputy's office most of the day. He gets a kick out of that story now.
I do a double-take if I see a dog! I've seen babies, but I'm pretty sure anyone that can walk or talk would be a no-go. My assumption is OP wasn't in a facility like that.
Correct... It was an office in a suburban office park. I remember there were some areas that I was not allowed in. But I don't know if that was for security reasons, or just due to general non-kid-safe areas. I don't remember guards, but it was almost 50 years ago and I really can't recall.
I interviewed at LANL early in my career, and I was struck by how much security there was. My mom's office was nowhere close to that level of security.
Ultimately it’s up to the SSO/SSM/facility manager to make the risk call, and that’s largely based on commander guidance and DSCID inspections from the HQ SSO.
Plenty of shenanigans in JSOC scifs that wouldn’t fly on NSA campus.
Until your kid reads some classified terms off a burn bag...
Great story, thank you for sharing!
Your mom was awesome :)
When I was growing up computers were getting faster by leaps and bounds, and I remember a video game that was so 'sophisticated' that it needed a math (FP) co-processor (80387) to play some parts of it:
> The program requires a minimum of a 12MHz 80286, 1MB RAM, DOS 5.0 or DR DOS, one 1.2MB 5 1⁄4" or 1.44MB 3 1⁄2" disk drive, hard drive with 11MB of free disk space, and VGA graphics. In addition, Falcon 3.0 supports a joystick, a joystick with a throttle, dual joysticks, rudder pedals or the ThrustMaster controls. The game also supports a mouse and various sound cards, including the Ad Lib, Sound Blaster and Roland. The 80x87 math coprocessor is supported for the High Fidelity flight model.
> Optimal system requirements are a 20MHz 80386 system or faster, 80x87 math coprocessor, 4MB RAM with EMS (expanded memory), DOS 5.0 or DR DOS, one 1.2MB 5 1⁄4" or 1.44MB 3 1⁄2" disk drive, hard drive with 11MB of free space, a 16-bit VGA card, a mouse and a joystick.
* https://vtda.org/docs/computing/SpectrumHolobyte/Falcon_3.0_...
* https://en.wikipedia.org/wiki/Falcon_3.0
"Wow!" we thought at the time.
I have fond memories of playing the original Falcon on my Amiga 500. It felt like magic after years of playing F15 Strike Eagle on the Apple IIc. Hearing real sound effects kicking in the afterburner and getting too close to the ground ("Pull Up! Pull Up!") were all so satisfying.
I remember being so excited when Falcon 3.0 came out. But it just felt like a let down. The graphics were amazing for the time and it seemed so realistic, but for me the realism is what killed all the fun. As a kid, I didn't actually want to BE an expert F16 pilot. I just wanted to feel like I was. I didn't want to have to learn all the systems and controls.
That and F-18 as well.
I think that magic is now gone, back then playing games was a bit like reading a book, we had to make use of our imagination to compensate for the lousy graphics, especially when being able to visit arcades.
Then suddenly having computers at home with similar arcade like graphics felt like the future.
Now we get real time rendering without useful gameplay, in many AAA games.
I felt a similar way when I moved from Chuck Yeager's Air Combat to (I think) F15 Strike Eagle 3. Yeager was the perfect mix for me. I remember rarely even seeing the enemy planes I was firing at in newer games.
Having been the kid who loved to geek out over flight sims, and then the adult who was fortunate enough to have flown military jets, I find the trend for uber-realistic military sims like DCS and such kind of sad in a way.
I mean, the software itself is impressive. But the idea of grown adults geeking out over old versions of the NATOPS and trying to develop tactics and such is frankly cringe. You're never going to get it right, because the actual thing is classified. And from the outside looking in, it's like watching a kid put on Dad or Mom's suit jacket to play "office."
>I find the trend for uber-realistic military sims like DCS and such kind of sad in a way.
But things like Ace Combat, "H.A.W.X", War Thunder, Project Wingman, Nuclear Option, all those games are incredibly popular. The arcade combat genre is alive and well.
I do wish games like VTOL VR, DCS, and other more serious sims had a "maybe don't make me read 700 pages of manual to lock and fire a missile" option. Even something as simple as VTOL VR telling me exactly what machine my targeting pod is locked onto would be nice. Just give me a name and basic specs. It's a damn game, I shouldn't need to memorize target silhouettes unless I want to.
IL-2 Sturmovik: Great Battles actually does this well, with an entire page of "simplify things please" options.
> You're never going to get it right, because the actual thing is classified.
Except for the stuff leaked on the War Thunder forum.
Or, often Russian, FTP servers. Aaah, good times.
Long-time Navy jet jock finds it "cringe" when people try to get a little break from the stresses of their life by attempting in a very small way to emulate what he achieved.
I get your point but come on man, ease up. At least remember that some of those DCS-playing wage slaves helped fund your adventures.
Yeah the HIGH FIDELITY FLIGHT MODEL required that math co processor. When you turned it on it felt like you had tapped into the WOPR from Wargames!
Falcon 4.0 source code leak led to Falcon BMS, still actively developed today.
Falcon 4.0 was one of the worst bugfests I ever encountered, practically unplayable; the modding community did a tremendous job to fix and improve that mess. At the time of its release it was even worse than Privateer 2: The Darkening, a title whose structural imperfections were only matched by its utterly bizarre Britkraft sci-fi setting.
Privateer 2, as I recall, just had the "Privateer" name slapped on a completely unrelated project due to executive meddling.
That aside, I kind of weirdly liked it. The story was hard as hell to follow, but the setting was interesting in a trippy way.
I remember the opposite also. We used to play a lot of Robotron on my roommate's 8086 PC clone. Then he was so excited to upgrade to a '286. Robotron became unplayable. My guess is that it was coded with a lot of busy loops because it was so insanely fast on the 286 that a human simply didn't have time to respond.
I guess they probably came up with a new version, dunno.
This is actually the reason we had "turbo" buttons on PCs back then. It wasn't to overclock. Instead, it was to underclock to some backward-compatible CPU speed that would allow legacy software like that to hopefully work OK.
However, I'm not sure how many different compatibility models were being targeted.
The Falcon games still have a large following today.
Most people play Falcon BMS which is a mod of Falcon 4.
https://www.falcon-bms.com/
I did enjoy the maps that came with it.
Granted, I probably would've had an easier border crossing if I didn't have a flight map for Kuwaiti airspace, complete with markings for anti-air defenses in the backseat of my car that one time, but it was _still_ worth it :)
Awesome read except I have one nit. Instead of writing translating functions to convert everything from US Customary units to Unity, Metric, and the rest of sanity, you should have converted everything to metric in the formulas. Rewritten the math. It's not that hard, it's not that difficult, saves you all those function calls, improves performance, and simplifies the math a whole bunch.
Air density can become pliable, controllable. Thrust velocities match projectile spec velocities (m/s) making calculating intersect a breeze. Trust me, you want to convert the FORTRAN math into standard metric. They didn't believe in Base10 back then so do the math.
Hi, author here.
I wanted to keep the data tables exactly the same in the C# translation. While formulas can be converted to metric easily, the data tables do not actually have units defined for any of them, so translating them to metric is non-trivial. And the way data is mixed from multiple tables is very complex. So verifying that the calculations have equivalent results in metric is well beyond my ability.
Or rather, the best way to verify it would be to write a flight model that uses customary units (what I describe in the article), and then use that to verify that the metric flight model is equivalent.
At the end of the day, only the inputs and outputs of the flight model need to be converted, which is just a handful of multiplication operations. The flight model is very cheap to run, even with conversions at runtime.
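A rough sketch of what boundary-only conversion looks like. The function names and values here are hypothetical, not from the post; only the conversion factors themselves are standard (the foot and pound-force are defined exactly in metric terms, the knot via the 1852 m nautical mile):

```python
# Hypothetical boundary conversions: the flight model keeps its original
# US customary units internally; only values crossing the boundary are scaled.

FT_TO_M = 0.3048               # exact, by definition
KT_TO_MPS = 1852.0 / 3600.0    # knots -> metres per second
LBF_TO_N = 4.4482216152605     # pound-force -> newtons, exact

def to_flight_model(altitude_m, airspeed_mps):
    """Metric engine state -> the model's native feet and knots."""
    return altitude_m / FT_TO_M, airspeed_mps / KT_TO_MPS

def from_flight_model(thrust_lbf):
    """Model output (pound-force) -> newtons for the physics engine."""
    return thrust_lbf * LBF_TO_N

alt_ft, speed_kt = to_flight_model(3048.0, 77.1666667)  # 10,000 ft, ~150 kt
thrust_n = from_flight_model(1000.0)
```

Each frame this costs a few multiplications, which is negligible next to the table lookups the model itself performs.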
This is exactly what unit tests are for. You did the first (hard) part: dissecting the Fortran ark for goodies. You built a Unity equivalent (slightly less hard than deciphering ancient text, but still high on difficulty). Now write tests that cover those inputs and outputs (translation calls included). Great; now that all the tests pass, change to metric until the tests pass again.
I know it sounds daunting. Making a painting does too to someone who doesn’t paint. However, there are steps you can follow that make it super easy to create masterpieces and all the greats follow the same process.
So where you translate ft to m or slug/ft to slug/m or however - the surrounding math is perfect for a unit test. Keeping you from having to build another flight model and do analog mark-2 eyeball testing.
You could just leave it - someone will pick it up and do the work.
*EDIT* I ran some of it through my models and converted AirDataComputer to Kelvin, kg/m, m/s:
I was going to say this! Why not just convert all the constants and units once since the mathematical relationships are the same?
> 1 nautical mile equals to one minute in the 90 degrees hemisphere arc
The nautical mile is not an SI unit, so it is not defined by a single organization. Your definition used to be the common one, but the relevant organizations have since updated the definition to exactly 1852 m. Under the original definition of the metre it would have been 1851.85 m, about 15 cm shorter; with newer measurements of the Earth it would be more like 1855 m.
> The nautical mile is not an SI unit, so it is not defined by a single organization
"In 1929 the International Hydrographic Bureau obtained an agreement from a large number of countries to adopt a value of 1852 metres for the nautical mile, the unit thus defined to be called the International Nautical Mile."
* https://usma.org/laws-and-bills/adoption-of-international-na...
* https://en.wikipedia.org/wiki/International_Hydrographic_Org...
But there was no treaty or anything with a fancy ceremony, just a 'handshake', and so it was up to each country to adopt it with a domestic law or regulation, which (e.g.) the US did in 1954:
* https://usma.org/wp-content/uploads/2015/06/Nautical-Mile.pd...
Previously in the US it was 1853.25 m (because the US is actually metric "officially": all of its customary units (ft, oz) are defined in terms of metric equivalents):
* https://usma.org/laws-and-bills/mendenhall-order
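The "original" figure above is easy to check: the metre was first defined as one ten-millionth of the pole-to-equator quarter meridian, and a nautical mile was one minute of arc along it. A quick back-of-the-envelope sketch:

```python
# One nautical mile was originally one minute of meridian arc. With the
# original definition of the metre (quarter meridian = 10,000,000 m):
quarter_meridian_m = 10_000_000
minutes_per_quarter = 90 * 60               # 90 degrees of 60 minutes each
original_nm = quarter_meridian_m / minutes_per_quarter   # ~1851.85 m

international_nm = 1852.0                   # fixed by agreement in 1929
shortfall_cm = (international_nm - original_nm) * 100    # ~15 cm
```

The ~1855 m figure comes from using a modern Earth radius instead of the idealized 10,000 km quarter meridian, since the Earth is not a perfect sphere.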
"It turns out, Fortran is actually pretty good at translating formulas."
I'm hoping they know that the language's name comes from FORmula TRANslator...
Seems too good of pun to not be deliberate.
Thats_the_joke.jpg
.......that's the joke
> Note that Fortran supports arrays with an arbitrary starting index, in this case -2. So this table supports indices in the range [-2, 9].
That is such a useful feature! Surprised I haven’t seen that more often. So much fiddly code exists that’s just fixing offsets to conform to 0 (or 1 for Lua) -based indexing!
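For languages without the feature, a tiny wrapper recovers it. This is a hypothetical sketch (the class and names are invented), storing the lower bound and shifting on every access, like Fortran's `T(-2:9)` declaration:

```python
# Emulating Fortran's arbitrary lower bounds in a 0-based language:
# keep the declared lower bound and subtract it on each access.

class OffsetArray:
    def __init__(self, lo, hi, fill=0.0):
        self.lo = lo
        self.data = [fill] * (hi - lo + 1)

    def __getitem__(self, i):
        return self.data[i - self.lo]

    def __setitem__(self, i, value):
        self.data[i - self.lo] = value

table = OffsetArray(-2, 9)   # valid indices -2..9, like Fortran's T(-2:9)
table[-2] = 1.5
table[9] = 42.0
```

The cost is one subtraction per access, which compilers for languages with native support typically fold into the base address at compile time.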
.NET supports this because [Visual] Basic supports it. This can be used from C# - and other languages - but there is no nice syntax supporting it.
TIL thanks!
Unfortunately, Fortran's implementation of this has some inconsistencies. Doing certain operations will convert from the custom indexing back to 1-based indexing.
https://github.com/sourceryinstitute/fidbits/blob/master/src...
https://fortran-lang.discourse.group/t/just-say-no-to-non-de...
Worse than the pitfalls that can arise with a correct compiler is the fact that most Fortran compilers have bugs with non-default lower bounds -- and they're not always the same bugs, so portability is a real problem. The feature is fine as it stood with Fortran '77 dummy arrays.
Pascal (and Modula if I'm not mistaken) supports this too.
Ada too, which is of course Pascal-based. Like many programming language features, it feels like it was lost simply because C didn't have it and everyone wanted to copy C.
Reminds me of this fantastic talk: https://www.youtube.com/watch?v=wo84LFzx5nI
Many BASIC dialects as well.
Hence why the whole base index discussion only became relevant in C based languages.
In C can't you just offset pointer and then you'll be able to index with arbitrary starting value?
Yes. I do this a lot when writing linear algebra stuff. All the math texts write things in 1-based notation for matrices. The closer I can make the code match the paper I'm implementing makes life so much easier. Of course there's a big comment at the beginning of the function when I modify the pointer to explain why I'm doing it.
Technically no, a pointer pointing outside of its array (or similar) at any point is undefined behaviour. More importantly for this discussion, without support from the language it's not very ergonomic to work with. What happens when you need to call strlen, memcpy, or free?
It works in this case where you want to move the zero index forward a few cells to a valid offset. It is only UB for the general case where the offset may land outside valid memory. C has always supported negative indices, so moving index zero forward into the middle of the array is fine.
Lua actually has arbitrary indexing, it's just that some iterator functions in the standard library assume arrays begin at 1.
It does and doesn't. You can have any arbitrary index, but that changes the table from being an array to being both an array and a dictionary, some real weird Frankenstein stuff
> but that changes the table from being an array to being both an array and a dictionary
You're confusing the definition of the language with the implementation. In implementation you're right: most runtimes will treat arrays starting at 1 as a special case and optimize that access. The language itself doesn't make that distinction, though. Here an array is simply any table indexed by integers. The documentation states it thusly:
The feature worked fine and was portable in Fortran ‘77, but its interactions with modern features are full of shocking pitfalls even when the compiler implements them correctly, which only two do, so it’s not really portable either.
I'd agree with 1-based indexing problems, but not 0-based, which seems very natural. And if you have -2-based, I'd argue that perhaps you don't want an array.
I think it's down to personal preferences/how you think. I haven't actually used any languages that didn't have 0 based indexing, but I remember it being very painful and super unintuitive to learn, it just didn't make sense at all (still doesn't, but it's not a problem for me anymore). I always thought 1 based would make a lot more sense and be way easier to learn.
The zero-based approach makes sense in C, where it is syntactic sugar and `a[i]` is equal to `*(a + i)`. Treating the index as an offset from 0 is logical.
The more you go away from raw pointer semantics the less intuitive it gets.
For extra fun, in C you can write i[a] since addition is commutative *(i+a) == *(a+i)
I wonder how many of the "off by one" issues/bugs we encounter in the wild are because of arrays typically using 0-based indexing vs 1-based indexing.
None, in my experience. 1-based is far, far likely to introduce errors, as you have to keep adjusting the index to the algorithm and/or what is actually going on underneath.
> as you have to keep adjusting the index to the algorithm and/or what is actually going on underneath.
Yeah, that's mixing both of them. Wouldn't it work as well if they all used 1-based indexing or 0-based indexing? Sounds like the issue was that algorithms/stuff underneath wasn't 1-based.
I would say it is sensible. Given that an array index is an unsigned integer, what are you going to do with an index of zero?
Perhaps I've been influenced by writing a lot of code in assembler, way back when, but zero-based has always seemed completely natural to me, to the extent that I find it very hard to understand algorithms expressed in non-zero based code.
An array index can be signed with no problem. If you're worried about the address calculations, well, the address of an array doesn't have to be the address to its initial element, does it? It can be the address of the element with index 0 (even if an array does not have such an index at all):
Or you can use descriptors (dope vectors), but that involves quite an overhead. The books on compilers from the 70s (e.g. Gries's "Compiler construction for digital computers") have rather extensive discussions on both approaches.

Yes, I'm familiar with Pascal indexes, but I don't find them very natural for the kind of programs I write. I want functions/classes to do any sort of translation.
> The books on compilers from the 70s (e.g. Gries's "Compiler construction for digital computers")
Yellow cover, I think? My then-GF bought it for me for Xmas around 1984 or so. Not the best book on the topic, IMHO.
I think encountering C arrays and pointers rewrote my brain so that 0 based indexing made more sense even though up to that point (~40 years ago) I'd only used languages that used 1 based indexing (Basic and Pascal).
0 based is much simpler for any mathematical calculation done with the indexes. only reasons I can think where you need to handle it is when getting the last index from the length of the array or when interacting with the user. With 1-based you'll need to subtract or add 1 all over the place when doing almost anything
Not sure why I would want that. Now to get the 3rd element from the array, you have to know the start index, so that's another parameter to pass to the function.
Your issue is that you are trying to use this feature as a zero-based array.

EDIT (for more explanation): I have an input value from -10 to 100. I want to use this value to look up something in a table. In a zero-indexed world I have to know what the lowest value is and subtract that from the input value to get to zero (so "another parameter to pass to the function").

With an arbitrary start index, the array is just indexed from the lowest value (-10). There is nothing more needing to be passed in.
You might want the element at "position" 0 though (which with the origin at -2 would be the 3rd element). E.g. treat the array index as a coordinate in a 1D coordinate system with user-defined origin.
When you pass the array to a function, it is 1-indexed by default in the body of that function, unless that function sets a specific starting index.
The fact that this person put a snarky (???) next to the concept of measuring airspeed in knots makes me wonder just how familiar they are with aerospace in general. A knot is a standard measurement in part because one nautical mile is equal to one minute of latitude. Which is useful until the day we decide to start measuring lat/long in radians.
I know it's fashionable in some circles to dump on everything non-metric, but these things are the way they are for a reason, and airspeed is generally measured in either knots or Mach.
Hi author here, I have worked in the aerospace industry on flight control systems. I'm very familiar with knots as a unit.
I'm just annoyed that so many educational resources and even flight code still use customary units for everything. I believe that metric should be used for all internal calculations and knots should only exist at the UI layer for the pilot.
When I play flight games though, I only have intuition for speed in terms of knots. Like 150 kts is takeoff speed and 400 kts is the corner speed.
I just think that the pilot letting out a spool of knotted rope from their plane is a very silly practice to defend :)
I learnt to fly in metric, every time I see feet and feet-per-minute in sims I'm confused as hell, and you can't ask virtual ATC to switch to metric.
At least nautical miles and knots have some use on this planet specifically...
As someone who works in flight simulation I was also taken aback hard by this. We are internally metric all the way, but even so the knot is not a weird unit in this space at all. I also never heard about those coordinate conventions.
>airspeed is generally measured in either knots or Mach
boatspeed is also measured in knots... but only very rarely in Mach (and not even primarily because the speed of sound in water is substantially higher than in air)
Historically, I'm fairly certain airspeed was measured in knots because of boatspeed. In software terms, the problem of navigating over the Earth's surface was forked from boats to planes, and why re-invent the wheel?
It also tends to be more of a standard in professional aviation, whereas some small civilian bugsmashers in the US use MPH, presumably because it's easier for the doctors and lawyers of the world to understand after driving their cars, and if you're VFR, who really cares?
US customary is prevalent because the USA after WW2 was the China of aviation: the market got flooded by aircraft that used US customary units.
Nautical miles and knots were already present in long-range navigation, especially oceanic.
This is an epic blog post!
In the words of Jerry Maguire: "You had me at hello."
This early sentence really hooked me:
My favourite part: The image comparing X/Y/Z axes between different 3D modelling products: https://vazgriz.com/wp-content/uploads/2025/05/EmVSW5AW8AAoD...

Plus the comment below can make you laugh hard enough to get Coca Cola up your nose!
I saw slugs and I immediately had PTSD from my days in aero engineering.
Imagine trying to keep track of pounds in a single set of equations using subscripts m and f to differentiate between the units’ use for both mass and force.
This seems like a really reasonable implementation looking at the actual physics loop. From a game design perspective, lookup tables can be a very powerful way for artists to interact with the physics. For example, it makes balancing 2 different vehicles a lot easier in conversation. You could say things like "I think we need to reduce the performance of XYZ between 1000 and 8000 feet to balance with ABC". Inserting a new band into the table is an obvious exercise in Microsoft excel.
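A minimal sketch of that lookup-table idea: performance as a piecewise-linear function of altitude. The breakpoints and values here are invented for illustration; the point is that retuning a band, or inserting a new one, is a data edit, not a code change:

```python
import bisect

# (altitude_ft, thrust_fraction) breakpoints, sorted by altitude.
# Designers can edit this table in a spreadsheet without touching code.
table = [(0, 1.00), (1000, 0.97), (8000, 0.80), (20000, 0.55)]

def lookup(alt):
    alts = [a for a, _ in table]
    # clamp outside the table, interpolate linearly inside it
    if alt <= alts[0]:
        return table[0][1]
    if alt >= alts[-1]:
        return table[-1][1]
    i = bisect.bisect_right(alts, alt)
    (a0, v0), (a1, v1) = table[i - 1], table[i]
    t = (alt - a0) / (a1 - a0)
    return v0 + t * (v1 - v0)
```

To "reduce performance between 1000 and 8000 feet", a designer lowers the values at those two breakpoints, or inserts a new row between them; the interpolation code never changes.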
Awesome achievement but I have to say the units stressed me out. I hope that in the real world there's some kind of dimensional analysis code linter that verifies no one is comparing slugs to feet or something, and that altitude doesn't go below zero.
In practice, unit checking is almost never done on actual code, though it should be done. From what I recall, some Fortran folks have been trying to get unit checking into the Fortran standard itself since the 1970s without success.
I was able to coerce Fortran's type system into checking units, but it comes with quite a few downsides: https://fortran-lang.discourse.group/t/compile-time-unit-che...
An approach based on static analysis would not have the downsides I listed, but I personally would prefer being unable to compile the code at all if it had an error that could be detected.
Elaborate typing is what Ada was for.
Hi, author here.
In my experience in both the aerospace industry and the video game industry, there are no tools like this in use. In aerospace specifically, errors like that are caught by manual human review, including third-party validation companies. Unit tests are used to sanity check every calculation. And finally, everything is run in a flight simulator before ever going onto a real aircraft.
In video games however, it's the wild west. Math errors there are funny, not deadly.
You actually need the altimeter to be capable of reading below 0 in the real world.
Aircraft altimeters are based on air pressure. The reference pressure is variable based on atmospheric conditions and is provided by controllers or, in the case of uncontrolled airspace, set by matching the altimeter to the known elevation of the field you took off from.
All that is to say, on my home (uncontrolled) airport which is ~10 feet above sea level, I occasionally land at a negative altimeter value since the weather conditions have changed while I am in the air.
Assuming that an altimeter reading must be positive is a bad assumption.
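A rough sketch of why the reading can go negative. This is not a real altimeter model; the function and the 1000 ft/inHg figure are the standard rule of thumb near sea level, and the scenario numbers are invented:

```python
# Indicated altitude moves with the reference (Kollsman) setting, at roughly
# 1000 ft of indication per 1 inHg of setting change near sea level.

FT_PER_INHG = 1000.0   # approximate rule of thumb

def indicated_altitude(field_elev_ft, setting_at_takeoff, setting_now):
    """Field elevation as read if the setting was never updated in flight."""
    return field_elev_ft + (setting_at_takeoff - setting_now) * FT_PER_INHG

# Take off from a 10 ft field with the altimeter set at 29.92 inHg; ambient
# pressure rises to 29.97 while airborne. Landing with the stale setting:
reading = indicated_altitude(10.0, 29.92, 29.97)   # about -40 ft
```

So a sim (or an avionics data type) that clamps indicated altitude at zero would misbehave exactly in the case the parent comment describes.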
I've seen a few libraries that attempt to put strong unit types into languages, to use the type system to ensure correctness.
In rust I'm familiar with https://docs.rs/uom/latest/uom/ and typescript I've seen safe-units https://jscheiny.github.io/safe-units/ used.
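The core trick behind those libraries can be hand-rolled in a few lines. A toy sketch (all names invented, and nowhere near what uom or safe-units actually provide): carry the dimensions as exponent tuples and refuse to add mismatched quantities.

```python
# Toy dimension checking: dims is a (length, mass, time) exponent tuple.
# Addition requires identical dimensions; multiplication adds exponents.

class Qty:
    def __init__(self, value, dims):
        self.value, self.dims = value, dims

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Qty(self.value + other.value, self.dims)

    def __mul__(self, other):
        return Qty(self.value * other.value,
                   tuple(a + b for a, b in zip(self.dims, other.dims)))

metre = Qty(1.0, (1, 0, 0))
second = Qty(1.0, (0, 0, 1))

d = Qty(3.0, (1, 0, 0)) + Qty(4.0, (1, 0, 0))   # fine: 7 metres
area = metre * metre                             # dims become (2, 0, 0)
```

Libraries like uom do this at compile time, so "slugs plus feet" fails the build rather than the flight.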
> altitude doesn't go below zero
Uhm...
https://www.c-130.net/forum/viewtopic.php?p=9974&sid=bb425b3...
The "UNITS" part is wonderful. I had no idea slugs even existed.
My favorite "strange unit" is poronkusema:
"A Sami measurement of distance; the distance a reindeer can travel before needing to stop to urinate. Today used to describe something that is at a very obscure distance away" (approximately 7.5 km)
https://en.wikipedia.org/wiki/Obsolete_Finnish_units_of_meas...
There's no reason for this to obsolete, I have to factor this into route planning for myself so I vote we bring this one back.
The 'List of non-coherent units of measurement' [1] and 'List of humorous units of measurement' [2] pages are always a fun rabbit hole to go down.
[1] https://en.wikipedia.org/wiki/List_of_non-coherent_units_of_...
[2] https://en.wikipedia.org/wiki/List_of_humorous_units_of_meas...
Ah, I now learned that "One moment, please" means that I'll have to expect to wait 90 seconds.
The whole imperial/British side has tons of ugly and weird units of measure. Reading old engineering documentation will give you headaches and nightmares from all the inconsistent systems of units.
My favourite one was when I learned about the existence of US survey feet.
My property is very irregularly shaped with the longest straight dimension being 6" shy of a quarter mile. The really scary thing is that I recognized that immediately the first time I looked at the property survey map.
Completely NSFW, but I worked at a company where we frequently needed to measure distance in approximate meters. I don't know the full origin, but at some point, after a night of drinking, one of the managers discovered that an elephant penis was approximately one meter.
From then on, any estimated distance was done in EDs.
Of interest: https://github.com/ericstoneking/42
A serious spacecraft simulator. Likewise developed from ancient FORTRAN code, in this case ported to C.
In keeping with the discussion here a few days ago, this article demonstrates exactly my point about developer types not being the target audience for computational codes. Their interpolation routine already looks extremely verbose; I cannot imagine them writing out any formula more complex than a mere 2D interpolator in that style. The result would be unreadable, to my eyes.
Modern Fortran has a bit more modern flavour to it, but formulae this verbose would make the "formula" aspect of the code disappear.
Math or engineering people do not have problems with short symbol names, that's it. Also our formulas are complex. If you wrote out many of our formulas with long descriptive variable names the structure and relation between the variables would no longer be readable, there would be many more mistakes.
We were educated doing these things on paper with pages and pages of math with these symbols.
20 years later I can still read just this fortran code and guess most of what's going on with no context needed.
I have problems reading "developer code" that boil down to "where the hell is the part of the code that actually does something?!" being perpetually lost in layer among layer of abstraction with the relevant connecting pieces far far apart.
Sometimes I dream about a language with a “blessed” editor that just had a rich 2D formula and matrix notation editor. Having context in the form of a formula is sometimes helpful in remembering what a pesky one-off variable name might mean…
I just learned about the slug and I'm absolutely flabbergasted that this exists.
Other "interesting" units include: shakes, barns and bans
https://en.wikipedia.org/wiki/Shake_(unit)
https://en.wikipedia.org/wiki/Barn_(unit)
https://en.wikipedia.org/wiki/Hartley_(unit)
The confusion between force and mass tends not to get appreciated by many folks outside of engineering. Particularly in aerospace when you slip the surly bonds of earth and your mass stops being so bound to the force your feet impart on the floor.
On the subject of coordinate axes:
I am pretty firmly in the Z-up camp. I mean, I understand the Y-up camp: take a normal picture, or perhaps a side-scroller game, where Y is up and down; the logical extension to three dimensions is to push Z out as depth. But I can't do it. I see X and Y as in a map, where the obvious Z direction is up and down. Yes, Minecraft triggers me so hard, don't even get me started.
Great writeup! Here is the result of my take on the same process - but translating to JavaScript instead so the simulator runs in the browser.
I later added the high fidelity flight dynamics model by translating C code.
https://github.com/kristoffer-dyrkorn/flightsimulator
I'd heard at least one FBW system was written in Lisp, or at least an interface for it was. I don't remember which aircraft, but I think it was one of the F-15s. It seems Lisp was used at that time for some flight system development.
I'm somewhat surprised the physical units matter in a computer simulation. As long as the simulation is internally consistent (which I would assume the original code is) it seems odd to me that the nominal units are important at all.
Is that video game character walking 100 yards? Or maybe the character is 500 feet tall and he's walking 50 leagues?
The author is at least partially reusing the Unity physics engine instead of the physics integrator from the original simulation. It is not a mechanical 1:1 translation of Fortran code to C#. Once you start mixing the two sims, units and other constants matter. Also, the Unity physics engine is fine-tuned for operating at a certain scale with certain units. You could operate it in different units, but that would potentially require readjusting not just major physics constants but also a bunch of poorly documented magic parameters designed to prevent the Unity sim from exploding or wasting resources.
That makes sense, then: he's porting between two different physics engines, not just putting a visualization layer on top of an existing simulation translated from Fortran to C#.
Gravity doesn't scale this way. Earth's gravitational pull (~10 m/s^2) is a constant that ties distance to time, so if you want to observe the same gravitational effects at a smaller length scale under the same pull, you need to adjust the time scale to compensate.
This is why scale model sets for practical VFX in cinema were always recorded at higher frame rates and then played back slower.
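That relationship is easy to state concretely: under constant gravity, time scales as the square root of the length scale, so a 1:16 miniature needs to be filmed at 4x the playback frame rate to fall convincingly. A quick check of the rule (function names are illustrative):

```python
import math

def overcrank_factor(scale):
    """Frame-rate multiplier for a miniature built at 1:scale.

    Under constant gravity, a fall over 1/scale the distance takes
    1/sqrt(scale) the time, so film sqrt(scale) times faster.
    """
    return math.sqrt(scale)

def shooting_fps(playback_fps, scale):
    return playback_fps * overcrank_factor(scale)

fps = shooting_fps(24, 16)   # a 1:16 miniature played back at 24 fps
```

In a game engine you get the same effect more cheaply by simply lowering the gravity constant for small-scale scenes, which is one reason the nominal units end up mattering.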
That's only a problem if you're dealing with real gravity shooting real models in a real physical world. In a simulation, you can set G to whatever it needs to be.
The acceleration defined to simulate gravity in this Fortran code is presumably already proportional to everything else defined in the simulation.
Right, but that's more effort than just using real units in the first place.
Physical units help with modeling and review: it's much easier to sanity-check values just by reading and comparing constants. I've heard that for rendering/raytracing, a lot of people have moved to modeling light intensities with actual physical values, which makes scenes much easier to author and compare: for example, "oh, I want a light here that is as bright as this old lightbulb I like so much, but there's also sunlight shining through the window." That's much easier than estimating in arbitrary units, particularly since human perception is non-linear. I assume this would be similar for other simulations.
They don't matter as long as the relative values and relationships hold. For video games, where you're making something arcadey, you often implement the physical behavior in terms of regular old physics, but the values used don't have to be real as long as you get the right results. It's just that when you're simulating something real, it's useful to have units that make intuitive sense, and that's often real-world units, since we've built up a lot of understanding by using them already. In particular, a flight model based on lookup tables of real-world data already comes in a specific set of units.
In mechanics simulations we often change the units to avoid adding small and large floats.
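One concrete way the unit choice bites is matrix conditioning: in SI units, a short beam element mixes translational and rotational stiffness terms that differ by factors of L and L², and the resulting matrix is nearly singular in floating point. A hedged sketch of that effect (the beam length is illustrative, not from any comment above):

```python
import numpy as np

def beam_block(L: float) -> np.ndarray:
    """2x2 slice of a classic beam element stiffness matrix,
    K = (E*I/L**3) * [[12, 6L], [6L, 4L**2]].
    The scalar prefactor doesn't affect conditioning, so it's dropped.
    """
    return np.array([[12.0, 6.0 * L],
                     [6.0 * L, 4.0 * L ** 2]])

cond_m = np.linalg.cond(beam_block(1.0e-3))  # 1 mm beam, lengths in metres
cond_mm = np.linalg.cond(beam_block(1.0))    # same beam, lengths in mm

print(f"{cond_m:.1e}")   # roughly 1e7: badly scaled
print(f"{cond_mm:.1f}")  # roughly 19: comfortable even in single precision
```

Switching the length unit so typical values sit near 1 is equivalent to rescaling the matrix, and it's often the cheapest conditioning fix available.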
They don't, but they have to be consistent. You wouldn't want to apply a hundred-fold torque because you mixed up your units.
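That hundred-fold factor is exactly what a metres-vs-centimetres mix-up produces. A minimal sketch with made-up numbers:

```python
# Torque tau = F * r, where the force is in newtons but the lever arm
# was entered in centimetres while the rest of the code expects metres.
force_n = 50.0
arm_cm = 20.0                           # someone typed centimetres...
tau_wrong = force_n * arm_cm            # 1000.0, treated as N*m downstream
tau_right = force_n * (arm_cm / 100.0)  # 10.0 N*m

print(tau_wrong / tau_right)  # 100.0: the hundred-fold torque
```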
Would you want to fly in an aircraft that's been modelled with pretend computer units? Or the one that was modelled correctly in a simulator? :)
Ok but if you did a global find-replace for "meters" with "smoots" do you think that the simulated plane would crash?
Ah but smoots would still follow the same maths as a meter right?
Yes, that's how numbers work. 10 smoots + 1 smoot is 11 smoots.
That's the point of my question.
Here is a Clojure implementation of a space simulator: https://wedesoft.github.io/sfsim/
This was such a thorough and entertaining writeup. Felt like art in a way. Utterly useless, but great historical reference and really funny at parts.
I agree, this was a great read.
In the late 90s, I worked on porting a FORTRAN reactor simulator to Windows 95/NT/XP: on the order of 30 MLoC, for an industry/research joint project that was around 40 years old at the time. It took a long time to compile, but even longer to run. It turned out that disabling swap on Windows was the single biggest runtime performance improvement on machines with around 128-256 MiB of RAM: Windows would cheerfully page out the working set even when there was no other non-system source of memory pressure.
PS: The time for SMRs was 40 years ago. Renewables have so radically transformed the economics that only fusion, geothermal, and tidal remain on the short list of viable alternatives; fossil fuels and fission need to go the way of whale oil and passenger-pigeon hunting.
Airplanes need air to fly [citation needed].