rockemsockem 3 years ago

There have been a lot of posts like this that boil down to "software engineering should be more like other engineering". What I find in all these threads is that people seem to assume that just because certain standards and certification bodies license engineers, all engineers of that sort participate.

They don't.

Computer engineers, electrical engineers, mechanical engineers, aerospace engineers, and probably more that I'm not specifically familiar with, design things like cellular antennas, CPUs, rocket engines, spacecraft, EV charging stations, etc., without these heavyweight certification structures. I tend to think we're better off without all that unnecessary red tape. There are better ways, IMO, to ensure that critical systems are engineered properly than to make engineers pay to take tests and pay license dues. I am also not generally a fan of putting some single body in charge of all engineering standards for a field.

  • velcrovan 3 years ago

    From the second paragraph: ‘I have no vision for seeing this implemented in the real world and do not pretend that it is feasible given the current landscape of incentives. Again, this is fantasy; approach it as a creative “world-building” exercise for a speculative civilization, rather than as a serious policy proposal.’

    > [Engineers] … design things like: cellular antennas, CPUs, rocket engines, spacecraft, EV charging stations, etc without these heavyweight certification structures.

    Correct, this is the current reality, which I agree is very distinct from what I describe in this fantasy worldbuilding exercise.

yjftsjthsd-h 3 years ago

I think the rings concept should be extended with further rings below. It's kind of fun as a fantasy, but the pain point is that for a lot of things it genuinely doesn't matter if the computer side of things kind of sucks. I would be totally, 100% in favor of all electronics sold on the mass market carrying a designated mark that says what level of assurance they were built with, but I don't think I mind people being allowed to sell things at ring three or four all the way down to the equivalent of what happens today. Obviously there are some cases where this isn't acceptable: medical, avionics, anything where humans can die needs to be in ring zero, and I agree with other commenters in this thread that financial stuff should be at a fairly high assurance level as well.

  • velcrovan 3 years ago

    I had more rings in mind, just haven’t thought it through yet. There might not need to be that many more though. Anything involving any communication at all would probably get an instant Ring 1. Possibly anything and everything else could go in a catch-all Ring 2. (PRs welcome)

wyager 3 years ago

This is an incredibly lame and uncreative vision for the future. Imagine wishing for additional bureaucratic overhead when we have technologies like formal verification which can achieve greater things at lower cost.

Industries should play to their comparative advantages. A huge and mostly untapped comparative advantage of computers is that they reify formal systems, completely unlike a suspension bridge or an office building. If your vision of the future of computers doesn't take that into account, it's worth little.

  • velcrovan 3 years ago

    Can you clarify what you mean by formal verification, and why this scheme necessarily excludes its use?

al2o3cr 3 years ago

    If a system deviates from its design, the contractor becomes
    liable for any such failure, whether related to the specific deviation
    or not.
This seems like a recipe for an eternal blame game in which the "engineer of record" aims to produce a plausible-but-impossible specification and manufacturers spend 99% of their time looking for loopholes.

If your aim is to explain why your scifi setting has an entire planet covered with a kilometer-thick layer of used triplicate forms, this would be a great place to start. :P

  • velcrovan 3 years ago

    This is a fair point, probably more so if applied to our own existing tech landscape, which is weighted in favor of innovation and proliferation. But in other fields (e.g. electrical engineering) a similar division of responsibility has been in use for decades and has not proven totally unmanageable.

    • kcb 3 years ago

      But that's not all electrical engineering. Most electrical engineering jobs don't require a professional license.

      • velcrovan 3 years ago

        Sure. The same would probably be the case here. Someone has to be the engineer of record. But not everyone who works in software would have to be a PE.

        • tonyarkles 3 years ago

          I’m saying this as someone who finds what you’ve written interesting and somewhat resonating with me: many many EE designs have zero need for an engineer of record. The vast majority of electronics designs in the world have never had a PEng stamp come anywhere near them. Of the EE PEngs I know, I can’t think of one who has ever actually used their stamp, they just got it either symbolically or for a small pay bump.

          • velcrovan 3 years ago

            That makes sense. Most of the EE PEngs I know are designing electrical systems for data centers and other commercial facilities. Their stamp is the reason they have their job. Even in that sphere, most of the engineers are not PEs. But, there is still a PE role that is legally required before the design can be implemented.

            If we are talking about electronics, then yes, the current reality is that EE stamps are generally not needed in the legal sense. This thing I wrote is a first step in imagining what sort of world would exist if electronics and software engineering had a legally-required PE level.

    • daniel-cussen 3 years ago

      Yes there's many different cultures in different fields.

johngalt 3 years ago

Quality standards are certainly required, and the software/technology ecosystem is brittle in avoidable ways. However, I'm not sure we can simply take the methods we used to build bridges and apply them to building software.

I expect quality management to go the other way: we will derive new and more capable ways of dealing with the complexity of modern technology stacks, and those methods will then backpropagate to how we manage other complex systems. As we saw with the 737 MAX, in many cases the "engineer signing off" is merely a figurehead/scapegoat. Could one person realistically know the details of every system (to a detailed engineering level) in a modern airliner? Even if they could, is that the best approach?

Our methods for durability are already overextended. Technology will be the area that makes us admit it, and provides the tools to manage it better.

mikewarot 3 years ago

Operating systems exist to safely multiplex available resources and make them available to the user for running programs.

In my opinion, the closest analogy is that of electrical power. In that world we have circuits and protection systems that have been standardized. The user interface at the end is an outlet of some form on a wall in a room. The circuit it is part of is protected against overload, and WILL NOT deliver more than the rated breaker output for more than a few cycles.

Any operating system has to, at minimum, provide such a way to run a program. Our current options (Linux, even with SELinux; macOS; Windows; etc.) do not offer a way for the user to dynamically assign processing time and resources (such as files, network addresses, etc.) to a program.
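
As a rough sketch of how far today's primitives get you, and where they stop, here is a minimal Python example using POSIX rlimits (the helper name and the specific limits are just illustrations). It is the process-world equivalent of a fuse: a hard cap set by whoever launches the program, not a standardized, user-facing breaker on every outlet.

    import resource
    import subprocess
    import sys

    def run_with_breaker(cmd, cpu_seconds=5, mem_bytes=256 * 1024 * 1024):
        # Launch a program with hard caps on CPU time and address space.
        # These rlimits are opt-in and per-process: the closest existing
        # analogue to a breaker, but not one the user wires into the wall.
        def apply_limits():
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        return subprocess.run(cmd, preexec_fn=apply_limits)

    if __name__ == "__main__":
        # A deliberate infinite loop; the CPU cap kills it after ~5 seconds.
        run_with_breaker([sys.executable, "-c", "while True: pass"])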

  • yellowapple 3 years ago

    The analogy comparing operating systems to electrical grids reminds me a lot of Multics, which was designed around the idea that computing power would be a public utility like electricity or water, and people would have home terminals connected to giant public mainframes.

    Of course, history didn't quite work that way, what with the proliferation of microcomputers - though sometimes I feel like the modern trend of web applications has managed to sloppily reinvent that idea.

  • tengwar2 3 years ago

    > The circuit it is part of is protected against overload, and WILL NOT deliver more than the rated breaker output for more than a few cycles.

    Actually it will, and it is required to do so. There are different classes of fuse and circuit breaker. The time to break the circuit depends on the class and the percentage overload, but think in terms of seconds to minutes, not cycles.

    The rationale for this is:

    A) With modern PVC insulation, the limit on continuous current is set by the continuous temperature. If it is too high, the wire core will slowly migrate through the PVC. Anything up to 90℃ is ok (British standards - and yes, I did check that!). Higher temperatures are ok for short periods because there will not be time for migration, but obviously you don't want to run it hot enough to cause a fire or allow fast migration.

    B) Electric motors can have a high inrush current when they start, but only for short periods. It makes more sense to dimension the wiring for the steady state and use a slow-blow fuse or circuit breaker to guard against stall conditions.

    Still, the analogy is a good one. A few years ago I built a garage and had to have a Part P inspection done on the wiring, which included showing calculations that take account of the thermal environment (in free air, on a wall, in a tube with/without other cables, etc.) and cable type to determine the maximum permitted current for that circuit. There was no "consenting adult" exception (although our legislation expressly permits non-professionals to do the job subject to Part P inspection).
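
    For the curious, the core of that sizing calculation is simple: the cable's tabulated current-carrying capacity, once derated by correction factors for its environment, must still cover the protective device. A minimal sketch with placeholder factors (the real values come from the wiring-regulation tables for the specific installation method):

        def min_tabulated_capacity(breaker_rating_a, ca=1.0, cg=1.0, ci=1.0):
            # breaker_rating_a: rating of the protective device, in amps
            # ca: ambient-temperature correction factor
            # cg: grouping factor (cables run together carry less)
            # ci: thermal-insulation factor (a cable buried in insulation derates hard)
            # Returns the minimum tabulated capacity the chosen cable must have.
            return breaker_rating_a / (ca * cg * ci)

        # e.g. a 32 A circuit in a warm location, grouped with one other circuit:
        print(min_tabulated_capacity(32, ca=0.94, cg=0.8))  # ~42.6 A minimum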

    • mikewarot 3 years ago

      I knew as I wrote it, someone was going to pick apart the details... I tried to hedge my bets.

      The new arc protection breakers look interesting to me.

Arainach 3 years ago

>A design may include other certified hardware or software designs as components or layers, but liability for failures in a design remains fully with that design’s engineer of record; it cannot legally be passed through to its components’ engineer of record.

How can this ever make sense? So if a bridge fails because the supplier of rebar messed up the heat treatment, the bridge designer is liable? What this means in practice is no systems larger than what a single person can practically understand (possibly from atoms and first principles), and no systems that can't be audited by one person in less time than it takes for the project to become irrelevant.

  • linkdink 3 years ago

    That's already how the world works. The bridge design company takes out insurance reflecting their liability. If something happens, they pay. If their supplier messed up a component, they can turn around and sue the supplier. Licensed professional engineers are already personally responsible for their projects to some extent, which can be very big and complicated. They're covered by professional liability insurance. All of this has been around for a long time.

  • EvanAnderson 3 years ago

    Manufacturers independently test raw materials coming from suppliers, sub-assemblies from contract manufacturers, etc. If we're going to consider software a "supply chain" it seems like a good idea to subject it to the same rigor.

    Speaking about real world manufacturing: I'm ignorant of the details but I would assume there's some construct in law or regulation for liability to be assigned back "up the chain" if due diligence in statistically sound testing "down the chain" is demonstrated.
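
    At the most mundane end, that rigor already has a direct software equivalent: pinning and independently verifying whatever your "supplier" ships you before it goes into the build. A small sketch (the expected digest would come from a source you trust, not from the same place as the artifact):

        import hashlib

        def verify_artifact(path, expected_sha256):
            # Incoming inspection for a downloaded dependency: recompute the
            # digest locally and refuse the artifact if it doesn't match.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            if h.hexdigest() != expected_sha256:
                raise ValueError(f"checksum mismatch for {path}")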

  • atoav 3 years ago

    I mean, if something I design as an electrical engineer kills people because it was faulty, it doesn't matter if the components sucked; it is my responsibility to choose components that do not suck and to test them as well.

    As someone who also does web development, I know that this degree of liability is completely foreign to a big part of the software world, and IMO this is a part of the reason software still sucks big time in terms of efficiency, reliability, safety and security.

    Because if you have to think about liability, suddenly the simple rugged system starts to look a lot more attractive.

    • pdimitar 3 years ago

      You should keep in mind that a good chunk of programmers absolutely want to invest further into safety but are never ever allowed to do so by management.

      At some point you learn to shrug it off and make notes to maybe revisit the code one day and fix the dangling bits. Alas, that time almost never comes because people get demotivated and leave the companies. Which is very normal.

  • AdamH12113 3 years ago

    Messing up the heat treatment isn't a design failure, it's a construction failure. The design specifies acceptable parameters for the rebar. If the rebar is within those parameters but the bridge fails because the parameters were wrong, the designer is responsible. If the rebar was out of the specified parameters, the contractor is responsible.

    If I'm reading the article right, "design" would encompass interaction boundaries as well as things like CPU/memory requirements, information storage, and security.

  • abeppu 3 years ago

    > liability for failures in a design remains fully with that design’s engineer of record

    > So if a bridge fails because the supplier of rebar messed up the heat treatment

    Is a failure of a bridge necessarily a failure in a design? Or do we distinguish a physical project which uses a design from the design itself? If the materials were bad, then whether or not the construction team had some responsibility to re-test those materials prior to using them (is this common?), it doesn't seem like evidence that the _design_ failed.

  • velcrovan 3 years ago

    Earlier on it says “If a system deviates from its design, the contractor becomes liable for any such failure, whether related to the specific deviation or not.”

    And yes, you’re probably right that systems would have to be much more legible. I’m interested in imagining what that tech landscape would look like.

selfsimilar 3 years ago

I've often thought this makes a lot of sense not just for medical devices/software, but for any government and financial software as well. But it's somewhat predicated on the 'provably correct' part, which is not well adopted.

mbrodersen 3 years ago

We already have a world-class way to make software reliable without a mountain of bureaucracy: formal methods, as in using proof assistants to prove code correct. It is still too labour-intensive for most applications, but it is getting better every year. Making it illegal not to prove software correct when writing life-critical software would accelerate adoption tremendously.
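
For anyone who hasn't seen it, here is a toy Lean 4 sketch of what "prove code correct" means in practice: a definition plus a theorem about it that the kernel machine-checks, so if the claim were false the file would not compile. (Real verified systems prove far deeper properties than this trivial one.)

    -- A trivial definition and a machine-checked property of it.
    def double (n : Nat) : Nat := n + n

    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega  -- linear-arithmetic decision procedure closes n + n = 2 * n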

tyingq 3 years ago

"The computers that control your car"

That may once have been a good example of "durable computronics", but it's increasingly not true. It seems to be for a variety of reasons, not just one: things like DRM, lower expectations for the usable lifespan of a car, higher performance needs for complex safety systems, expectations around things like large displays, etc.

zamalek 3 years ago

Software development is not engineering; it is artisanal. We need to stop kidding ourselves: 90% of the process is creative.