pfdietz 11 days ago

Where do tests with unlimited running time fit into that? For example, fuzzing or property-based testing, where you can potentially keep running the tests forever, hoping new bugs show up.
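
For concreteness, a property-based test might look like this sketch (Python with the Hypothesis library; the property being checked is just an illustration):

    from hypothesis import given, strategies as st

    # Hypothesis generates fresh random inputs on every run, so there is
    # no natural point at which the test is "done" - longer or repeated
    # runs can always surface new counterexamples.
    @given(st.lists(st.integers()))
    def test_sorting_is_idempotent(xs):
        once = sorted(xs)
        assert sorted(once) == once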

acrophiliac 11 days ago

Dear Mr. Stewart, I'm glad that this strategy works for your organization, but personally, this sounds like a terrible idea. It's as if I were a carpenter organizing my tools and, instead of wrenches, hammers, and saws, I called them small, medium, and large tools. Why would you deliberately omit any reference to their function or role?

  • tantalor 11 days ago

    To some extent, people do organize their tools like that.

    Big stuff has constraints on space, access, materials loading, special power, safety, etc.

    I don't mean you put all the big stuff in the same spot. But if you treat them all as "the big stuff", then you can plan your shop around them and gain efficiency.

    Same for the small stuff: it has different needs, like drawers, hangers, etc.

  • MrJohz 11 days ago

    The assumption here is that the classical categories of test (integration, unit, e2e, etc.) are meaningfully different in the same way that a hammer is meaningfully different from a saw.

    I don't think that assumption holds up - or at least, where there are meaningful differences between types of test, those differences have more to do with how quickly the tests run, how flaky they are, and other such qualities.

    In my experience, most definitions of integration test are more like rules of thumb - they're only useful up to a point. And when they are useful, it's often because we're using the definition as a proxy for a different metric. Take the idea that integration tests are tests that do IO - integrate with the DB, make HTTP requests, etc. In all fairness, it is often useful to isolate these tests because they're more likely to be slow or flaky. I often see people talk about running unit tests after every save and integration tests after every commit, which would make sense if your tests ran so slowly that running them after every save slowed you down.
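
    In tooling terms, that split usually looks something like this sketch (pytest markers; the "slow" name and the commands are illustrative choices, not a standard):

        import pytest

        # Assumes "slow" is registered as a marker in pytest.ini so
        # pytest doesn't warn about an unknown marker.
        @pytest.mark.slow
        def test_report_export_over_http():
            ...  # the "integration" test: run on commit via `pytest -m slow`

        def test_report_header_parsing():
            ...  # the "unit" test: run on save via `pytest -m "not slow"`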

    But in the project I'm working on at the moment, I have a good number of tests that use a locally running database, and it takes about 100ms to run all of them. That's not slow at all - that's very much fast enough that I want them to run whenever I save my code.

    And at that point, what is the meaningful, useful difference between integration tests and unit tests? I want both types of test to be as quick as possible, and I want to run them as often as I can get away with. I write both types of test in mostly the same style, and if you didn't see the database parameter in my functions, you probably couldn't tell the difference between my unit tests and my integration tests. I have the same metric for both types of test: they should fail if and only if the specific attribute I'm testing does not hold.
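
    To make that concrete, here is a sketch (all names invented; `db` stands in for my local-database fixture):

        from dataclasses import dataclass, field

        @dataclass
        class Order:
            line_items: list[int] = field(default_factory=list)

            def total(self) -> int:
                return sum(self.line_items)

        # The "unit" test: pure in-memory.
        def test_total_sums_line_items():
            assert Order([10, 15]).total() == 25

        # The "integration" test: same shape, same assertion; the only
        # visible difference is the (hypothetical) `db` fixture.
        def test_total_survives_round_trip(db):
            order_id = db.save(Order([10, 15]))
            assert db.load(order_id).total() == 25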

  • malkia 11 days ago

    Check any major cloud provider - dozens of different VM sizes.