rdehuyss a year ago

Hi all, I'm Ronald - the creator of JobRunr.

First of all, thanks to @mooreds for posting JobRunr on Hacker News.

Second of all - I read some claims that being in the 'job scheduling' business is easy money. I would like to point out that's not really the case.

With JobRunr being open-source and more successful than I ever could have imagined, it brings along a lot of stress. If you make a mistake (which I did in V6), the whole world sees it. I also try to keep the number of open issues really small, as these things linger in my head and also give me stress.

Anyway, all this to say that I'm now able to provide my family with food, but I still haven't broken even (meaning that if I had just kept freelancing as before, I would have more money in my bank account).

But, I can now work on something I love.

P.s.: it's indeed LGPL, but that's also the case for Hibernate. It means you only have to open-source your changes if you modify the JobRunr code itself, not if you just use the lib.

See also https://github.com/jobrunr/jobrunr/discussions/769.

  • upghost a year ago

    You had me at “I assume you’re not interested in the marketing fluff”

  • mperham a year ago

    Congrats Ronald, welcome to the very exclusive club of commercial job system authors.

    • rdehuyss a year ago

      Thanks, @mperham. I'm curious how it all will go!

      Enjoying the ride for the moment - it's wilder than I thought. I must confess that all the visits on the website via HN gave me quite the adrenaline rush :-).

therealrootuser a year ago

Seems like these job scheduling systems are a dime a dozen these days. Since we're an AWS shop, eventually my team ended up just building a system based on EventBridge and Fargate, killing off a previous system built on top of Quartz. Scheduling is all handled via Terraform. It's been solid for several years now, and costs next to nothing to operate. We can parallelize as much or as little as we want.

At the end of the day, I don't want to run more dedicated boxes for yet another jobs system. I just want to hand off a container to the ether and say "please run this container until it stops, and do this once an hour or once a day." I don't want to get alerts in the middle of the night telling me that the Quartz scheduler has had some esoteric failure, and I don't want Jobs A, B, and C to get killed because Job D started doing something dumb.

Having a nice UI is cool, but I would rather not have more servers and relational databases and Java-cron libraries that can do dumbness in the middle of the night.

  • sverhagen a year ago

    Within Java, though, Quartz has ruled for years. It has aged, and its website has been reorganized so many times that half the search results end in dead links - it was time for a new contender. But my fear is that someone else will take the crown with another business opportunity in mind, which is likely to fizzle too, and then the cycle just repeats. Another thread here was saying this is easy money, but are open-source (or open-core, or whatever) companies really all that often a slam dunk?

    • manigandham a year ago

      > "are open source or open core or whatever companies really all that often a slam dunk?"

      No, they're often not. Many struggle to ever make a profit.

  • kernelbugs a year ago

    Any tips, tricks, or resources for getting started using Fargate for one-off or recurring jobs? I have Terraform set up and managing AWS resources, but every time I look into Fargate, the guides seem to point toward running web apps instead of diverse jobs.

    • inkyoto a year ago

      You can use the aws_appautoscaling_scheduled_action terraform resource to create a scheduled scaling policy action to mimic a scheduled Fargate container fire-up, e.g. from zero Fargate container instances to one or however many are required, and then back down to zero.
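      A minimal sketch of that approach (the cluster/service names, resource labels, and schedules below are all placeholders, and it assumes the ECS service is already registered as an aws_appautoscaling_target):

```hcl
# Hypothetical example: scale a Fargate-backed ECS service up to one task on a
# schedule, then back down to zero an hour later.
resource "aws_appautoscaling_scheduled_action" "job_up" {
  name               = "job-up"
  service_namespace  = "ecs"
  resource_id        = "service/my-cluster/my-job-service"
  scalable_dimension = "ecs:service:DesiredCount"
  schedule           = "cron(0 2 * * ? *)" # 02:00 UTC daily

  scalable_target_action {
    min_capacity = 1
    max_capacity = 1
  }
}

resource "aws_appautoscaling_scheduled_action" "job_down" {
  name               = "job-down"
  service_namespace  = "ecs"
  resource_id        = "service/my-cluster/my-job-service"
  scalable_dimension = "ecs:service:DesiredCount"
  schedule           = "cron(0 3 * * ? *)" # one hour later

  scalable_target_action {
    min_capacity = 0
    max_capacity = 0
  }
}
```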

    • shpongled a year ago

      I would look into AWS Batch - it works pretty well for running diverse jobs. I have a few jobs that are triggered by S3 uploads that run for 1-30 minutes, and other jobs that run for ~hours. All on Fargate.

  • irl_chad a year ago

    We came to the exact same conclusion. EventBridge time triggers a Fargate task. The job automatically terminates the process after execution, so the container shuts down and all is good.

dvt a year ago

This is the kind of business that is extremely simple, very boring, and heck, even easy to implement, but ends up making the owners 6 figures MRR with a bit of marketing and networking elbow grease. The perfect software lifestyle business.

I love the splash page too: simple and to the point. They aren't saving the world with AI, they're just making better cron jobs.

  • mooreds a year ago

    > they're just making better cron jobs.

    Seems to be good money in job running software. See Sidekiq in the ruby world: https://codecodeship.com/blog/2023-04-14-mike-perham

    • hardwaresofton a year ago

      Yup, this was my first thought as well -- they're creating sidekiq-for-java. Lots of enterprise money to be scooped up; I'd assume they have the potential to make even more (especially if they stay at it as long as sidekiq has).

      That said, tech like Temporal and other workflow managers do exist now... But maybe most people won't choose them because jobrunr is just an `mvn install` away.

TimTheTinker a year ago

Reminds me of Hangfire on .NET. I haven't used it since a previous job in 2016, but it was easily one of my favorite tools. You can have it serialize scheduled tasks to Redis or SQL Server -- you get the same durability guarantees the underlying storage mechanism has. The API couldn't be simpler to use: https://www.hangfire.io/overview.html

Being able to reliably fire-and-forget or schedule background tasks from a web app can be really powerful.

  • nirav72 a year ago

    Hangfire was definitely a game changer in the .NET world back in the day. One of the most difficult things to do in .NET before .NET Core was running long-running background jobs from ASP.NET and IIS without the app pool prematurely killing them. The other option was MSMQ, offloading the task to another process like a Windows service. Just all around a major PITA.

ysleepy a year ago

Interesting to see this.

I tried it in 2020 and was not very happy with it.

Serialization was (is?) deeply embedded in the API; my use case didn't need it, and it was a large burden with no upside in my application. Then there were fluent builders which, instead of collecting parameters, executed on them immediately, making everything highly order-dependent, with possible invalid states and no indication of why.

I'd love a lightweight job scheduler with metadata and a dashboard, but with less magic in the java world.

Maybe I should give it a go again, maybe it has changed.

  • jmartrican a year ago

    If you give it a go, post the link. I'd be interested.

samsquire a year ago

This is an interesting space.

I think it is interesting that job scheduling, dependency graphs, dirty refresh logic, mutually exclusive execution are all relevant in the same space.

I was recently working on some Java code to schedule jobs mutually exclusively across threads. I wanted to run A only if B is not running, and B only if A is not running, then alternate the two. I think it's traditionally solved with a lock in distributed systems.
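A minimal single-JVM sketch of that lock-based mutual exclusion (class and method names are made up; across processes you'd need a distributed lock rather than a ReentrantLock):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Made-up sketch: jobs A and B share one lock, so each only runs when the
// other is not running. tryLock() skips the tick instead of blocking.
public class ExclusiveJobs {
    private final ReentrantLock lock = new ReentrantLock();
    private final List<String> log = new ArrayList<>();

    // Runs the job only if no other job holds the lock; returns whether it ran.
    public boolean runExclusively(String name, Runnable body) {
        if (!lock.tryLock()) {
            return false; // the other job is mid-flight: skip this tick
        }
        try {
            body.run();
            log.add(name); // guarded by the lock
            return true;
        } finally {
            lock.unlock();
        }
    }

    public List<String> log() {
        return log;
    }

    public static void main(String[] args) {
        ExclusiveJobs jobs = new ExclusiveJobs();
        // One scheduler thread alternating the two jobs: B can never observe
        // A mid-flight here, so every tick runs.
        for (int i = 0; i < 2; i++) {
            jobs.runExclusively("A", () -> {});
            jobs.runExclusively("B", () -> {});
        }
        System.out.println(jobs.log()); // [A, B, A, B]
    }
}
```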

I think there is inspiration from GUI update logic too and potential for cache invalidation or cache refresh logic in background job processing systems.

How does JobRunr persist jobs? If I schedule a background job from an inflight request handler, does the job get persisted if there is a crash?

  • manyxcxi a year ago

    It does. Primarily, worker threads query the DB for jobs to pick up. They can write state back when running a job, and they can have configurable (per job type) retry rules.

    So in your case, the crash would likely leave it in an open/running state if it was already picked up, at which point timeout/retry rules would kick in after a restart.

    If the job wasn’t running yet, just queued, then it would be business as usual upon restart.
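    An illustrative sketch of that recovery rule (this is not JobRunr's actual code; the state names, timeout, and max-attempts values are made up):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Collection;
import java.util.List;

// Hypothetical in-memory job table illustrating crash recovery: a job stuck
// in PROCESSING past its heartbeat timeout is re-enqueued for another
// attempt, or marked FAILED once its retries are exhausted.
public class JobRecovery {
    enum State { ENQUEUED, PROCESSING, SUCCEEDED, FAILED }

    static class Job {
        final String id;
        State state = State.ENQUEUED;
        Instant lastHeartbeat = Instant.EPOCH;
        int attempts = 0;
        Job(String id) { this.id = id; }
    }

    static final Duration TIMEOUT = Duration.ofMinutes(5);
    static final int MAX_ATTEMPTS = 3;

    // What a background watchdog thread would run after a restart.
    static void recover(Collection<Job> jobs, Instant now) {
        for (Job j : jobs) {
            boolean timedOut = j.state == State.PROCESSING
                    && Duration.between(j.lastHeartbeat, now).compareTo(TIMEOUT) > 0;
            if (timedOut) {
                j.state = (j.attempts < MAX_ATTEMPTS) ? State.ENQUEUED : State.FAILED;
            }
        }
    }

    public static void main(String[] args) {
        Job queued = new Job("queued");   // never picked up: left untouched
        Job crashed = new Job("crashed"); // picked up, then the worker died
        crashed.state = State.PROCESSING;
        crashed.attempts = 1;
        recover(List.of(queued, crashed), Instant.now());
        System.out.println(queued.state + " " + crashed.state); // ENQUEUED ENQUEUED
    }
}
```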

zmmmmm a year ago

    Don't bother your legal team.
    We're not another SaaS company and we don't have access to your data.
I love it. (even if that's not quite enough to completely ignore legal in my org)

victor106 a year ago

How is this different from Quartz or Spring Batch?

  • duttonw a year ago

    A pretty GUI; Quartz has no tooling to re-submit failed jobs.

xvilka a year ago

Something as simple as this should not require tons of RAM and CPU just because it's written in Java. Writing it in a natively compiled language - Rust, Go, etc. - would produce a leaner product.

  • RhodesianHunter a year ago

    Java frequently outperforms Go and Rust. It has a far more mature GC than Go and libraries that have no parallel in either (netty, caffeine).

    Almost every company doing real time/stream processing at the scale of TB/hour or greater is doing so by relying on Apache Kafka, written in Java and Scala.

    I've personally been a member of a team that wrote several services (network collectors, observability processing) in Go/Rust/JVM (we preferred Kotlin) in parallel for performance comparisons and found the JVM services to show much better throughput.

    Your perspective seems quite outdated. Possibly from before Go or Rust even existed?

    • Capricorn2481 a year ago

      Not a Rust user, but I would like a citation on any context in which Java outperforms Rust

      • RhodesianHunter a year ago

        Any long lived process with constant allocation of many small or medium sized objects where your primary performance concern is throughput (such as stream processing).

    • xvilka a year ago

      Regarding Kafka (and other Apache Java tools, e.g. Hadoop) - they are often slower[1] than standard Unix tools.

      [1] https://adamdrake.com/command-line-tools-can-be-235x-faster-...

      • xmcqdpt2 a year ago

        1. Kafka is not really related to Hadoop in any way? Standard Unix tools are also way faster than Postgres, but we don't choose databases or Kafka for their batch-processing performance.

        2. Hadoop is used for batch processing and it can be kind of slow, that's true. However the whole point of Hadoop is that you can operate on actually big data, not the 2GB database in the blog post. If it fits on one HDD then you don't need Hadoop and it will just slow you down!

      • manigandham a year ago

        That has nothing to do with Kafka, or Java as a language.

        Distributed big data processing systems need big data to actually be useful. Small data that fits on a single machine can also be processed on a single machine, which will always be faster than using a cluster with orchestration, distribution and network overhead.

  • kitd a year ago

    > Writing it in natively compiled language would produce a lean product - Rust, Go, etc.

    ... Java?

    You can natively compile Java these days (e.g. with GraalVM native-image).