> You don't need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far.
Every time I see this I wonder how many amateur/hobbyist programmers it sets up for disappointment. Unless your definition of “pretty far” is “a small number of the part ones”, it’s simply not true.
In the programming world I feel like there's a lot of info "for beginners" and a lot of folks / activities for experts.
But that middle ground world is strange... a lot of it is a combo of filling in "basics" and also touching more advanced topics at the same time, and the amount of content and activities filling that in seems very low. I get it though: the middle-ground, moderately skilled audience is a wide mix in terms of what they do or don't know and can or can't solve.
This is also true of a lot of other disciplines. I’ve been learning filmmaking lately (and editing, colour science, etc). There’s functionally infinite beginner friendly videos online on anything you can imagine. But very little content that slowly teaches the fundamentals, or presents intermediate skills. It’s all “Here’s 5 pieces of gear you need!” “One trick that will make your lighting better”. But that’s mostly it. There’s almost no intermediate stuff. No 3 hour videos explaining in detail how to set up an interview properly. Stuff like that.
Realize that in anything, there are people who are much better than even the very good. The people doing official collegiate-level competitive programming would find AoC problems pretty easy.
>The people doing official collegiate level competitive programming would find AoC problems pretty easy.
I used to program competitively and while that's the case for a lot of the early day problems, usually a few on the later days are pretty tough even by those standards. Don't take it from me, you can look at the finishing times over the years. I just looked at some today because I was going through the earlier years for fun and on Day 21/2023, 1 hour 20 minutes got you into the top 100. A lot of competitive programmers have streamed the challenges over the years and you see plenty of them struggle on occasion.
People just love to BS and brag, and it's quite harmful honestly because it makes beginner programmers feel much worse than they should.
The actual number is going to be higher as more people will have finished the puzzles since then, and many people may have finished all of the puzzles but split across more than one account.
Then again, I'm sure there's a reasonable number of people who have only completed certain puzzles because they found someone else's code on the AoC subreddit and ran that against their input, or got a huge hint from there without which they'd never solve it on their own. (To be clear, I don't mind the latter as it's just a trigger for someone to learn something they didn't know before, but just running someone else's code is not helping them if they don't dig into it further and understand how/why it works.)
There's definitely a certain specific set of knowledge areas that really helps solve AoC puzzles. It's a combination of classic Comp Sci theory (A*/SAT solvers, Dijkstra's algorithm, breadth/depth first searches, parsing, regex, string processing, data structures, dynamic programming, memoization, etc) and Mathematics (finite fields and modular arithmetic, Chinese Remainder Theorem, geometry, combinatorics, grids and coordinates, graph theory, etc).
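To pick one concrete item off that list, memoization alone turns plenty of part twos from hopeless to instant. A tiny illustrative sketch in Python (a toy, not from any actual puzzle):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def count_paths(row, col):
        # Number of right/down lattice paths from (0, 0) to (row, col).
        if row == 0 or col == 0:
            return 1
        return count_paths(row - 1, col) + count_paths(row, col - 1)

    print(count_paths(20, 20))  # 137846528820, instant with the cache, glacial without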
Not many people have all those skills to the required level to find the majority of AoC "easy". There's no obvious common path to accruing this particular knowledge set. A traditional Comp Sci background may not provide all of the Mathematics required. A Mathematics background may leave you short on the Comp Sci theory front.
My own experience is unusual. I've got two separate bachelor's degrees, one in Comp Sci and one in Mathematics, with a 7-year gap between them. Those degrees and 25+ years of doing software development as a job mean I do find the vast majority of AoC quite easy, but not all of it; there are still some stinkers.
Being able to look at an AoC problem and think "There's some algorithm behind this, what is it?" is hugely helpful.
The "Slam Shuffle" problem (2019 day 22) was a classic example of this that sticks in my mind. The magnitude of the numbers involved in part 2 of that problem made it clear that a naive iteration approach was out of the question, so there had to be a more direct path to the answer.
As I write the code for part 1 of any problem I tend to think "What is the twist for part 2 going to be? How is Eric going to make it orders of magnitude harder?" Sometimes I even guess right, sometimes it's just plain evil.
Yeah, getting 250 or so stars is going to be straightforward, something most programmers with a couple of years of experience can probably manage. Then another 200 or so require some more specialized know-how (maybe some basic experience with parsers, or making a simple virtual machine, or recognizing a topological sort situation). Then probably the last 50 require something a bit more unusual. For me, I definitely have some trouble with any of the problems where modular inverses show up.
It's just bluffing, lying. People lie to make others think they're hot shit. It's like the guy in school who gets straight A's and says he never studies. Yeah I'll bet.
They... sort of are though? A year or two ago I just waited until the very last problem, which was min-cut. Anybody with a computer science education who has seen the prompt "Proof." before should be able to tackle this one with some effort, guidance, and/or sufficient time. There are algorithms that don't even require all the high-falutin graph theory.
I don't mean to say my solution was good, nor was it performant in any way - it was not, I arrived at adjacency (linked) lists - but the problem is tractable to the well-equipped with sufficient headdesking.
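For illustration, the laziest well-equipped route leans on a library; a Python sketch (the edge list is a toy placeholder, and my actual solution looked nothing like this):

    import networkx as nx

    # Toy placeholder; the real input is thousands of component pairs.
    edges = [("jqt", "rhn"), ("jqt", "xhk"), ("rhn", "xhk")]

    G = nx.Graph(edges)
    cut = nx.minimum_edge_cut(G)   # in the real puzzle the cut is three "wires"
    G.remove_edges_from(cut)
    a, b = list(nx.connected_components(G))
    print(len(a) * len(b))         # the product the puzzle asks for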
Operative phrase being "a computer science education," as per GGP's point. Easy is relative. Let's not leave the bar on the floor, please, while LLMs are threatening to hoover up all the low hanging fruit.
You say in your comment: "Anybody with a computer science education ... should be able to tackle this one" which is directly opposed to what they advertise: "You don't need a computer science background to participate"
Got to agree. I'm even surprised at just how little progress many of my friends and ex-colleagues over the years make given that they hold down reasonable developer jobs.
My experience has been "little progress" is related to the fact that, while AoC is insanely fun, it always occurs during a time of year when I have the least free time.
Maybe when I was in college (if AoC had existed back then) I could have kept pace, but if part of your life is also running a household, then between wrapping up projects for work, finalizing various commitments I want wrapped up for the year, getting together with family and friends for various celebrations, and finally travel and/or preparing your own house for guests, I'm lucky if I have time to sit down with a cocktail and book the week before Christmas.
Seeing the format changed to 12 days makes me think this might be the first time in years I could seriously consider doing it (to completion).
Yep, the years I've made it the furthest have been around the 11-12 day mark. Then inevitably life and kids and work get in the way and that's it for another year. Changing to a 12 day format is unlikely to affect me at all :)
In order to complete AoC you need more than just the ability to write code and solve problems. You need to find abstract problem-solving motivating. A lot of people don't see the point in competing for social capital (internet points) or expending time and energy on problems that won't live on after they've completed them.
I have no evidence to say this, but I'd guess a lot more people give up on AoC because they don't want to put in the time needed than give up because they're not capable of progressing.
I've never tried AoC before, but with other complex challenges I've tried without much research, there comes a point where it just makes more sense to start doing something on the backlog at home, or a more specific challenge related to what I want to improve on.
I find the problem I have is once I get going on a problem I can't shake it out of my head. I end up lying in bed for hours pleading with my brain to let it go if I've not found the time to finish it during the crumbs of discretionary time in the day!
This type of problem has very little resemblance to the problems I solve professionally - I’m usually one level of abstraction up. If I run into something that requires anything even as complicated as a DAG it’s a good day.
I think this has a lot more to do with time commitment. Once the problems take more than ~1 hour I tend to stop because I have stuff to do, like a job that already involves coding.
Because like 80% of AoC problems require a deep computer science background and deeply specific algorithms almost nobody is using in their day-to-day work.
It's totally true. I was doing Advent of Code before I had any training or work in programming at all, and a lot of it can be done with just thinking through the problem logically and using basic problem solving. If you can reason a word problem into what it's asking, then break it down into steps, you're 90% of the way there.
I have an EE background, not CS, and haven't had too much trouble the last few years. I'm not aiming to be on the global leaderboard though. I think that with good problem solving skills, you should be able to push through the first 10 days most years. Some years were more front-loaded though.
Agreed. I have a CS background and years of experience but I don't get very far with these. At some point it becomes a very large time commitment as well, which I don't have.
Advent of Code is one of the highlights of December for me.
It's sad, but inevitable, that the global leaderboard had to be pulled. It's also understandable that this year is just 12 days, so takes some pressure off.
If you've never done it before, I recommend it. Don't try and "win", just enjoy the problem solving and the whimsy.
That sounds healthy! But I would note that there's been interesting community discussions on reddit in past years, and I've gotten caught up in the "finish faster so I can go join the reddit discussion without spoilers". It turns out you can have amazing in-jokes about software puzzles and ascii art - but it also taught me in a very visceral way that even for "little" problems, building a visualizer (or making sure your data structures are easy-to-visualize) is startlingly helpful... also that it's nice to have people to commiserate with who got stuck in the same garden path/rathole that you did.
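The kind of throwaway visualizer I mean is tiny; an illustrative Python sketch:

    def show(points, glyph="#", empty="."):
        # Print a set of (x, y) coordinates as an ASCII grid.
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        for y in range(min(ys), max(ys) + 1):
            print("".join(glyph if (x, y) in points else empty
                          for x in range(min(xs), max(xs) + 1)))

    show({(0, 0), (2, 0), (1, 1), (0, 2), (2, 2)})  # prints a tiny "X"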
Same. I usually try to use it as the "real-world problem" I need for learning a new language. Is there anywhere that people have starter advice/templates for various languages? I'd love to know:
- install like this
- initialize a directory with this command
- here are the VSCode extensions (or whatever IDE) that are the bare minimum for the language
The "only" 12 days might be disappointing (but totally understandable), however I won't mourn the global leaderboard which always felt pointless to me (even without the llm, the fact that it depends on what time you did solved problems really made it impractical for most people to actually compete). Private leaderboards with people on your timezone are much nicer.
The global leaderboard was a great way to find really crazy good people and solutions, however - I picked through a couple of these guys' solutions and learned a few things. One guy had even written his own special-purpose language mainly to make solving AoC problems fast - he was of course a compilers guy.
I think I’ll set up a local leaderboard with friends this year. I was never going to make it to the global board anyway but it is sad to see it go away.
It always seemed odd to me that a persistent minority of HN readers seem to have no interest in recreational programming/technical problem solving and perpetually ask "why should I care?"
It's totally fine not to care, but I can't quite get why you would then want to be an active member in a community of people who care about this stuff for no other reason than they fundamentally find it interesting.
I _love_ the Advent of Code. I actually (selfishly) love that it's only 12 days this year, because by about half way, I'm struggling to find the time to sit down and do the fantastic problems because of all the holiday activities IRL.
Finally that time of year again! I've been looking forward to this for a long time. I usually drop off about halfway anyways (finished day 13, 14 and 13 the previous 3 years), as that's when December gets too busy for me to enjoy it properly, so I personally don't mind the reduction in problems at all, really. I'm just happy we still have great puzzles to look forward to.
Taking out the public leaderboard makes sense imo. Even when you don't consider the LLM problem, the public leaderboard's design was never really suited for anyone outside of the very specific short list of (US) timezones where competing for a quick solution was ever feasible.
One thing I do think would be interesting is to see solution rate per hour block. It'd give an indication of how popular advent of code is across the world.
LLMs spoiled it, but it was fun to see the genuine top times. Watching competitive coders solve in real time is interesting (YouTube videos), and I wouldn't have discovered these without the leaderboard.
I am very happy that we get the advent of code again this year, however I have read the FAQ for the first time, and I must admit I am not sure I understand the reasoning behind this:
> If you're posting a code repository somewhere, please don't include parts of Advent of Code like the puzzle text or your inputs.
The text I get, but the inputs? Well, I will comply, since I am getting a very nice thing for (almost) free, so it is polite to respect the wishes here, but since I commit the inputs (you know, since I want to be able to run tests) into the repository, it is a bit of a shame the repo must be private.
If enough inputs are available online, someone can presumably collect them and clone the entire project without having access to the puzzle input generation code, which is the "secret sauce" of the project.
Are you saying that we all have different inputs? I've never actually checked that, but I don't think it's true. My colleagues have gotten stuck in the same places and have mentioned aspects of puzzles and input characteristics and never spoken past each other. I feel like if we had different inputs we'd have noticed by now.
It depends on the individual problem, some have a smaller problem space than others so unique inputs would be tricky for everyone.
But there are enough possible inputs that most people shouldn't come across anyone else with exactly the same input.
Part of the reason why AoC is so time consuming for Eric is that not only does he design the puzzles, he also generates the inputs programmatically, which he then feeds through his own solver(s) to ensure correctness. There is a team of beta testers that work for months ahead of the contest to ensure things go smoothly.
(The adventofcode subreddit has a lot more info on this.)
He puts together multiple inputs for each day, but they do repeat over users. There's a chance you and your colleagues have the same inputs.
He's also described, over the years, his process of making the inputs. Related to your comment, he tries to make sure that there are no features of some inputs that make the problem especially hard or easy compared to the other inputs. Look at some of the math ones, a few tricks work most of the time (but not every time). Let's say after some processing you get three numbers and the solution is their LCM, that will probably be true of every input, not just coincidental, even if it's not an inherent property of the problem itself.
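The LCM pattern in code is as small as it sounds (illustrative numbers only):

    import math

    cycle_lengths = [3001, 3769, 4027]  # made-up stand-ins for the "three numbers"
    print(math.lcm(*cycle_lengths))     # first step at which all the cycles align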
I don't know how much they "stand out" because their frequency makes it so that the optimal global leaderboard strat is often to just try something dumb and see if you win input roulette.
If we just look at the last three puzzles: day 23 last year, for example, admitted the greedy solution, but only for some inputs. Greedy clearly shouldn't work (shuffling the vertices in an input that admits it causes it to fail).
I have a solve group that calls it "Advent of Input Roulette" because (back when there was a global leaderboard) you can definitely get a better expected score by just assuming your input is weak in structural ways.
I don't push my solutions publicly, but I made an input downloader so you can input your cookie from your browser and load (and cache) the inputs rather than commit them.
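The core of such a downloader is small; a Python sketch (the "session" cookie name and the input URL are the real ones, everything else here is illustrative rather than my actual code):

    from pathlib import Path
    import urllib.request

    def get_input(year, day, session):
        # Cache on disk so the server is only hit once per puzzle.
        cache = Path(f"inputs/{year}-{day:02}.txt")
        if cache.exists():
            return cache.read_text()
        req = urllib.request.Request(
            f"https://adventofcode.com/{year}/day/{day}/input",
            headers={"Cookie": f"session={session}"},
        )
        with urllib.request.urlopen(req) as resp:
            text = resp.read().decode()
        cache.parent.mkdir(exist_ok=True)
        cache.write_text(text)
        return text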
This is not surprising at all, to me. Just commit the example input and write your test cases against that. In a nicely structured solution, this works beautifully with example style tests, like python or rust doctests, or even running jsdoc @example stanzas as tests with e.g. the @linus/testy module.
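For instance, a minimal Python-flavoured sketch of that style (the puzzle here is made up):

    def part1(text):
        """Sum the numbers in the input, one per line.

        >>> part1("1\\n2\\n3\\n")
        6
        """
        return sum(int(line) for line in text.splitlines())

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # exits quietly when the examples pass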
The example input(s) is part of the "text", and so committing it is also not allowed. I guess I could craft my own example inputs and commit those, but that exceeds the level of effort I am willing to expend trying to publish a repository no one will likely ever read. :)
The part I enjoy the most, after figuring out a solution for myself, is seeing what others did on Reddit or among a small group of friends who also do it. We often have slightly different solutions, or realize one of our solutions worked "by accident", ignoring some edge case that didn't appear in our particular input. That's really the fun of it imho.
I had never heard of this before I saw something announcing this year's adventure. It looked interesting so I gave it a try, doing 2024. I had a blast. In concept, it's very similar to Project Euler but oriented more towards programming rather than being heavily mathematical. Like Euler, the first part is typically trivial while part 2 can put the hammer down and make you think to devise an approach that can arrive at a solution in milliseconds rather than the death of the universe.
Python is extremely suitable for these kind of problems. C++ is also often used, especially by competitive programmers.
Which "non-mainstream" or even obscure languages are also well suited for AoC? Please list your weapon of choice and a short statement why it's well suited (not why you like it, why it's good for AoC).
My favourite "non-mainstream" languages are, depending on my mood at the time, either:
- Array languages such as K or Uiua. Why they're good for AoC: Great for showing off, no-one else can read your solution (including yourself a few days later), good for earlier days that might not feel as challenging
- Raw-dogging it by creating a Game Boy ROM in ASM (for the Game Boy's 'Z80-ish' Sharp LR35902). Why it's good for AoC: All of the above, you've got too much free time on your hands
Just kidding, I use Clojure or Python, and you can pry itertools from my cold, dead hands.
This year I've been working on a bytecode compiler for it, which has been a nice challenge. :)
When I want to get on the leaderboard, though, I use Go. I definitely felt a bit handicapped by the extra typing and lack of 'import solution' (compared to Python), but with an ever-growing 'utils' package and Go's fast compile times, you can still be competitive. I am very proud of my 1st place finish on Day 19 2022, and I credit it to Go's execution speed, which made my brute-force-with-heuristics approach just fast enough to be viable.
yep, https://github.com/lukechampine/slouch. Fair warning, it's some of the messiest code I've ever written (or at least, posted online). Hoping to clean it up a bit once the bytecode stuff is production-ready.
I like to use Haskell, because parser combinators usually make the input parsing aspect of the puzzles extremely straightforward. In addition, the focus of the language on laziness and recursion can lead to some very concise yet idiomatic solutions.
Example: find the first state in which this "game of life" variant has at least 1000 cells in the "alive" state.
Solution: generate an infinite list of all states and iterate over it until you find one with at least 1000 alive cells.
    let allStates = iterate nextState beginState -- infinite list of consecutive states
    let solution = head $ dropWhile (\currentState -> numAliveCells currentState < 1000) allStates
Yes, there are some cool solutions using laziness that aren't immediately obvious. For example, in 2015 and 2024 there were problems involving circuits of gates that were elegantly solved using the Löb function.
I actually plan on doing this year in Gleam, because I did the last 5 years in Haskell and want to learn a new language this year. My solutions for last year are on github at https://github.com/WJWH/aoc2024 though, if you're interested.
Haskell values are immutable, so it creates a new state on each iteration. Since most of these "game of life" type problems need to touch every cell in the simulation multiple times anyway, building a new value is not really that much more expensive than mutating in place. The Haskell GC is heavily optimized for quickly allocating and collecting short-lived objects anyway.
But yeah, if you're looking to solve the puzzle in under a microsecond you probably want something like Rust or C and keep all the data in L1 cache like some people do. If solving it in under a millisecond is still good enough, Haskell is fine.
Fun fact about Game of Life is that the leading algorithm, HashLife[1], uses immutable data structures. It's quite well suited to functional languages, and was in fact originally implemented in Lisp by Bill Gosper.
My weapon of choice is Ruby:

* The expressive syntax helps keep the solutions short.
* It has an extensive standard library with tons of handy methods for AoC-style problems: Enumerable#each_cons, Enumerable#each_slice, Array#transpose, Array#permutation, ...
* The bundled "prime" gem (for generating primes, checking primality, and prime factorization) comes in handy for at least a few of problems each year.
* The tools for parsing inputs and string manipulation are a bit more ergonomic than what you get even in Python: first class regular expression syntax, String#scan, String#[], Regexp::union, ...
* You can easily build your solution step-by-step by chaining method calls. I would typically start with `p File.readlines("input.txt")` and keep executing the script after adding each new method call so I can inspect the intermediate results.
It has many of the required structures (hashes/maps, ad hoc structs, etc) and is great for knocking up a rough and ready prototype of something. It's also quick to write (but often unforgiving).
I can also produce a solution for pretty much every problem in AoC without needing to download a single separate Perl module.
On the negative side there are copious footguns available in Perl.
(Note that if I knew Python as well as I knew Perl I'd almost certainly use Python as a starting point.)
I also try to produce a Go and a C solution for each day:
* The Go solution is generally a rewrite of the initial Perl solution but doing things "properly" and correcting a lot of the assumptions and hacks that I made in the Perl code. Plus some of those new fangled "test" things.
* The C solution is a useful reminder of how much "fun" things can be in a language that lacks built-in structures like hashes/maps, etc.
I used my homemade shell language last year, called elk shell. It worked surprisingly well, better than other languages I've tried, because unlike other shell languages it is just a regular general purpose scripting language with a standard library that can also run programs with the same syntax as function calls.
I use Python at work but code these in Kotlin. The stdlib for lists is very comprehensive, and the syntax is sweet. It's so easy to make a chain of map, filter, and some reduction or nice util (foldRight, zipWithNext, windowed, etc). It flows very well with my thought process, whereas in Python I feel list comprehensions are in the wrong order, lambdas are weak, etc.
I write most as pure functional/immutable code unless a problem calls for speed. And with extension functions I've made over the years and a small library (like 2d vectors or grid utils) it's quite nice to work with. Like, if I have a 2D list (List<List<E>>), and my 2d vec, like a = IntVec(5,3), I can do myList[a] and get the element due to an operator overload extension on list-lists.
Go is strong. You get something where writing a solution doesn't take too much time, you get a type system, you can brute-force problems, and the usual mind-numbing boring data-manipulation handling fits well into the standard tools.
OCaml is strong too. Stellar type system, fast execution and sane semantics, unlike 99% of all programming languages. If you want to create elegant solutions to problems, it's a good language.
For both, I recommend coming prepared. Set up a scaffold and create a toolbox which matches the typical problems you see in AoC. There's bound to be a 2D grid among the problems, and you need an implementation; if it can handle out-of-bounds access gracefully, things are often much easier, and so on. You don't want to be hammering your head against the wall solving parsing problems instead of the actual problem. Having a combinator-parser library already in the project will help, for instance.
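A sketch of what I mean by a grid that handles out-of-bounds access gracefully (Python for brevity; all the names are made up):

    class Grid:
        def __init__(self, lines, default=None):
            # Store the grid as a dict keyed by (x, y) coordinates.
            self.cells = {(x, y): c
                          for y, row in enumerate(lines)
                          for x, c in enumerate(row)}
            self.default = default

        def __getitem__(self, pos):
            # Out-of-bounds reads return the default instead of raising.
            return self.cells.get(pos, self.default)

        def neighbors(self, pos):
            x, y = pos
            return [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    g = Grid(["#.#", "...", "#.#"], default=" ")
    print(g[0, 0], g[99, 99])  # "#" and the safe default " "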
Any recommendations for Go? Traditionally I've gone for Python or Clojure with an 'only builtins or things I add myself' approach (e.g. no NetworkX), but I've been keen to try doing a year in Go however was a bit put off by the verbosity of the parsing and not wanting to get caught spending more time futzing with input lines and err.
Naturally later problems get more puzzle-heavy so the ratio of input-handling to puzzle-solving code changes, but it seemed a bit off putting for early days, and while I like a builtins-only approach it seems like the input handling would really benefit from a 'parse don't validate' type approach (goparsec?).
It's usually easy enough in Go that you can just roll your own for the problems at hand. It won't be as elegant as having access to a combinator-parser, but not all of the AoC problems are parsing problems.
Once you have something which can "load \n separated numbers into an array/slice" you are mostly set for the first few days. Go has verbosity. You can't really get around that.
The key thing in typed languages is to cook up the right data structures. In something without a type system, you can just wing things and work with a mess of dictionaries and lists. But trying to do the same in a typed language is just going to be uphill, as you don't have the tools to manipulate the mess.
Historically, the problems have had some inter-linkage. If you built something on day 3, then it's often used on days 4-6 as well. Hence, you can win by spending a bit more time on elegance on day 3, and that makes the work on days 4-6 easier.
Mind you, if you just want to LLM your way through, then this doesn't matter since generating the same piece of code every day is easier. But obviously, this won't scale.
> It won't be as elegant as having access to a combinator-parser, but not all of the AoC problems are parsing problems.
Yeah, this is essentially it for me. While it might not be a 'type-safe and correct regarding error handling' approach with Python, part of the interest of the AoC puzzles is the ability to approach them as 'almost pure' programs - no files except for puzzle input and output, no awkward areas like date time handling (usually), absolutely zero frameworks required.
> you can just wing things and work with a mess of dictionaries and lists.
Checks previous years' type-hinted solutions with dict[tuple[int, int], list[int]]
Yeah...
> but not all of the AoC problems are parsing problems.
I'd say, for the first ten years at least, the first ten-ish days are 90% parsing and 10% solving ;) But yes, I agree, and maybe I'm worrying over a few extra visible err's in the code that I shouldn't be.
> if you just want to LLM your way through
Totally fair point if I constrain LLM usage to input handling and the things that I already know how to do but don't want to type, although I've always quite liked being able to treat each day as an independent problem with no bootstrapping of any code, no 'custom AoC library', and just the minimal program required to solve the problem.
This is what is great about it, the community posting hyper-creative (sometimes cursed) solutions for fun! I usually use AoC to try out a new language and that has been fun for me over the years.
AoC has been a highlight of the season for me since the beginning in 2015. I experimented with many languages over the years, zeroing in on Haskell, then Miranda as my language of choice. Finally, I decided to write my own language to do AoC, and created Admiran (based upon Miranda and other lazy, pure, functional languages) with its own self-hosted compiler and library of functional data structures that are useful in AoC puzzles:
I've had a lot of fun using Nim for AoC for many years. Once you're familiar with the language and std lib, it's almost as fast to write as Python, but much faster to run (Nim compiles to C, which then gets compiled to your executable). This means that sometimes, if your solution isn't perfect in terms of algorithmic complexity, waiting a few minutes can still save you (waiting 5 mins for your slow Nim code is OK; waiting 5 hours for your slow Python isn't really, for me). Of course all problems have a solution that can run in seconds even in Python, but sometimes it's not the one I figure out first try.
Downsides: The debugging situation is pretty bad (hope you like printf debugging), smaller community means smaller package ecosystem and fewer reference solutions to look up if you're stuck or looking for interesting alternative ideas after solving a problem on your own, but there's still quality stuff out there.
Though personally I'm thinking of trying Go this year, just for fun and learning something new.
Edit: also a static type system can save you from a few stupid bugs that you then spend 15 minutes tracking down because you added a "15" to your list without converting it to an int first or something like that.
I've done AoC on what I call "hard mode", where I do the solutions in a language I designed and implemented myself. It's not because the language is particularly suited to AoC in any particular way, but it gives me confidence that my language can be used to solve real problems.
I’ve always used AoC as my jump-off point for new languages. I was thinking about using Gleam this year! I wish I had more profound reasons, but the pipeline syntax is intriguing and I just want to give it a whirl.
I tried AoC out one year with the Wolfram language, which sounds insane now, but back then it was just a "seemed like the thing to do at the time" and I'm glad I did it.
For me (and most of my friends/coworkers) the point of AoC was to write in some language that you always wanted to learn but never had the chance. The AoC problems tend to be excellent material for a crash course in a new PL because they cover a range of common programming tasks.
Historically good candidates are:
- Rust (despite its popularity, I know a lot of devs who haven't had time to play with it).
- Haskell (though today I'd try Lean4)
- Racket/Common Lisp/other Scheme or Lisp dialect you haven't tried
- Erlang/Elixir (probably my choice this year)
- Prolog
Especially for those langs that people typically dabble in but never get a chance to write non-trivial software in (Haskell, Prolog, Racket), AoC is fantastic for really getting a feel for the language.
I am going to try and stick with Prolog as much as I can this year. Plenty of problems involve a lot of parsing and searching, both could be expressed declaratively in Prolog and it just works (though you do have to keep the execution model in mind).
I used MATLAB last year while I was re-learning it for work. It did okay, but we didn't have a license for the Image Processing Toolbox, which has a boatload of tools for the grid based problems.
With both AoC and Project Euler I like seeing how fast I can get my solution to run with SBCL. Finding all palindromic primes below a million in less than a second is pretty neat.
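For comparison, even a straightforward sieve in Python manages that comfortably; a rough sketch (not my SBCL code):

    def primes_below(n):
        # Plain sieve of Eratosthenes.
        sieve = bytearray([1]) * n
        sieve[0:2] = b"\x00\x00"
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
        return [i for i in range(n) if sieve[i]]

    palprimes = [p for p in primes_below(1_000_000) if str(p) == str(p)[::-1]]
    print(len(palprimes), palprimes[:6])  # 113 of them: 2, 3, 5, 7, 11, 101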
If I remember correctly, one of the competitive programming experts from the global leaderboard made his own language, specifically tailored to help solve AoC problems:
(post title: "Designing a Programming Language to Speedrun Advent of Code", but starts off "The title is clickbait. I did not design and implement a programming language for the sole or even primary purpose of leaderboarding on Advent of Code. It just turned out that the programming language I was working on fit the task remarkably well.")
> I solve and write a lot of puzzlehunts, and I wanted a better programming language to use to search word lists for words satisfying unusual constraints, such as, “Find all ten-letter words that contain each of the letters A, B, and C exactly once and that have the ninth letter K.”1 I have a folder of ten-line scripts of this kind, mostly Python, and I thought there was surely a better way to do this.
I'll choose to remember it was designed for AoC :-D
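For anyone curious, the kind of ten-line script the post describes might look like this in Python (words.txt stands in for whatever word list you use):

    # Find all ten-letter words containing A, B, and C exactly once,
    # with ninth letter K (index 8).
    with open("words.txt") as f:
        words = [w.strip().upper() for w in f]

    hits = [w for w in words
            if len(w) == 10
            and all(w.count(c) == 1 for c in "ABC")
            and w[8] == "K"]
    print(hits)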
I've been using Elixir, which has been wonderful, mostly because of how amazing the built in `Enum` library is for working on lists and maps (since the majority of AoC problems are list / map processing problems, at least for the first while)
Enum really does feel like a superpower sometimes. I’ll knock out some loop and then spend a few mins with h Enum.<tab> and realise it could’ve been one or two Enum functions.
For some grid based problems, I think spreadsheets are very powerful and under-appreciated.
The spatial and functional problem solving makes it easy to reason about how a single cell is calculated. Then simply apply that logic to all cells to come up with the solution.
I usually do it with Ruby, which is well suited just like Python, but last year I did it with Elixir.
I think it lends itself very well to the problem set, the language is very expressive, the standard library is extensive, you can solve most things functionally with no state at all. Yet, you can use global state for things like memoization without having to rewrite all your functions so that's nice too.
Elixir Livebook is my tool of choice for Advent of Code. The language is well-suited for the puzzles, I can write some Markdown if I need to record some algebra or my thought process, the notebook format serves as a REPL for instant code testing, and if the solution doesn't fit neatly into an executable form, I can write up my manual steps as well.
I've done some of the problems in R. Vectorized-by-default can avoid a lot of boilerplate. And for problems that aren't in R's happy path, I learn how to optimize in the language. And then I try to make those optimizations non-hideous to read.
Another vote for Haskell. It’s fun and the parsing bit is easy. I do struggle with some of the 2d map style questions which are simpler in a mutable 2d array in c++. It’s sometimes hard to write throwaway code in Haskell!
IMO it's maybe the best suited language to AoC.
You can write it even faster than Python; it has a very terse syntax and great numerical performance for the few challenges where that matters.
I respect the effort going into making Advent of Code but with the very heavy emphasis on string parsing, I'm not convinced it's a good way to learn most languages.
Most problems are 80%-90% massaging the input with a little data modeling which you might have to rethink for the second part and algorithms used to play a significant role only in the last few days.
That heavily favours languages which make manipulating strings effortless and have very permissive data structures like Python dicts or JS objects.
You are right, the exercises are heavy in one area. Still, they can be helpful for starting in a new language: you have to do I/O with files and use data structures, and you will be exercising all the flow control. You will not come out an ace, but it can help you get started.
I know people who make some arbitrary extra restriction, like “no library at all” which can help to learn the basics of a language.
The downside I see is that suddenly you are solving algorithmic problems, which sometimes are not trivial, while at the same time struggling with a new language.
That's a hard agree and a reason why anyone trying to learn Haskell, OCaml, or other language with minimal/"batteries depleted" stdlib will suffer.
Sure, Haskell comes packaged with parser combinators, but a new user having to juggle immutability, IO, and monads all at once will almost certainly find it impossible.
Maybe not learning a new language from the ground up, but I think it is good training to "just write" within the language. A daily or twice-daily interaction. Setting up projects, doing the basic stuff to get things running, and reading up on the standard library.
Having smaller problems makes it possible to find multiple solutions as well.
I never liked the global leaderboard since I was usually asleep when the puzzles were released. I likely never would have had a competitive time anyway.
I never had any hope or interest in competing on the leaderboard, but I found it fun to check it out, see times and time differences ("omg 1 min for part 1 and 6 for part 2"), look up the names of the leaders to check if they have something public about their solutions, etc. One time I even ran into the name of an old friend, so it was a good excuse to say hi.
I love advent of code, and I look forward to it every year!
I've never stressed out about the leaderboard. I've always taken it as an opportunity to learn a new language, or brush up on my skills.
In my day-to-day job, I rarely need to bootstrap a project from scratch, implement a depth first search of a graph, or experiment with new language features.
It's for reasons like these that I look forward to this every year. For me it's a great chance to sharpen the tools in my toolbox.
Some part of me would love a job that was effectively solving AoC type problems all the time, but then I'd probably burn out pretty quickly if that's all I ever had to do.
Sometimes it's nice to have a break by writing a load of error handling, system architecture documentation, test cases, etc.
> For me it's a great chance to sharpen the tools in my toolbox.
That's a good way of putting it.
My way of taking it a step further and honing my AoC solutions is to make them more efficient whilst ensuring they are still easy to follow, and to make sure they work on as many different inputs as possible (to ensure I'm not solving a specific instance based on my personal input). I keep improving and chipping away at the previous years problems in the 11 months between Decembers.
I find it interesting how many sponsors run their own "advent of <x>". So far I've seen "cloud", "FPGA", and a "cyber security" one in the sponsors pages (although that last one is one I remember from last year).
I'm also surprised there are a few Dutch language sponsors. Do these show up for everyone or is there some kind of region filtering applied to the sponsors shown?
I love Advent of Code! I have used previous years' problems for my guest lectures to Computer Science students and they have all enjoyed those more than a traditional algorithmic lecture.
A little sad that there are fewer puzzles. But also glad that I'll see my wife and maybe even go outside during the second half of December this year.
Advent of code is such a fantastic event. I am honestly glad it's 12 days this year, primarily because I would only ever get to day 13 or 14 before it would take me an entire day to finish the puzzles! This would be my fourth year doing AoC. Looking forward to it :)
I plan on doing this year in C++ because I have never worked with it and AoC is always a good excuse to learn a new language. My college exams just got over, so I have a ton of free time.
BTW the page mentions Alternate Styles, which is an obscure feature in Firefox (View -> Page Style). If you try it out, you will probably run into [0] and not be able to reset the style. The workaround is to open the page in a different tab, which will go back to the default style.
I'm actually pleasantly surprised to see a 2025 edition, last year being the 10th anniversary and the LLM situation with the leaderboard were solid indications that it would have been a great time to wrap it up and let somebody else carry the torch.
It's only going to be 12 problems rather than 24 this year and there isn't going to be a global leaderboard, but I'm still glad we get to take part in this fun Christmas season tradition, and I'm thankful for all those who put in their free time so that we can get to enjoy the problems. It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect, I've always just enjoyed the puzzles, so as far as I'm concerned nothing was really lost.
>It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect
Is this an unpopular stance? Out of a dozen people I know that did/do AoC every year, only one was trying to compete. Everyone else did it for fun, to learn new languages or concepts, to practice coding, etc.
Maybe it helps that, because of timezones, in Europe you need to be really dedicated to play for a win.
No, it's not. At most 200 people could end up on the global leaderboard, and there are tens of thousands of people who participate most days (though it drops off by the end, it's over 100k reliably for the first day). The vast majority of participants are not there for the leaderboard. If you care about competing, there are always private leaderboards.
A couple of the Slack/Discord groups I’m in do a local leaderboard with friends. It’s fun to do with a trusted group of people who are all in it for fun.
I'm also in a few local leaderboards, but I'm not "really" competing, it's more of a fun group thing.
Premises:
(i) I love Advent of Code and I'm grateful for its continuing existence in whatever form its creators feel like it's best for themselves and the community;
(ii) none of what follows is a request, let alone a demand, for anything to change;
(iii) what follows is just the opinion of some random guy on the Internet.
I have a lot of experience with competitions (although more on the math side than on the programming side), and I've been involved essentially since I was in high school, as a contestant, coach, problem writer, organizer, moving tables, etc.
In my opinion Advent of Code simply isn't a good competition:
- You need to be available for many days in a row for 15 minutes at a very specific time.
- The problems are too easy.
- There is no time/memory check: you can write ooga-booga code and still pass.
- Some problems require weird parsing.
- Some problems are pure implementation challenges.
- The AoC guy loves recursive descent parsers way too much.
- A lot of problems are underspecified (you can make assumptions not in the problem statement).
- Some problems require manual input inspection.
To reiterate: I am not saying that any of this needs to change. Many of the things that make Advent of Code a bad competition are what make it an excellent, fun, memorable "Christmas group thing". Coming back every day creates community and gives people time to discuss the problems. Problems being easy and not requiring specific time complexities to be accepted make the event accessible. Problems not being straight algorithmic challenges add welcome variety.
I like doing competitions but Advent of Code has always felt more like a cozy problem solving festival, I never cared too much for the competitive aspect, local or global.
There are definitely some problems that have an indirect time/memory check, in that if you don't have a right-enough algorithm, your program will never finish.
> - The AoC guy loves recursive descent parsers way too much.
The vast majority (though not all) of the inputs can be parsed with regex or no real parsing at all. I actually can't think of a day that needed anything like recursive descent parsing.
I too like the simple nature. If you care about highly performant code, you can always challenge yourself (I got into measuring timing in the second season I participated). Personally I prefer a world like this. Not everyone should have to compete on every detail (I know you stated that your points aren’t demands, I’m just pointing out my own worldview). For any given thing, there will naturally be people that are OK with “good enough”, and people who are interested to take it as far as they can. It’s nice that we can all still participate in this.
One could probably build a separate service that provides a leaderboard for solution runtimes.
I agree that it’s more of a cozy activity than a hardcore competition, that’s what I appreciate about it most.
> The AoC guy loves recursive descent parsers way too much.
LOL!!
I agreed with a lot of what you wrote, but also a lot of us strive for beautiful solutions regardless of time/memory bounds.
In fact, I’m (kind of) tired of leetcode flagging me for one ultra special worst-case scenario. I enjoy writing something that looks good and enjoying the success.
(Not that it’s bad to find out I missed an optimization in the implementation, but… it feels like a lot of details sometimes.)
Do you know of anything like AoC but that feels less contrived? I often spend the most time understanding the problem requirements because they are so arbitrary - like the worst kind of boardgame! Maybe I should go pick up some OSS tickets...
Being contrived, with puns or other weirdness, is kinda par for the course for this kind of problem. Almost every programming competition I've ever been to has those kinds of jokes.
But the Kattis website is great. The program runs on their server without you getting to see the input (you just get right/wrong back), so it's a bit different. But it also then gives you memory and time constraints which, for the more difficult problems, you must find your way around.
Take a look at Everybody Codes. It occurs in November instead of December, so this year is wrapping up. Like AoC, it is story based but maybe you'll find the problem extraction more to your liking.
I did a post [0] about this last year, and vanilla LLMs didn’t do nearly as well as I’d expected on advent of code, though I’d be curious to try this again with Claude code and codex
> LLMs, and especially coding focused models, have come a very long way in the past year.
I see people assert this all over the place, but personally I have decreased my usage of LLMs in the last year. During this change I’ve also increasingly developed the reputation of “the guy who can get things shipped” in my company.
I still use LLMs, and likely always will, but I no longer let them do the bulk of the work and have benefited from it.
Last April I asked Claude Sonnet 3.7 to solve AoC 2024 day 3 in x86-64 assembler and it one-shotted solutions for part 1 and 2(!)
It's true this was 4 months after AoC 2024 was out, so it may have been trained on the answer, but I think that's way too soon.
Day 3 in 2024 isn't a Math Olympiad tier problem or anything but it seems novel enough, and my prior experience with LLMs were that they were absolutely atrocious at assembler.
I know some folks were disappointed with there being 12 puzzles instead of 24 this year, but I never have time to finish anyway, so it makes no difference to me lol
Exactly. I have always taken AoC as fun and time to learn. But there is so much going on during December, and I do not enjoy doing more than one puzzle a day (it feels like hard work instead of fun). I usually spend time on weekends with kids and family and I am not willing to solve more puzzles during weekdays, so I am falling behind all the time. My plan was always to finish last year's puzzles to enjoy the more interesting ones, but it always felt wrong. So I hope I will have time to finish everything this year :-) I envy people with enough free time to go full on; I would love to be one of them, but there is so much going on everywhere that I have to split my time. Sorry programming world, and especially computers :-D
Eliminating the leaderboard might help. By measuring it as a race, it becomes a race, and now the goal is the metric.
Maybe just have a cool advent calendar thingy like a digital tree that gains an ornament for each day you complete. Each ornament can be themed for each puzzle.
Of course I hope it goes without saying that the creator(s) can do it however they want and we’re nothing but richer for it existing.
That 'digital tree' idea is similar to how AoC has always worked. There's a theme-appropriate ASCII graphic on the problem page that gains color and effects as you complete problems. It's not always a tree, but it was in 2015 (the first year), and in several other years at least one tree is visible. https://adventofcode.com/2015
I've ignored the leaderboard for its entire existence, as the puzzles release at something like 4AM-5AM in my timezone; there's no point getting up 4 hours early, or staying awake 4 hours after bedtime, for some points on the internet.
Instead, getting gold stars for solving the puzzles is incentive enough, and can be done as a relaxing thing in the morning.
No matter what you do, as the puzzles get harder, you won't solve them in a day (or even a lifetime) if you don't come up with good algorithms/methods/heuristics.
I disagree. Having a leaderboard also leaks into the puzzle design. So the experience is different, even if you choose to ignore the leaderboard as a participant.
That’s also completely true and something I often say about gaming. You don’t like achievements? Just don’t do them. Your enjoyment shouldn’t be a function of how others interact with the product.
I never, in all the years of participating in AoC, took a look at the global leaderboard.
Even before LLMs I knew it was filled with results faster than you can blink.
So for some of us (from gut feeling, the vast majority) it was always just for fun. Usually I spent until at least March finishing as much as I did each year.
Oh, I'm quite sure it does. In fact, it's a central thing in so much of psychology. The only difference is how you get there. Some people can just ignore it, and for others it takes more effort.
I stopped staying up until midnight for the new problem set to be released and instead would do them in the afternoon. Even though I could compare my time to the leaderboard, simply not having the possibility of being on the board removed most of the comparison anxiety.
While part of the fun is doing the daily tasks with your friends, you can still access the previous years and their challenges if you want to continue after advent!
In the IEEEXTREME university programming competition there are ~10k participating teams.
Our university has a quite strong competitive programming program and the best teams usually rank in the top 100. Last year a team ranked 30th, and it wasn't even our strongest team (which didn't participate).
This year none of our teams was able to get in the top 1000. I would estimate close to 99% of the teams in the Top 1000 were using LLMs.
Last year they didn't seem to help much, but this year they rendered the competition pointless.
I've read blogs and seen videos of people who got onto the AoC global leaderboard last year without using LLMs, but I think this year it wouldn't be possible at all.
Man, those people using LLMs in competitive programming ... where's the fun in that? I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
I’m a very casual gamer but even I run into obvious cheaters in any popular online game all the time.
Cheating is rampant anywhere there’s an online competition. The cheaters don’t care about respecting others, they get a thrill out of getting a lot of points against other people who are trying to compete.
Even in the real world, my runner friends always have stories about people getting caught cutting trails and all of the lengths their running organizations have to go through now to catch cheaters because it’s so common.
The thing about cheaters in a large competition is that it doesn’t take many to crowd out the leaderboard, because the leaderboard is where they get selected out. If there are 1000 teams competing and only 1% cheat, that 1% could still fill the top 10.
Yeah. I was happy to see this called out in their /about
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
> I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
reminds me of something I read in "I’m a high schooler. AI is demolishing my education." [0,1] emphasis added:
> During my sophomore year, I participated in my school’s debate team. I was excited to have a space outside the classroom where creativity, critical thinking, and intellectual rigor were valued and sharpened. I love the rush of building arguments from scratch. ChatGPT was released back in 2022, when I was a freshman, but the debate team weathered that first year without being overly influenced by the technology—at least as far as I could tell. But soon, AI took hold there as well. Many students avoided the technology and still stand against it, but it was impossible to ignore what we saw at competitions: chatbots being used for research and to construct arguments between rounds.
high school debate used to be an extracurricular thing students could do for fun. now they're using chatbots in order to generate arguments that the students can just regurgitate.
the end state of this seems like a variation on Dead Internet Theory - Team A is arguing the "pro" side of some issue, Team B is arguing the "con" side, but it's just an LLM generating talking points for both sides and the humans acting as mouthpieces. it still looks like a "debate" to an outside observer, but all the critical thinking has been stripped away.
> high school debate used to be an extracurricular thing students could do for fun.
High school debate has been ruthless for a long time, even before AI. There has been a rise in the use of techniques designed to abuse the rules and derail arguments for several years. In some regions, debates have become more about teams leveraging the rules and technicalities against their opponents than organically trying to debate a subject.
It sucks that the fun is being sucked out of debate, but I guess a silver lining is that the abuse of these tactics helps everyone understand that winning debates isn't about being correct, it's about being a good debater. And a similar principle can be applied to the application of law and public policy as well.
Why is that strange? Competitive programming, as the name suggests, is about competing. If the rules allow it, not using an LLM is actually more like running the Tour de France.
If the rules don't allow that and yet people do then well, you need online qualifiers and then onsite finals to pick the real winners. Which was already necessary, because there are many other ways to cheat (like having more people than allowed in the team).
I'm a bit surprised you can honestly believe that a competition of humans isn't somehow different if allowed to use solution-generators. Like using a calculator in an arithmetic competition. Really?
It's not much different than outlawing performance enhancing drugs. Or aimbots in competitive gaming. The point is to see what the limits of human performance are.
If an alien race came along and said "you will all die unless you beat us in the IEEE programming competition", I would be all for LLM use. Like if they challenged us to Go, I think we'd probably / certainly use AI. Or chess - yeah, we'd be dumb to not use game solvers for this.
But that's not in the spirit of the competition if it's University of Michigan's use of Claude vs MIT's use of Claude vs ....
Imagine if the word "competition" meant "anything goes" automatically.
It's a different kind of fun. Just like doing math problems on paper can be fun, or writing code to do the math can be fun, or getting AI to write the code to do the math can be fun.
They're just different types of fun. The problem is if one type of fun is ruined by another.
It can be a matter of values from your upbringing or immediate environment. There are plenty of places where they value the results, not the journey, and they think that people who avoid cheating are chumps. Think about that: you are in a situation where you just want to do things for fun but everyone around you will disrespect you for not taking the easy way out.
Weirdly, I feel a lot more accepting of LLMs in this type of environment than in making actual products. The point is doing things fast and correct enough, so in some ways an LLM is just one more tool.
With products I want actual correctness, and not something that gets thrown away.
We're starting to get to a point where AI can generate better code than your average developer, though. Maybe not a great developer yet, but a lot of products are written by average developers.
Given what I understand about the nature of competitive programming competitions, using an LLM seems kind of like using a calculator in an arithmetic competition (if such a thing existed) or a dictionary in a spelling bee.
These contests are about memorizing common patterns and banging out code quickly. Outsourcing that to an LLM defeats the point. You can say it's a stupid contest format, and that's fine.
(I did a couple of these in college, though we didn't practice outside of competition so we weren't especially good at it.)
In 1997, Deep Blue beat Garry Kasparov, the world chess champion. Today, chess grandmasters stand no chance against Stockfish, a chess engine that can run on a cheap phone. Yet chess remains super popular and competitive today, and while there are occasional scandals, cheating seems to be mostly prevented.
I don’t see why competitive debate or programming would be different. (But I understand why a fair global leaderboard for AOC is no longer feasible).
When I did competitions like these at uni (~10-15 years ago), we all used some thin-clients in the computer lab where the only webpages one could access were those allowed by the competition (mainly the submission portal). And then some admin/organizers would feed us and make sure people didn't cheat. Maybe we need to get back to that setup, heh.
Serious in-person competitions like ICPC are still effective against cheating. The first phase happens in a limited number of venues and the computers run a custom OS without internet access. There are many people watching, so competitors don't use their phones, etc.
The Regional Finals and World Finals are in a single venue with a very controlled environment. Just like the IOI and other major competitions.
National High School Olympiads have been dealing with bigger issues because there are too many participants in the first few phases, and usually the schools themselves host the exams. There has been rampant cheating. In my country I believe the organization has resorted to manually reviewing all submissions, but I can only see this getting increasingly less effective.
This year the Canadian Computing Competition didn't officially release the final results, which for me is the best solution:
> Normally, official results from the CCC would be released shortly after the contest. For this year’s contest, however, we will not be releasing official results. The reason for this is the significant number of students who violated the CCC Rules. In particular, it is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help. As such, the reliability of “ranking” students would neither be equitable, fair, or accurate.
Online competitions are just hopeless. AtCoder and Codeforces have rules against AI but no way to enforce them. A minimally competent cheater is impossible to detect. Meta Hacker Cup has a long history and is backed by a large company, but had its leaderboard crowded by cheaters this year.
Oof. I had a great time cracking the top 100 of Advent of Code back in 2020. Bittersweet to know that I got in while it was still a fun challenge for humans.
For those who think this is a typo, uiua [1] (pronounced "wee-wuh") is a stack-based array programming language.
I solved a few problems with it last year, and it is amazing how compact the solutions are. It also messes with your head, and the community surrounding it is interesting. Highly recommended.
Excited to see AOC back and I think it was a solid idea to get rid of the global leaderboard.
We (Depot) are sponsoring this year and have a private leaderboard [0]. We're donating $1k each for the top five finishers to a charity of their choice.
Isn't a publicly advertised private leaderboard - especially with cash prizes - against the new guidance? Certainly the spirit of the guidance.
>What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I've made it so you can share a read-only view of your private leaderboard. *Please don't use this feature or data to create a "new" global leaderboard.*)
I don't think it should be a charity of their choice. I think it should have to be one of the top 5 most reputable charities in the world, like Doctors Without Borders or the Salvation Army.
You could, but you shouldn't have to. If you want to sign up for XYZ, you need to sign up for BigCorp, you need to add your phone number to verify your account, etc.
The "etc" is pretty important here. You can log in using Reddit, and you can create a random throwaway Reddit account without filling in any other details (no email address or phone number required).
I believe they no longer allow new accounts without an email address.
It used to be that reddit had a user creation screen that looked like you needed to input an email address, but you could actually just click "Next" to skip it.
The last time I had cause to make a reddit account, they no longer allowed this.
Having done my own auth I get why they do it this way. LLMs are already a massive problem with AoC, I imagine an anonymous endpoint to validate solutions would be even worse.
Having done auth myself, I can also understand why auth is being externalised like this. The site was flooded with bots and scrapers long before LLMs gained relevance and adding all the CAPTCHAs and responding to the "why are you blocking my shady CGNAT ISP when I'm one of the good ones" complaints is just not worth it. Let some company with the right expertise deal with all of that bullshit.
I wish the site had more login options, though. It's a tough nut to crack; pick a small, independent OAuth login service not under the control of a big tech company and you're basically DDoSing their account creation page for all of December. Pick a big tech company and you're probably not gaining any new users. You can't do decentralized auth because then you're just doing authentication DDoS with extra steps.
If I didn't have a github account, I'd probably go with a throwaway reddit account to take part. Reddit doesn't really do the same type of tracking Twitter tries to do and it's probably the least privacy invasive of the bunch.
I've never done this before but honestly I am just turned off by the website and font being hard to read. I get that's the geek aesthetic or whatever, but it's a huge turn off for me.
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
And yet I expect the whole leaderboard to be full of AI submissions...
I am so glad there is no leaderboard this year. Making it a competition really is against the spirit of advent calendars in general. It’s also not a fair competition by default simply due to the issue of time zones and people’s life schedules not revolving around it.
There are plenty of programming competitions and hackathons out there. Let this one simply be a celebration of learning and the enjoyment of problem solving.
I agree with the first point, but the second point feels irrelevant. Yeah, people's life schedules don't revolve around it, but that doesn't mean it shouldn't be a competition. Most people who play on chess.com don't have lives that revolve around it, but that doesn't mean that chess.com should abolish Elo rankings.
Chess doesn't rank people based on how quickly they complete a puzzle after midnight EST (UTC-5). For people in large parts of Asia, midnight EST translates to late morning / early afternoon. This means someone in Asia can complete each AoC puzzle during daylight hours whereas someone in eastern North America will have to complete the puzzle in the middle of the night.
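For anyone who wants to check their own unlock time, the conversion is mechanical. A quick Python sketch (Python 3.9+ for zoneinfo; the date and zone list are just for illustration):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Puzzles unlock at midnight US Eastern (UTC-5 in December).
unlock = datetime(2025, 12, 1, 0, 0, tzinfo=ZoneInfo("America/New_York"))
for tz in ("America/New_York", "Europe/Stockholm", "Asia/Tokyo"):
    print(tz, unlock.astimezone(ZoneInfo(tz)).strftime("%Y-%m-%d %H:%M"))
# America/New_York 2025-12-01 00:00
# Europe/Stockholm 2025-12-01 06:00
# Asia/Tokyo       2025-12-01 14:00
```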
> The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard.
Depends how you look at it. Some of my colleagues rave about Claude Code, so I was thinking about trying it out on these puzzles. In that sense it is "going to the gym", just for a different thing. Since I do AoC every year, I feel like it'll give me a good feel for Claude Code compared to my baseline. And it's not just "prompting", but figuring out a workflow with tests and brainstorming and iteration and all that. I guess if the LLM can just one-shot every puzzle that's less interesting, but I suppose it would be good to know it can do that...
It 100% can do that. LLMs are trained on an unfathomable amount of data, and every AoC puzzle can be solved by identifying the algorithm behind it. It's Leetcode in a friendlier and more festive spirit.
I mean they're great programming tests, for both people and AI I'd argue - like, it'd be impressive if an AI can come up with a solution in short order, especially with minimal help / prompting / steering. But it wouldn't be a personal achievement, and if it was a competition I'd label it as cheating.
Looking forward to it but also sad that it is "only" 12 puzzles, but I completely respect Eric's decision to scale it back.
I've got 500 stars (i.e. I've completed every day of all 10 previous years) but not always on the day the puzzles were available, probably 430/500 on the day. (I should say I find the vast majority of AoC relatively easy as I've got a strong grounding in both Maths and Comp Sci.)
First of all I only found out about AoC in 2017 and so I did 2015 and 2016 retrospectively.
Secondly I can keep up with the time commitments required up until about the 22nd-24th (which is when I usually stop working for Christmas). From then time with my wife/kids takes precedence. I'll usually wrap up the last bits sometime from the 27th onwards.
I've never concerned myself with the pointy end of the leaderboards due to timezones as the new puzzles appear at 5am local time for me and I've no desire to be awake at that time if I can avoid it, certainly not for 25 days straight. I expect that's true of a large percentage of people participating in AoC too.
My simple aim every day is that my rank for solving part 2 of a day is considerably lower than my rank for solving part 1.
(To be clear, even if I was up and firing at 5am my time every day I doubt I could consistently get a top 100 rank. I've got ten or so 300-1000 ranks by starting ~2 hours later but that's about it. Props to the people who can consistently appear in the top 100. I also start most days from scratch whilst many people competing for the top 100 have lots of pre-written code to parse things or perform the common algorithms.)
I also use the puzzles to keep me on my toes in terms of programming and I've completed every day in one of Perl, C or Go and I've gone back and produced solutions in all 3 of those for most days. Plus some random days can be done easily on the command-line piping things through awk, sed, sort, grep, and the like.
The point of AoC is that everyone is free to take whatever they want from it.
Some use it to learn a new programming language. Some use it to learn their first language and only get a few days into it. Some use it to make videos to help others learn how to program in a specific language. Some use it to learn how/when to use structures like arrays, hashes/maps, red-black trees, etc, and then how/when to use classic Comp Sci algorithms like A* or SAT solvers, Dijkstra's, etc, all the way to some random esoteric things like Andrew's monotone chain convex hull algorithm for calculating the perimeter of a convex hull. There are also the mathsy type problems, often involving the Chinese Remainder Theorem and/or some variation of finite fields.
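To make one of those concrete: a lot of AoC pathfinding days reduce to something like this minimal Dijkstra sketch over a grid of step costs (the grid here is made up purely for illustration):

```python
import heapq

def dijkstra(grid, start, goal):
    """Cheapest path cost on a 2D grid where each cell is the cost to enter it."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        cost, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return cost
        if cost > dist[(r, c)]:
            continue  # stale queue entry; a cheaper route was already found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + grid[nr][nc]
                if new_cost < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = new_cost
                    heapq.heappush(queue, (new_cost, (nr, nc)))
    return None  # goal unreachable

print(dijkstra([[1, 1, 9], [9, 1, 9], [9, 1, 1]], (0, 0), (2, 2)))  # 4
```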
My main goal is to come up with code that is easy to follow and performs well as a general solution rather than being overly specific to my individual input. I've also solved most years with a sub-1-second total runtime (per year, so each day averages less than 40 ms).
Anyway, roll on tomorrow. I'll get to the day 1 problem once I've got my kid up and out the door to go to school as that's my immediate priority.
Well, my point, if it wasn’t clear, was that I simply don’t find those problems fun.
I enjoy programming a lot, but most of that enjoyment comes from things like designing APIs that work well and that people enjoy using, or finding things that allow me to delete a ton of legacy code.
I did try to do Advent of Code many times. Usually I get bored halfway through reading the first problem, and when I finally get through it I realize that these usually involve trade-offs that are annoying to make in terms of memory/CPU usage, plus several edge cases to deal with.
Well, some people like coding and logic puzzles, especially in this raw form where you can forget all the noise you encounter while coding professionally, with its many hoops and responsibilities.
I agree. Didn't these puzzles ruin interviewing for many years now? AI came along and they're still doing it. Some things will needlessly drag on before they die, I guess.
By the same token, AI came along and we all still have intelligence, needless, eh? I mean people reading and writing stuff has nothing to do with AI. I don't see how some people see everything as a zero-sum game.
All AI is doing is solving these puzzles, which proves they don't need any form of intelligence. You're wrong for associating AI with human intelligence. It will never happen. It might be faked once, like the moon landing, but that's it.
How do they ruin interviewing? The whole point of these puzzles is that they’re meant to be fun to solve, not a means to an end, but enjoyable for what they are.
I'm not sure I understand this. Most puzzles are number-crunching with very little to do with graphics (maybe one or two), so no, OpenGL usually isn't used AFAIK.
Of course, folks may use it to visualise the puzzles but not to solve them.
I support the no global leaderboard. I was in 7th place last year but quickly got bored maintaining the aggressive AI pipeline required to achieve that. If I wanted to maintain pipelines I'd just do work, and there will never be a good way to prevent people from using AI like this. Advent of Code should be fun, thank you for continuing to do it. I'm looking forward to casually playing this year!
It was pretty boring trying to place against aggressive AI pipelines like yours despite the explicit requests not to use them [1]. I'm sorry to hear it became boring for you too.
I mean, everyone else was using them too, how can you not? That was the name of the game if you wanted to be competitive in 2024. Not using them would be like trying to do competitive pro cycling without steroids, basically impossible.
Saying everyone else is cheating is not a valid excuse for cheating. It's why Armstrong became a pariah, even though he and everyone else were EPO doping.
Someone else in the thread lamented the problems as "too easy" and I wondered what world I was living in.
The group of people for which the problems are "too easy" is probably quite small.
According to Eric last year (https://www.reddit.com/r/adventofcode/comments/1hly9dw/2024_...) there were 559 people who had obtained all 500 stars. I'm happy to be one of them.
My own experience is unusual. I've got two separate bachelor's degrees, one in Comp Sci and one in Mathematics, with a 7-year gap between them. Those degrees, plus 25+ years of doing software development as a job, mean I find the vast majority of AoC quite easy, but not all of it; there are still some stinkers.
Being able to look at an AoC problem and think "There's some algorithm behind this, what is it?" is hugely helpful.
The "Slam Shuffle" problem (2019 day 22) was a classic example of this that sticks in my mind. The magnitude of the numbers involved in part 2 of that problem made it clear that a naive iteration approach was out of the question, so there had to be a more direct path to the answer.
As I write the code for part 1 of any problem I tend to think "What is the twist for part 2 going to be? How is Eric going to make it orders of magnitude harder?" Sometimes I even guess right, sometimes it's just plain evil.
Yeah, getting 250 or so stars is going to be straightforward, something most programmers with a couple of years of experience can probably manage. Then another 200 or so require some more specialized know-how (maybe some basic experience with parsers, or making a simple virtual machine, or recognizing a topological sort situation). Then probably the last 50 require something a bit more unusual. For me, I definitely have some trouble with any of the problems where modular inverses show up.
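For what it's worth, the modular-inverse step itself has become mechanical in Python: since 3.8, pow accepts a negative exponent together with a modulus. A tiny illustration (the numbers are arbitrary):

```python
# Modular inverse: the x with (a * x) % m == 1, defined when gcd(a, m) == 1.
a, m = 7, 1_000_000_007
inv = pow(a, -1, m)  # Python 3.8+; raises ValueError if no inverse exists
assert (a * inv) % m == 1
```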
That's a pretty crazy background. I wish you'd put your profile in your bio so I could follow you!
It's just bluffing, lying. People lie to make others think they're hot shit. It's like the guy in school who gets straight A's and says he never studies. Yeah I'll bet.
They... sort of are though? A year or two ago I just waited until the very last problem, which was min-cut. Anybody with a computer science education who has seen the prompt Proof. before should be able to tackle this one with some effort, guidance, and/or sufficient time. There are algorithms that don't even require all the high-falutin graph theory.
I don't mean to say my solution was good, nor was it performant in any way (it was not; I arrived at adjacency (linked) lists), but the problem is tractable to the well-equipped with sufficient headdesking.
Operative phrase being "a computer science education," as per GGP's point. Easy is relative. Let's not leave the bar on the floor, please, while LLMs are threatening to hoover up all the low hanging fruit.
You say in your comment: "Anybody with a computer science education ... should be able to tackle this one" which is directly opposed to what they advertise: "You don't need a computer science background to participate"
Got to agree. I'm even surprised at just how little progress many of my friends and ex-colleagues over the years make given that they hold down reasonable developer jobs.
My experience has been "little progress" is related to the fact that, while AoC is insanely fun, it always occurs during a time of year when I have the least free time.
Maybe when I was in college (if AoC had existed back then) I could have kept pace, but if part of your life is also running a household, then between wrapping up projects for work, finalizing various commitments I want wrapped up for the year, getting together with family and friends for various celebrations, and finally travel and/or preparing your own house for guests, I'm lucky if I have time to sit down with a cocktail and book the week before Christmas.
Seeing the format changed to 12 days makes me think this might be the first time in years I could seriously consider doing it (to completion).
Yep, the years I've made it the furthest have been around the 11-12 day mark. The inevitably life and kids and work get in the way and that's it for another year. Changing to a 12 day format is unlikely to affect me at all :)
In order to complete AoC you need more than just the ability to write code and solve problems. You need to find abstract problem-solving motivating. A lot of people don't see the point in competing for social capital (internet points) or expending time and energy on problems that won't live on after they've completed them.
I have no evidence to say this, but I'd guess a lot more people give up on AoC because they don't want to put in the time needed than give up because they're not capable of progressing.
I've never tried AoC prior but with other complex challenges I've tried without much research, there comes a point where it just makes more sense to start doing something on the backlog at home or a more specific challenge related to what I want to improve on.
I find the problem I have is once I get going on a problem I can't shake it out of my head. I end up lying in bed for hours pleading with my brain to let it go if I've not found the time to finish it during the crumbs of discretionary time in the day!
This type of problem has very little resemblance to the problems I solve professionally - I’m usually one level of abstraction up. If I run into something that requires anything even as complicated as a DAG it’s a good day.
I think this has a lot more to do with time commitment. Once the problems take more than ~1 hour I tend to stop because I have stuff to do, like a job that already involves coding.
Because like 80% of AoC problems require a deep computer science background and very specific algorithms that almost nobody uses in their day-to-day work.
Why try any more? There are so many fucking frauds in this field.
It's totally true. I was doing Advent of Code before I had any training or work in programming at all, and a lot of it can be done with just thinking through the problem logically and using basic problem solving. If you can reason a word problem into what it's asking, then break it down into steps, you're 90% of the way there.
The statistics tell a far different story, I'm afraid.
I have an EE background, not CS, and haven't had too much trouble the last few years. I'm not aiming for the global leaderboard, though. I think that with good problem-solving skills, you should be able to push through the first 10 days most years. Some years were more front-loaded, though.
Agreed. I have a CS background and years of experience, but I don't get very far with these. At some point it also becomes a very large time commitment, which I don't have.
Advent of Code is one of the highlights of December for me.
It's sad, but inevitable, that the global leaderboard had to be pulled. It's also understandable that this year is just 12 days, so takes some pressure off.
If you've never done it before, I recommend it. Don't try and "win", just enjoy the problem solving and the whimsy.
While it's "only" 12 days, there are still 24 challenges. Since there's no leaderboard and I do it for fun, I'll do it over 24 days.
That sounds healthy! But I would note that there's been interesting community discussions on reddit in past years, and I've gotten caught up in the "finish faster so I can go join the reddit discussion without spoilers". It turns out you can have amazing in-jokes about software puzzles and ascii art - but it also taught me in a very visceral way that even for "little" problems, building a visualizer (or making sure your data structures are easy-to-visualize) is startlingly helpful... also that it's nice to have people to commiserate with who got stuck in the same garden path/rathole that you did.
Any recommendations on how to do this?
Same. I usually try to use it as the "real-world problem" I need for learning a new language. Is there anywhere that people have starter advice/templates for various languages? I'd love to know:
- install like this
- initialize a directory with this command
- here are the VSCode extensions (or whatever IDE) that are the bare minimum for the language
- here's the command for running tests
learnxinyminutes.com is a good resource that tries to cover the key syntax/paradigms for each language, I find it a helpful starting point to skim.
This is an area where LLMs can really help out: getting started with an unfamiliar language/IDE/framework.
The "only" 12 days might be disappointing (but totally understandable), however I won't mourn the global leaderboard which always felt pointless to me (even without the llm, the fact that it depends on what time you did solved problems really made it impractical for most people to actually compete). Private leaderboards with people on your timezone are much nicer.
The global leaderboard was a great way to find really crazy good people and solutions, however. I picked through a couple of these guys' solutions and learned a few things. One guy had even written his own special-purpose language mainly to solve AoC problems fast; he was, of course, a compilers guy.
I think I’ll set up a local leaderboard with friends this year. I was never going to make it to the global board anyway but it is sad to see it go away.
> the global leaderboard had to be pulled.
Frankly I'm better off with it being this way instead of the sweaty cupstacking LLM% speedrun it became as it gained popularity.
And this is how I know I am not a developer/programmer: I have no urge or interest in such an event.
Your logic is flawed. You can be a developer and not be interested in AoC. Not being interested in AoC only shows you're not interested in AoC.
Why post, then? No one cares about your lack of interest.
It always seemed odd to me that a persistent minority of HN readers seem to have no interest in recreational programming/technical problems solving and perpetually ask "why should I care?"
It's totally fine not to care, but I can't quite get why you would then want to be an active member in a community of people who care about this stuff for no other reason than they fundamentally find it interesting.
I wonder how this is the most straightforward way to know that?
I _love_ the Advent of Code. I actually (selfishly) love that it's only 12 days this year, because by about halfway I'm struggling to find the time to sit down and do the fantastic problems amid all the holiday activities IRL.
Huge thanks to those involved!
Yeah, last year I only got to Day 7 (on Dec 26). I hope the smaller number of puzzles reduces "the fear of falling behind".
I agree so much. Maybe I'll finally get a year done!
I’m so excited for this year.
Finally that time of year again! I've been looking forward to this for a long time. I usually drop off about halfway anyways (finished day 13, 14 and 13 the previous 3 years), as that's when December gets too busy for me to enjoy it properly, so I personally don't mind the reduction in problems at all, really. I'm just happy we still have great puzzles to look forward to.
Historical note: the original coding advent calendar was the Perl Advent Calendar, started in 2000 and still going.
https://perladvent.org/archives.html
Advent of Code is awesome also of course -- and was certainly inspired by it.
Taking out the public leaderboard makes sense IMO. Even when you don't consider the LLM problem, the public leaderboard's design was never really suited for anyone outside of the very specific short list of (US) timezones where competing for a quick solution was even feasible.
One thing I do think would be interesting is to see solution rate per hour block. It'd give an indication of how popular advent of code is across the world.
I live in Sweden nowadays (UTC+1) and it starts at 6am so last year I woke up at 5:30, grabbed a coffee, and gave it a go.
Got nowhere near the leaderboard times so gave up after four days!
LLMs spoiled it, but it was fun to see the genuine top times. Watching competitive coders solve in real time is interesting (there are YouTube videos), and I wouldn't have discovered them without the leaderboard.
I am very happy that we get Advent of Code again this year. However, I have read the FAQ for the first time, and I must admit I am not sure I understand the reasoning behind this:
> If you're posting a code repository somewhere, please don't include parts of Advent of Code like the puzzle text or your inputs.
The text I get, but the inputs? Well, I will comply, since I am getting a very nice thing for (almost) free and it is polite to respect the wishes here, but since I commit the inputs into the repository (so I can run tests against them), it is a bit of a shame the repo must be private.
If enough inputs are available online, someone can presumably collect them and clone the entire project without having access to the puzzle input generation code, which is the "secret sauce" of the project.
Are you saying that we all have different inputs? I've never actually checked that, but I don't think it's true. My colleagues have gotten stuck in the same places and have mentioned aspects of puzzles and input characteristics and never spoken past each other. I feel like if we had different inputs we'd have noticed by now.
It depends on the individual problem, some have a smaller problem space than others so unique inputs would be tricky for everyone.
But there are enough possible inputs that most people shouldn't come across anyone else with exactly the same input.
Part of the reason why AoC is so time consuming for Eric is that not only does he design the puzzles, he also generates the inputs programmatically, which he then feeds through his own solver(s) to ensure correctness. There is a team of beta testers that work for months ahead of the contest to ensure things go smoothly.
(The adventofcode subreddit has a lot more info on this.)
He puts together multiple inputs for each day, but they do repeat over users. There's a chance you and your colleagues have the same inputs.
He's also described, over the years, his process for making the inputs. Related to your comment, he tries to make sure that there are no features of some inputs that make the problem especially hard or easy compared to the other inputs. Look at some of the math ones: a few tricks work most of the time (but not every time). Let's say after some processing you get three numbers and the solution is their LCM; that will probably be true of every input, not just coincidentally, even if it's not an inherent property of the problem itself.
You do get different inputs, but they largely share characteristics so good solutions should always work and naive ones should consistently fail.
There has been the odd puzzle where some inputs have allowed simpler solutions than others, but those have stood out.
I don't know how much they "stand out" because their frequency makes it so that the optimal global leaderboard strat is often to just try something dumb and see if you win input roulette.
If we just look at the last three puzzles: day 23 last year, for example, admitted the greedy solution, but only for some inputs. Greedy clearly shouldn't work (shuffling the vertices in a file that admits it causes it to fail).
It's only a small selection of inputs.
I have a solve group that calls it "Advent of Input Roulette" because (back when there was a global leaderboard) you can definitely get a better expected score by just assuming your input is weak in structural ways.
I use git-crypt to encrypt the inputs in my public repo https://www.agwa.name/projects/git-crypt/ :)
I don't push my solutions publicly, but I made an input downloader so you can input your cookie from your browser and load (and cache) the inputs rather than commit them.
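A minimal version of that idea, for the curious. The endpoint is the standard adventofcode.com input URL; everything else (function name, cache layout) is invented for the sketch, and the local cache keeps you from hammering the site:

```python
import pathlib
import urllib.request

def fetch_input(year, day, session, cache_dir="inputs"):
    """Download (and cache) your puzzle input using the browser 'session' cookie."""
    path = pathlib.Path(cache_dir) / f"{year}-day{day:02}.txt"
    if path.exists():
        return path.read_text()  # cached: don't hit the server again
    req = urllib.request.Request(
        f"https://adventofcode.com/{year}/day/{day}/input",
        headers={"Cookie": f"session={session}"},
    )
    with urllib.request.urlopen(req) as resp:
        text = resp.read().decode()
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)
    return text
```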
This is cool. Kudos!
This is not surprising at all, to me. Just commit the example input and write your test cases against that. In a nicely structured solution, this works beautifully with example-style tests, like Python or Rust doctests, or even running jsdoc @example stanzas as tests with e.g. the @linus/testy module.
> Just commit the example input
The example input(s) is part of the "text", and so committing it is also not allowed. I guess I could craft my own example inputs and commit those, but that exceeds the level of effort I am willing to expend to publish a repository no one will likely ever read. :)
The inputs are part of the validation that you did the question, so they're kind of a secret.
I make my code public, and keep my inputs in a private submodule.
I've done all the years and all the problems.
The part I enjoy most, after figuring out a solution for myself, is seeing what others did, on Reddit or among a small group of friends who also do it. We often have slightly different solutions, or realize one of our solutions worked "by accident", ignoring some side case that didn't appear in our particular input. That's really the fun of it, IMHO.
I had never heard of this before I saw something announcing this year's adventure. It looked interesting, so I gave it a try, doing 2024. I had a blast. In concept, it's very similar to Project Euler but oriented more towards programming rather than being heavily mathematical. Like Project Euler, the first part is typically trivial, while part 2 can put the hammer down and make you think to devise an approach that can arrive at a solution in milliseconds rather than the death of the universe.
Opinion poll:
Python is extremely suitable for these kind of problems. C++ is also often used, especially by competitive programmers.
Which "non-mainstream" or even obscure languages are also well suited for AoC? Please list your weapon of choice and a short statement why it's well suited (not why you like it, why it's good for AoC).
My favourite "non-mainstream" languages are, depending on my mood at the time, either:
- Array languages such as K or Uiua. Why they're good for AoC: Great for showing off, no-one else can read your solution (including yourself a few days later), good for earlier days that might not feel as challenging
- Raw-dogging it by creating a Game Boy ROM in ASM (for the Game Boy's 'Z80-ish' Sharp LR35902). Why it's good for AoC: All of the above, you've got too much free time on your hands
Just kidding, I use Clojure or Python, and you can pry itertools from my cold, dead hands.
I made my own, with a Haskell+Bash flavor and a REPL that reloads with each keystroke: https://www.youtube.com/watch?v=r99-nzGDapg
This year I've been working on a bytecode compiler for it, which has been a nice challenge. :)
When I want to get on the leaderboard, though, I use Go. I definitely felt a bit handicapped by the extra typing and lack of 'import solution' (compared to Python), but with an ever-growing 'utils' package and Go's fast compile times, you can still be competitive. I am very proud of my 1st place finish on Day 19 2022, and I credit it to Go's execution speed, which made my brute-force-with-heuristics approach just fast enough to be viable.
>I made my own, with a Haskell+Bash flavor and a REPL that reloads with each keystroke
That was impressive! Do you have a public repo with your language, anywhere?
yep, https://github.com/lukechampine/slouch. Fair warning, it's some of the messiest code I've ever written (or at least, posted online). Hoping to clean it up a bit once the bytecode stuff is production-ready.
I like to use Haskell, because parser combinators usually make the input parsing aspect of the puzzles extremely straightforward. In addition, the focus of the language on laziness and recursion can lead to some very concise yet idiomatic solutions.
Example: find the first generation in which this "game of life" variant has more than 1000 cells in the "alive" state.
Solution: generate an infinite list of all states and iterate over it until you find one with >= 1000 alive cells.
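A rough Python analog of that lazy style, with a toy step function standing in for the real rules (in Haskell this is just `iterate step initial`):

```python
def states(initial, step):
    """Lazily yield initial, step(initial), step(step(initial)), ..."""
    state = initial
    while True:
        yield state
        state = step(state)

# Toy stand-in for a game-of-life step: the set of live cells just grows.
def evolve(alive):
    return alive | {max(alive) + 1}

first = next(s for s in states({0}, evolve) if len(s) >= 1000)
print(len(first))  # 1000
```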
Yes, there are some cool solutions using laziness that aren't immediately obvious. For example, in 2015 and 2024 there were problems involving circuits of gates that were elegantly solved using the Löb function:
https://github.com/quchen/articles/blob/master/loeb-moeb.md
Do you plan to share your solutions on GitHub or somewhere similar?
I actually plan on doing this year in Gleam, because I did the last 5 years in Haskell and want to learn a new language this year. My solutions for last year are on github at https://github.com/WJWH/aoc2024 though, if you're interested.
Does this solution copy the state on each iteration?
Haskell values are immutable, so it creates a new state on each iteration. Since most of these "game of life" type problems need to touch every cell in the simulation multiple times anyway, building a new value is not really that much more expensive than mutating in place. The Haskell GC is heavily optimized for quickly allocating and collecting short-lived objects anyway.
But yeah, if you're looking to solve the puzzle in under a microsecond you probably want something like Rust or C and keep all the data in L1 cache like some people do. If solving it in under a millisecond is still good enough, Haskell is fine.
Fun fact about Game of Life is that the leading algorithm, HashLife[1], uses immutable data structures. It's quite well suited to functional languages, and was in fact originally implemented in Lisp by Bill Gosper.
1. https://en.wikipedia.org/wiki/Hashlife
I think Ruby is the ideal language for AoC:
* The expressive syntax helps keep the solutions short.
* It has an extensive standard library with tons of handy methods for AoC-style problems: Enumerable#each_cons, Enumerable#each_slice, Array#transpose, Array#permutation, ...
* The bundled "prime" gem (for generating primes, checking primality, and prime factorization) comes in handy for at least a few of problems each year.
* The tools for parsing inputs and string manipulation are a bit more ergonomic than what you get even in Python: first class regular expression syntax, String#scan, String#[], Regexp::union, ...
* You can easily build your solution step-by-step by chaining method calls. I would typically start with `p File.readlines("input.txt")` and keep executing the script after adding each new method call so I can inspect the intermediate results.
Perl is my starting point.
It has many of the required structures (hashes/maps, ad hoc structs, etc) and is great for knocking up a rough and ready prototype of something. It's also quick to write (but often unforgiving).
I can also produce a solution for pretty much every problem in AoC without needing to download a single separate Perl module.
On the negative side there are copious footguns available in Perl.
(Note that if I knew Python as well as I knew Perl I'd almost certainly use Python as a starting point.)
I also try and produce a Go and a C solution for each day too:
* The Go solution is generally a rewrite of the initial Perl solution but doing things "properly" and correcting a lot of the assumptions and hacks that I made in the Perl code. Plus some of those new fangled "test" things.
* The C solution is a useful reminder of how much "fun" things can be in a language that lacks built-in structures like hashes/maps, etc.
I used my homemade shell language last year, called elk shell. It worked surprisingly well, better than other languages I've tried, because unlike other shell languages it is just a regular general purpose scripting language with a standard library that can also run programs with the same syntax as function calls.
I use Python at work but code these in Kotlin. The stdlib for lists is very comprehensive, and the syntax is sweet. It's so easy to make a chain of map, filter, and some reduction or nice util (foldr, zipWithNext, windowed, etc). It flows very well with my thought process, whereas in Python I feel list comprehensions are in the wrong order, lambdas are weak, etc.
I write most of it as pure functional/immutable code unless a problem calls for speed. And with extension functions I've made over the years and a small library (like 2D vectors or grid utils) it's quite nice to work with. For example, if I have a 2D list (List<List<E>>) and my 2D vec, like a = IntVec(5,3), I can do myList[a] and get the element, thanks to an operator-overload extension on list-of-lists.
With my own utils and extension functions accumulated over years of competitive programming, it's all very fluent.
Go is strong. You get something where writing a solution doesn't take too much time, you get a type system, you can brute-force problems, and the usual mind-numbing boring data-manipulation handling fits well into the standard tools.
OCaml is strong too. Stellar type system, fast execution, and sane semantics, unlike 99% of all programming languages. If you want to create elegant solutions to problems, it's a good language.
For both, I recommend coming prepared. Set up a scaffold and create a toolbox which matches the typical problems you see in AoC. There's bound to be a 2D grid among the problems, and you need an implementation; if it can handle out-of-bounds access gracefully, things are often much easier, and so on. You don't want to hammer your head against the wall solving parsing problems instead of the actual problem. Having a combinator-parser library already in the project will help, for instance.
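To illustrate the grid advice in a language-neutral way (Python here for brevity, not Go or OCaml): storing the grid as a map from coordinates to cells is one common scaffold that makes out-of-bounds access graceful by default.

```python
def parse_grid(text):
    """Map (row, col) -> char, so neighbours off the edge are just missing keys."""
    return {(r, c): ch
            for r, line in enumerate(text.splitlines())
            for c, ch in enumerate(line)}

grid = parse_grid("ab\ncd")
print(grid.get((0, 1)))       # 'b'
print(grid.get((5, 5), '.'))  # '.' -- off the map, no bounds checks needed
```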
> For both, I recommend coming prepared.
Any recommendations for Go? Traditionally I've gone for Python or Clojure with an 'only builtins or things I add myself' approach (e.g. no NetworkX), but I've been keen to try doing a year in Go; however, I was a bit put off by the verbosity of the parsing and didn't want to get caught spending more time futzing with input lines and err.
Naturally, later problems get more puzzle-heavy, so the ratio of input-handling to puzzle-solving code changes, but it seemed a bit off-putting for the early days, and while I like a builtins-only approach, it seems like the input handling would really benefit from a 'parse don't validate' type approach (goparsec?).
It's usually easy enough in Go to just roll your own for the problems at hand. It won't be as elegant as having access to a combinator-parser, but not all of the AoC problems are parsing problems.
Once you have something which can "load \n separated numbers into an array/slice" you are mostly set for the first few days. Go has verbosity; you can't really get around that.
The key thing in typed languages is to cook up the right data structures. In something without a type system, you can just wing things and work with a mess of dictionaries and lists. But trying to do the same in a typed language is just going to be uphill, as you don't have the tools to manipulate the mess.
Historically, the problems have had some inter-linkage. If you built something on day 3, then it's often used on days 4-6 as well. Hence, you can win by spending a bit more time on elegance on day 3, which makes the work on days 4-6 easier.
Mind you, if you just want to LLM your way through, then this doesn't matter since generating the same piece of code every day is easier. But obviously, this won't scale.
> It won't be as elegant as having access to a combinator-parser, but not all of the AoC problems are parsing problems.
Yeah, this is essentially it for me. While it might not be a 'type-safe and correct regarding error handling' approach with Python, part of the interest of the AoC puzzles is the ability to approach them as 'almost pure' programs: no files except for puzzle input and output, no awkward areas like date/time handling (usually), and absolutely zero frameworks required.
> you can just wing things and work with a mess of dictionaries and lists.
Checks previous years' type-hinted solutions with map[tuple[int, int], list[int]]
Yeah...
> but not all of the AoC problems are parsing problems
I'd say for the first ten years, at least, the first ten-ish days are 90% parsing and 10% solving ;) But yes, I agree, and maybe I'm worrying over a few extra visible `err`s in the code that I shouldn't be.
> if you just want to LLM your way through
Totally fair point if I constrain LLM usage to input handling and the things I already know how to do but don't want to type, although I've always quite liked being able to treat each day as an independent problem with no bootstrapping of any code, no 'custom AoC library', and just the minimal program required to solve the problem.
It was mind-boggling to see SQL solutions last year: https://news.ycombinator.com/item?id=42577736
This is what is great about it, the community posting hyper-creative (sometimes cursed) solutions for fun! I usually use AoC to try out a new language and that has been fun for me over the years.
I've always done it in a Scheme. Generally to learn a new compiler and its quirks.
Scheme is fairly well suited to both general programming, and abstract math, which tends to be a good fit for AoC.
AoC has been a highlight of the season for me since the beginning in 2015. I experimented with many languages over the years, zeroing in on Haskell, then Miranda as my language of choice. Finally, I decided to write my own language to do AoC, and created Admiran (based upon Miranda and other lazy, pure, functional languages) with its own self-hosted compiler and library of functional data structures that are useful in AoC puzzles:
https://github.com/taolson/Admiran https://github.com/taolson/advent-of-code
I think Crystal, Nim, Julia and F# were my favorites from last year's AoC
I wrote a bit more about it here https://laszlo.nu/blog/advent-of-code-2024.html
AoC is a great opportunity for exploring languages!
I've had a lot of fun using Nim for AoC for many years. Once you're familiar with the language and std lib, it's almost as fast to write as Python, but much faster to run (Nim compiles to C, which then gets compiled to your executable). This means that sometimes, if your solution isn't perfect in terms of algorithmic complexity, waiting a few minutes can still save you (waiting 5 minutes for your slow Nim code is OK; waiting 5 hours for your slow Python isn't really, for me). Of course, all problems have a solution that can run in seconds even in Python, but sometimes it's not the one I figure out on the first try.
Downsides: the debugging situation is pretty bad (hope you like printf debugging), and a smaller community means a smaller package ecosystem and fewer reference solutions to look up if you're stuck or looking for interesting alternative ideas after solving a problem on your own. But there's still quality stuff out there.
Though personally I'm thinking of trying Go this year, just for fun and learning something new.
Edit: also a static type system can save you from a few stupid bugs that you then spend 15 minutes tracking down because you added a "15" to your list without converting it to an int first or something like that.
Clojure works really well for AOC.
A lot of the problems involve manipulating sets and maps, which Clojure makes really straightforward.
I'll second Clojure not just for the data structures but also because of the high level functions the standard library ships with.
Things like `partition`, `cycle` or `repeat` have come in so handy when working with segments of lists or the Conway's Game-of-Life type puzzles.
This question is really confusing to me because the point of AoC is the fun and experience of it
So.. a language that you're interested in or like?
Reminds me of "gamers will optimize the fun out of a game"
I'm pretty clojure-curious so might mess around with doing it in that
I've done AoC on what I call "hard mode", where I do the solutions in a language I designed and implemented myself. It's not that the language is suited to AoC in any particular way, but it gives me confidence that my language can be used to solve real problems.
Neon Language: https://neon-lang.dev/ Some previous AoC solutions: https://github.com/ghewgill/adventofcode
I’ve always used AoC as my jump-off point for new languages. I was thinking about using Gleam this year! I wish I had more profound reasons, but the pipeline syntax is intriguing and I just want to give it a whirl.
That's a perfectly valid reason.
I tried AoC out one year with the Wolfram language, which sounds insane now, but back then it was just a "seemed like the thing to do at the time" and I'm glad I did it.
I have used Raku (Perl 6) with good results.
Common Lisp. Using 'iterate' package almost feels like cheating.
I did half a year in (noob-level) Haskell long ago, but I can't find the code any more.
The most mind-blowing thing for me was looking at someone's solutions in APL!
For me (and most of my friends/coworkers) the point of AoC was to write in some language that you always wanted to learn but never had the chance. The AoC problems tend to be excellent material for a crash course in a new PL because they cover a range of common programming tasks.
Historically good candidates are:
- Rust (despite its popularity, I know a lot of devs who haven't had time to play with it).
- Haskell (though today I'd try Lean4)
- Racket/Common Lisp/Other scheme lisp you haven't tried
- Erlang/Elixir (probably my choice this year)
- Prolog
Especially for those langs that people typically dabble in but never get a chance to write non-trivial software in (Haskell, Prolog, Racket), AoC is fantastic for really getting a feel for the language.
I am going to try and stick with Prolog as much as I can this year. Plenty of problems involve a lot of parsing and searching, both could be expressed declaratively in Prolog and it just works (though you do have to keep the execution model in mind).
I used MATLAB last year while I was re-learning it for work. It did okay, but we didn't have a license for the Image Processing Toolbox, which has a boatload of tools for the grid based problems.
My personal choice is always Common Lisp. Absolute swiss army knife.
With both AoC and Project Euler I like seeing how fast I can get my solution to run with SBCL. Finding all palindromic primes below a million in less than a second is pretty neat.
If I remember correctly, one of the competitive programming experts from the global leaderboard made his own language, specifically tailored to help solve AoC problems:
https://github.com/betaveros/noulith
Yes (or so I thought too!), but apparently no: https://blog.vero.site/post/noulith
(post title: "Designing a Programming Language to Speedrun Advent of Code", but starts off "The title is clickbait. I did not design and implement a programming language for the sole or even primary purpose of leaderboarding on Advent of Code. It just turned out that the programming language I was working on fit the task remarkably well.")
It's still very domain-oriented:
> I solve and write a lot of puzzlehunts, and I wanted a better programming language to use to search word lists for words satisfying unusual constraints, such as, “Find all ten-letter words that contain each of the letters A, B, and C exactly once and that have the ninth letter K.” I have a folder of ten-line scripts of this kind, mostly Python, and I thought there was surely a better way to do this.
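That sort of constraint really is a ten-liner in a scripting language, which explains the folder. A hypothetical Python version (the word-list path is an assumption about your system):

```python
# Find all ten-letter words containing each of A, B, C exactly once,
# with K as the ninth letter.
with open("/usr/share/dict/words") as f:  # hypothetical word list location
    words = [line.strip().upper() for line in f]

hits = [w for w in words
        if len(w) == 10
        and all(w.count(ch) == 1 for ch in "ABC")
        and w[8] == "K"]
print(hits)
```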
I'll choose to remember it was designed for AoC :-D
I've been using Elixir, which has been wonderful, mostly because of how amazing the built in `Enum` library is for working on lists and maps (since the majority of AoC problems are list / map processing problems, at least for the first while)
Enum really does feel like a superpower sometimes. I’ll knock out some loop and then spend a few mins with h Enum.<tab> and realise it could’ve been one or two Enum functions.
For some grid based problems, I think spreadsheets are very powerful and under-appreciated.
The spatial and functional problem solving makes it easy to reason about how a single cell is calculated. Then simply apply that logic to all cells to come up with the solution.
I've been using Elixir since day one, and it works pretty well :)
I plan to do it in Elixir too this year :)
Not sure if Kotlin is non-mainstream, but being able to use the vast Java libraries choice and a much nicer syntax are great boons.
Terse languages with great collection functions in the standard libraries and tail call optimization. Haskell, OCaml, F# ...
I usually do it with Ruby, which is well suited just like Python, but last year I did it with Elixir.
I think it lends itself very well to the problem set, the language is very expressive, the standard library is extensive, you can solve most things functionally with no state at all. Yet, you can use global state for things like memoization without having to rewrite all your functions so that's nice too.
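For comparison, the Python version of that convenience is a one-line decorator (made-up counting problem, just to show the pattern): memoization gets bolted on without restructuring the function at all.

```python
from functools import cache

# Hypothetical AoC-style counting problem: how many ways can `design`
# be tiled from `patterns`? The @cache decorator adds memoization
# without touching the function body.
@cache
def count_ways(design: str, patterns: tuple[str, ...]) -> int:
    if not design:
        return 1
    return sum(
        count_ways(design[len(p):], patterns)
        for p in patterns
        if design.startswith(p)
    )

print(count_ways("abcabc", ("a", "b", "c", "abc")))  # 4
```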
Elixir Livebook is my tool of choice for Advent of Code. The language is well-suited for the puzzles, I can write some Markdown if I need to record some algebra or my thought process, the notebook format serves as a REPL for instant code testing, and if the solution doesn't fit neatly into an executable form, I can write up my manual steps as well.
Crystal. Expressiveness and get-shit-done ability similar to the one of Ruby while being way faster in execution.
I've done some of the problems in R. Vectorized-by-default can avoid a lot of boilerplate. And for problems that aren't in R's happy path, I learn how to optimize in the language. And then I try to make those optimizations non-hideous to read.
My wife did one of the years in Matlab. Some of the problems translate very nicely into vectors and matrices.
Another vote for Haskell. It's fun and the parsing bit is easy. I do struggle with some of the 2D map style questions, which are simpler with a mutable 2D array in C++. It's sometimes hard to write throwaway code in Haskell!
Kotlin, because it’s a language I like
IMO it's maybe the best suited language to AoC. You can write it even faster than Python, has a very terse syntax and great numerical performance for the few challenges where that matters.
I have been learning lua to do a VR game in lovr, so I'll probably use that to get sharper with it.
I think that whatever you know well is the best choice.
The only way to win is with Brainfuck.
Or MUMPS.
The language doesn't really matter much. I think I'll keep using PHP, as in the years before.
I believe Eric has said he always makes his first solutions with Perl.
I tried to do it in emacs lisp one year. Made it about halfway :)
I’d say Clojure because it has great data manipulation utilities baked into the standard library.
Haskell is my favorite for advent of code. Finally give me an opportunity to think in a pure functional way.
I've been doing them in JS and Common Lisp. I recommend the problems as a way to learn new languages.
I respect the effort going into making Advent of Code but with the very heavy emphasis on string parsing, I'm not convinced it's a good way to learn most languages.
Most problems are 80%-90% massaging the input with a little data modeling which you might have to rethink for the second part and algorithms used to play a significant role only in the last few days.
That heavily favours languages which make manipulating strings effortless and which have very permissive data structures like Python dicts or JS objects.
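To make that concrete, a hypothetical but representative sketch in Python: the "massaging" step is usually a couple of comprehensions into a permissive structure, and only then does any algorithm start.

```python
# Hypothetical three-line input, but representative AoC front matter:
# turn raw text into a permissive structure before any algorithm appears.
raw = """\
#.#
.S#
##.
"""

# A dict keyed by (row, col) is the classic trick: grid.get(pos) off the
# edge is just None, so no bounds checks anywhere downstream.
grid = {
    (r, c): ch
    for r, line in enumerate(raw.splitlines())
    for c, ch in enumerate(line)
}
start = next(pos for pos, ch in grid.items() if ch == "S")
print(start)  # (1, 1)
```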
You are right. The exercises are heavy in one area. Still, they can be helpful for starting in a new language: you have to do file I/O, use data structures, and exercise all the flow control. You won't come out an ace, but it can help you get started.
I know people who impose some arbitrary extra restriction, like "no library at all", which can help with learning the basics of a language.
The downside I see is that suddenly you are solving algorithmic problems, which are sometimes not trivial, while at the same time struggling with a new language.
That's a hard agree, and a reason why anyone trying to learn Haskell, OCaml, or another language with a minimal/"batteries depleted" stdlib will suffer.
Sure, Haskell comes packaged with parser combinators, but for a new user, having to juggle immutability, IO, and monads all at the same time is almost certainly impossible.
Maybe not learning a new language from the ground up, but I think it is good training to "just write" within the language. A daily or twice-daily interaction. Setting up projects, doing the basic stuff to get things running, and reading up on the standard library.
Having smaller problems makes it possible to find multiple solutions as well.
Looks like after the AI automation rush last year, the leaderboard has been removed. Makes sense, a little sad that it was needed though.
I never liked the global leaderboard since I was usually asleep when the puzzles were released. I likely never would have had a competitive time anyway.
I never had any hope or interest to compete in the leaderboard, but I found it fun to check it out, see times, time differences ("omg 1 min for part 1 and 6 for part 2"), lookup the names of the leaders to check if they have something public about their solutions, etc. One time I even ran into the name of an old friend so it was a good excuse to say hi.
I love advent of code, and I look forward to it every year!
I've never stressed out about the leaderboard. Ive always taken it as an opportunity to learn a new language, or brush up on my skills.
In my day-to-day job, I rarely need to bootstrap a project from scratch, implement a depth first search of a graph, or experiment with new language features.
It's for reasons like these that I look forward to this every year. For me it's a great chance to sharpen the tools in my toolbox.
Some part of me would love a job that was effectively solving AoC type problems all the time, but then I'd probably burn out pretty quickly if that's all I ever had to do.
Sometimes it's nice to have a break by writing a load of error handling, system architecture documentation, test cases, etc.
> For me it's a great chance to sharpen the tools in my toolbox.
That's a good way of putting it.
My way of taking it a step further and honing my AoC solutions is to make them more efficient whilst ensuring they are still easy to follow, and to make sure they work on as many different inputs as possible (to ensure I'm not solving a specific instance based on my personal input). I keep improving and chipping away at the previous years' problems in the 11 months between Decembers.
I find it interesting how many sponsors run their own "advent of <x>". So far I've seen "cloud", "FPGA", and a "cyber security" one in the sponsors pages (although that last one is one I remember from last year).
I'm also surprised there are a few Dutch language sponsors. Do these show up for everyone or is there some kind of region filtering applied to the sponsors shown?
I love Advent of Code! I have used previous years' problems for my guest lectures to Computer Science students and they have all enjoyed those more than a traditional algorithmic lecture.
A little sad that there are fewer puzzles. But also glad that I'll see my wife and maybe even go outside during the second half of December this year.
I hope your wife is also glad!
Advent of code is such a fantastic event. I am honestly glad it's 12 days this year, primarily because I would only ever get to day 13 or 14 before it would take me an entire day to finish the puzzles! This would be my fourth year doing AoC. Looking forward to it :)
I plan on doing this year's in C++ because I have never worked with it, and AoC is always a good excuse to learn a new language. My college exams just ended, so I have a ton of free time.
Previous attempts:
- in Lua https://github.com/Aadv1k/AdventOfLua2021
- in C https://github.com/Aadv1k/AdventOfC2022
- in Go https://github.com/Aadv1k/AdventOfGo2023
Really hope I can get all the stars this time... Cheers, and Merry Christmas!
Other Advent Calendars for developers https://github.com/vimode/Advent-Calendars-For-Developers
I am still updating it for this year, so please feel free to submit a PR or share some here.
BTW the page mentions Alternate Styles, which is an obscure feature in firefox (View -> Page Styles). If you try it out, you will probably run into [0] and not be able to reset the style. The workaround is to open the page in a different tab, which will go back to the default style.
0: https://bugzilla.mozilla.org/show_bug.cgi?id=1943796
I'm actually pleasantly surprised to see a 2025 edition, last year being the 10th anniversary and the LLM situation with the leaderboard were solid indications that it would have been a great time to wrap it up and let somebody else carry the torch.
It's only going to be 12 problems rather than 24 this year, and there isn't going to be a global leaderboard, but I'm still glad we get to take part in this fun Christmas season tradition, and I'm thankful for all those who put in their free time so that we can get to enjoy the problems. It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect; I've always just enjoyed the puzzles, so as far as I'm concerned nothing was really lost.
>It's probably an unpopular stance, but I've never done Advent of Code for the competitive aspect
Is this an unpopular stance? Out of a dozen people I know that did/do AoC every year, only one was trying to compete. Everyone else did it for fun, to learn new languages or concepts, to practice coding, etc.
Maybe it helps that, because of timezones, in Europe you need to be really dedicated to play for a win.
>Is this an unpopular stance?
No, it's not. At most 200 people could end up on the global leaderboard, and there are tens of thousands of people who participate most days (though it drops off by the end, it's over 100k reliably for the first day). The vast majority of participants are not there for the leaderboard. If you care about competing, there are always private leaderboards.
A couple of the Slack/Discord groups I’m in do a local leaderboard with friends. It’s fun to do with a trusted group of people who are all in it for fun.
I'm also in a few local leaderboards, but I'm not "really" competing, it's more of a fun group thing.
Premises:
(i) I love Advent of Code and I'm grateful for its continuing existence in whatever form its creators feel like it's best for themselves and the community;
(ii) none of what follows is a request, let alone a demand, for anything to change;
(iii) what follows is just the opinion of some random guy on the Internet.
I have a lot of experience with competitions (although more on the math side than on the programming side), and I've been involved essentially since I was in high school, as a contestant, coach, problem writer, organizer, moving tables, etc. In my opinion Advent of Code simply isn't a good competition:
- You need to be available for many days in a row for 15 minutes at a very specific time.
- The problems are too easy.
- There is no time/memory check: you can write ooga-booga code and still pass.
- Some problems require weird parsing.
- Some problems are pure implementation challenges.
- The AoC guy loves recursive descent parsers way too much.
- A lot of problems are underspecified (you can make assumptions not in the problem statement).
- Some problems require manual input inspection.
To reiterate once again: I am not saying that any of this needs to change. Many of the things that make Advent of Code a bad competition are what make it an excellent, fun, memorable "Christmas group thing". Coming back every day creates community and gives people time to discuss the problems. Problems being easy and not requiring specific time complexities to be accepted make the event accessible. Problems not being straight algorithmic challenges add welcome variety.
I like doing competitions but Advent of Code has always felt more like a cozy problem solving festival, I never cared too much for the competitive aspect, local or global.
There are definitely some problems that have an indirect time/memory check, in that if you don't have a right-enough algorithm, your program will never finish.
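A toy Python illustration of that indirect check (hypothetical example): both functions below are correct, but on AoC-sized inputs only one of them ever finishes.

```python
from functools import cache

# Counting monotone lattice paths in a grid. The naive recursion is
# "correct" but exponential; the cached version is instant. Part 2
# inputs are typically sized so only the second ever finishes.
def paths_naive(r: int, c: int) -> int:
    if r == 0 or c == 0:
        return 1
    return paths_naive(r - 1, c) + paths_naive(r, c - 1)

@cache
def paths_cached(r: int, c: int) -> int:
    if r == 0 or c == 0:
        return 1
    return paths_cached(r - 1, c) + paths_cached(r, c - 1)

print(paths_cached(50, 50))  # instant
# paths_naive(50, 50) would need on the order of C(100, 50) ~ 1e29 calls
```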
> - The AoC guy loves recursive descent parsers way too much.
The vast majority (though not all) of the inputs can be parsed with regex or no real parsing at all. I actually can't think of a day that needed anything like recursive descent parsing.
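Right, in practice "parsing" is usually a single findall per line. For instance, on a line in the style of the 2022 day 15 input:

```python
import re

# One regex rips every integer out of the line; no grammar needed.
line = "Sensor at x=2, y=18: closest beacon is at x=-2, y=15"
sx, sy, bx, by = map(int, re.findall(r"-?\d+", line))
print(sx, sy, bx, by)  # 2 18 -2 15
```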
I too like the simple nature. If you care about highly performant code, you can always challenge yourself (I got into measuring timing in the second season I participated). Personally I prefer a world like this. Not everyone should have to compete on every detail (I know you stated that your points aren’t demands, I’m just pointing out my own worldview). For any given thing, there will naturally be people that are OK with “good enough”, and people who are interested to take it as far as they can. It’s nice that we can all still participate in this.
One could probably build a separate service that provides a leaderboard for solution runtimes.
I agree that it’s more of a cozy activity than a hardcore competition, that’s what I appreciate about it most.
> The AoC guy loves recursive descent parsers way too much.
LOL!!
I agreed with a lot of what you wrote, but also a lot of us strive for beautiful solutions regardless of time/memory bounds.
In fact, I’m (kind of) tired of leetcode flagging me for one ultra special worst-case scenario. I enjoy writing something that looks good and enjoying the success.
(Not that it’s bad to find out I missed an optimization in the implementation, but… it feels like a lot of details sometimes.)
Do you know of anything like AoC but that feels less contrived? I often spend the most time understanding the problem requirements because they are so arbitrary - like the worst kind of boardgame! Maybe I should go pick up some OSS tickets...
Being contrived, with puns or other weirdness, is kind of par for the course for this kind of problem. Almost every programming competition I've ever been to has those kinds of jokes.
Just a random example: https://open.kattis.com/problems/magicallights
The Kattis website is great, though a bit different: your program runs on their server without you ever seeing the input (you just get right/wrong back). It also imposes memory and time constraints which, for the more difficult problems, you must find your way around.
Take a look at Everybody Codes. It occurs in November instead of December, so this year is wrapping up. Like AoC, it is story based but maybe you'll find the problem extraction more to your liking.
https://everybody.codes/events
> The problems are too easy.
The problems are pretty difficult in my book (I never make it past day 3 or so). So I definitely would hope they never increase the difficulty.
I did a post [0] about this last year, and vanilla LLMs didn’t do nearly as well as I’d expected on advent of code, though I’d be curious to try this again with Claude code and codex
[0] https://www.jerpint.io/blog/2024-12-30-advent-of-code-llms/
LLMs, and especially coding focused models, have come a very long way in the past year.
The difference when working on larger tasks that require reasoning is night and day.
In theory it would be very interesting to go back and retry the 2024 tasks, but those will likely have ended up in the training data by now...
> LLMs, and especially coding focused models, have come a very long way in the past year.
I see people assert this all over the place, but personally I have decreased my usage of LLMs in the last year. During this change I’ve also increasingly developed the reputation of “the guy who can get things shipped” in my company.
I still use LLMs, and likely always will, but I no longer let them do the bulk of the work and have benefited from it.
Last April I asked Claude Sonnet 3.7 to solve AoC 2024 day 3 in x86-64 assembler and it one-shotted solutions for part 1 and 2(!)
It's true this was 4 months after AoC 2024 was out, so it may have been trained on the answer, but I think that's way too soon.
Day 3 in 2024 isn't a Math Olympiad tier problem or anything, but it seems novel enough, and my prior experience with LLMs was that they were absolutely atrocious at assembler.
https://adventofcode.com/2024/day/3
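For reference, part 1 of that day reduces to a single regex pass in a high-level language, which is what makes the assembly version notable:

```python
import re

# Part 1 of 2024 day 3: sum the products of every well-formed mul(X,Y).
# The example string and expected total (161) are from the puzzle page.
memory = "xmul(2,4)%&mul[3,7]!@^do_not_mul(5,5)+mul(32,64]then(mul(11,8)mul(8,5))"
total = sum(
    int(a) * int(b)
    for a, b in re.findall(r"mul\((\d{1,3}),(\d{1,3})\)", memory)
)
print(total)  # 161
```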
Last year, I saw LLMs do well on the first week and accuracy drop off after that.
But as others have said, it’s a night and day difference now, particularly with code execution.
Current frontier agents can one-shot all 2024 AoC puzzles, just by pasting in the puzzle description and the input data.
From watching them work, they read the spec, write the code, run it on the examples, refine the code until it passes, and so on.
But we can’t tell whether the puzzle solutions are in the training data.
I’m looking forward to seeing how well current agents perform on 2025’s puzzles.
They obviously have the puzzles in the training data, why are you acting like this is uncertain?
I know some folks were disappointed with there being 12 puzzles instead of 24 this year, but I never have time to finish anyway, so it makes no difference to me lol
I'm just glad they're keeping this going.
Exactly. I have always taken AoC as fun and a time to learn. But there is so much going on during December, and I do not enjoy doing more than one puzzle a day (it feels like hard work instead of fun). I usually spend weekends with my kids and family, and I am not willing to solve more puzzles during weekdays, so I am falling behind all the time. My plan was always to finish last year's puzzles first to enjoy the more interesting ones, but it always felt wrong. So I hope I will have time to finish everything this year :-) I envy the people with enough free time to go all in. I would love to be one of them, but there is so much going on everywhere that I have to split my time. Sorry, programming world, and especially computers :-D
It's really disheartening that the culture has changed so much someone would think doing AoC puzzles just for the fun of it is an unpopular stance :(
Doing things for the fun of it, for curiosity's sake, for the thrill of solving a fun problem - that's very much alive, don't worry!
Eliminating the leaderboard might help. By measuring it as a race, it becomes a race, and now the goal is the metric.
Maybe just have a cool advent calendar thingy like a digital tree that gains an ornament for each day you complete. Each ornament can be themed for each puzzle.
Of course I hope it goes without saying that the creator(s) can do it however they want and we’re nothing but richer for it existing.
That 'digital tree' idea is similar to how AoC has always worked. There's a theme-appropriate ASCII graphic on the problem page that gains color and effects as you complete problems. It's not always a tree, but it was in 2015 (the first year), and in several other years at least one tree is visible. https://adventofcode.com/2015
> By measuring it as a race, it becomes a race, and now the goal is the metric.
It becomes a race when you start seeing it as a race :) One can just... ignore the leaderboard
I've ignored the leaderboard for its entire existence, as the puzzles release at something like 4AM-5AM in my timezone; there's no point getting up 4 hours early, or staying awake 4 hours after bedtime, for some points on the internet.
Instead, getting gold stars for solving the puzzles is incentive enough, and can be done as a relaxing thing in the morning.
No matter what you do, as the puzzles get harder, you won't solve them in a day (or even a lifetime) if you don't come up with good algorithms/methods/heuristics.
I disagree. Having a leaderboard also leaks into the puzzle design. So the experience is different, even if you choose to ignore the leaderboard as a participant.
That’s also completely true and something I often say about gaming. You don’t like achievements? Just don’t do them. Your enjoyment shouldn’t be a function of how others interact with the product.
"Just ignore it" doesn't work, psychologically.
I never, in all my years of participating in AoC, took a look at the global leaderboard.
Even before LLMs I knew it was filled with results faster than you can blink.
So for some of us (from gut feeling, the vast majority) it was always just for fun. I usually spent until at least March finishing as much as I did each year.
Oh, I'm quite sure it does. In fact, it's a central thing in so much of psychology. The only difference is how you get there: some people can just ignore it, and for others it takes more effort.
I stopped staying up until midnight for the new problem set to be released and instead would do them in the afternoon. Even though I could compare my time to the leaderboard, simply not having the possibility of being on the board removed most of the comparison anxiety.
Lots of people play games while ignoring the achievements.
Many people do - well, did - AoC while ignoring the leaderboard.
This will be my first one! My primary languages are Typescript and Java. Looking forward to it!
While part of the fun is doing the daily tasks with your friends, you can still access the previous years and their challenges if you want to continue after advent!
Small anecdote:
In the IEEEXTREME university programming competition there are ~10k participating teams.
Our university has a quite strong competitive programming program, and the best teams usually rank in the top 100. Last year a team ranked 30th, and it wasn't even our strongest team (which didn't participate).
This year none of our teams was able to get in the top 1000. I would estimate close to 99% of the teams in the Top 1000 were using LLMs.
Last year they didn't seem to help much, but this year they rendered the competition pointless.
I've read blogs and seen videos of people who got onto the AoC global leaderboard last year without using LLMs, but I think this year it wouldn't be possible at all.
Man, those people using LLMs in competitive programming ... where's the fun in that? I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
I’m a very casual gamer but even I run into obvious cheaters in any popular online game all the time.
Cheating is rampant anywhere there’s an online competition. The cheaters don’t care about respecting others, they get a thrill out of getting a lot of points against other people who are trying to compete.
Even in the real world, my runner friends always have stories about people getting caught cutting trails and all of the lengths their running organizations have to go through now to catch cheaters because it’s so common.
The thing about cheaters in a large competition is that it doesn’t take many to crowd out the leaderboard, because the leaderboard is where they get selected out. If there are 1000 teams competing and only 1% cheat, that 1% could still fill the top 10.
Yeah. I was happy to see this called out in their /about
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
> I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
reminds me of something I read in "I’m a high schooler. AI is demolishing my education." [0,1] emphasis added:
> During my sophomore year, I participated in my school’s debate team. I was excited to have a space outside the classroom where creativity, critical thinking, and intellectual rigor were valued and sharpened. I love the rush of building arguments from scratch. ChatGPT was released back in 2022, when I was a freshman, but the debate team weathered that first year without being overly influenced by the technology—at least as far as I could tell. But soon, AI took hold there as well. Many students avoided the technology and still stand against it, but it was impossible to ignore what we saw at competitions: chatbots being used for research and to construct arguments between rounds.
high school debate used to be an extracurricular thing students could do for fun. now they're using chatbots in order to generate arguments that the students can just regurgitate.
the end state of this seems like a variation on Dead Internet Theory - Team A is arguing the "pro" side of some issue, Team B is arguing the "con" side, but it's just an LLM generating talking points for both sides and the humans acting as mouthpieces. it still looks like a "debate" to an outside observer, but all the critical thinking has been stripped away.
0: https://www.theatlantic.com/technology/archive/2025/09/high-...
1: https://archive.is/Lda1x
> high school debate used to be an extracurricular thing students could do for fun.
High school debate has been ruthless for a long time, even before AI. There has been a rise in the use of techniques designed to abuse the rules and derail arguments for several years. In some regions, debates have become more about teams leveraging the rules and technicalities against their opponents than organically trying to debate a subject.
It sucks that the fun is being sucked out of debate, but I guess a silver lining is that the abuse of these tactics helps everyone understand that winning debates isn't about being correct, it's about being a good debater. And a similar principle can be applied to the application of law and public policy as well.
Yeah, it's like bringing a ~bike~ motorcycle to your marathon. But if you can get away with it, there will always be people doing it.
Imagine the shitshow that gaming would be without any kind of anti-cheat measures, and that's the state of competitive programming.
Why is that strange? Competitive programming, as the name suggests, is about competing. If the rules allow it, not using an LLM is more like running the Tour de France on foot.
If the rules don't allow that and yet people do then well, you need online qualifiers and then onsite finals to pick the real winners. Which was already necessary, because there are many other ways to cheat (like having more people than allowed in the team).
I'm a bit surprised you can honestly believe that a competition of humans isn't somehow different if allowed to use solution-generators. Like using a calculator in an arithmetic competition. Really?
It's not much different than outlawing performance enhancing drugs. Or aimbots in competitive gaming. The point is to see what the limits of human performance are.
If an alien race came along and said "you will all die unless you beat us in the IEEE programming competition", I would be all for LLM use. Like if they challenged us to Go, I think we'd probably / certainly use AI. Or chess - yeah, we'd be dumb to not use game solvers for this.
But that's not in the spirit of the competition if it's University of Michigan's use of Claude vs MIT's use of Claude vs ....
Imagine if the word "competition" meant "anything goes" automatically.
It's a different kind of fun. Just like doing math problems on paper can be fun, or writing code to do the math can be fun, or getting AI to write the code to do the math can be fun.
They're just different types of fun. The problem is if one type of fun is ruined by another.
It can be a matter of values from your upbringing or immediate environment. There are plenty of places where they value the results, not the journey, and they think that people who avoid cheating are chumps. Think about that: you are in a situation where you just want to do things for fun but everyone around you will disrespect you for not taking the easy way out.
I believe the reason is that many still use CP for hiring, so people go into leetcode (or AdventOfCode) grind, sadly.
Weirdly, I feel a lot more accepting of LLMs in this type of environment than in making actual products. The point is doing things fast and correct enough, so in some ways an LLM is just one more tool.
With products I want actual correctness. And not something thrown away.
We’re starting to get to a point where the ai can generate better code than your average developer, though. Maybe not a great developer yet, but a lot of products are written by average developers.
Given what I understand about the nature of competitive programming competitions, using an LLM seems kind of like using a calculator in an arithmetic competition (if such a thing existed) or a dictionary in a spelling bee.
These contests are about memorizing common patterns and banging out code quickly. Outsourcing that to an LLM defeats the point. You can say it's a stupid contest format, and that's fine.
(I did a couple of these in college, though we didn't practice outside of competition so we weren't especially good at it.)
The goal of "actual projects" is also fast and correct enough though
In 1997, Deep Blue beat Garry Kasparov, the world chess champion. Today, chess grandmasters stand no chance against Stockfish, a chess engine that can run on a cheap phone. Yet chess remains super popular and competitive today, and while there are occasional scandals, cheating seems to be mostly prevented.
I don’t see why competitive debate or programming would be different. (But I understand why a fair global leaderboard for AOC is no longer feasible).
Online chess competitions actually spend quite a lot on preventing cheating, and even then it's a common talking point.
When I did competitions like these at uni (~10-15 years ago), we all used some thin-clients in the computer lab where the only webpages one could access were those allowed by the competition (mainly the submission portal). And then some admin/organizers would feed us and make sure people didn't cheat. Maybe we need to get back to that setup, heh.
Serious in-person competitions like the ICPC are still effective against cheating. The first phase happens in a limited number of venues and the computers run a custom OS without internet access. There are many people watching, so competitors don't use their phones, etc.
The Regional Finals and World Finals are in a single venue with a very controlled environment. Just like the IOI and other major competitions.
National High School Olympiads have been dealing with bigger issues because there are too many participants in the first few phases, and usually the schools themselves host the exams. There has been rampant cheating. In my country I believe the organization has resorted to manually reviewing all submissions, but I can only see this getting increasingly less effective.
This year the Canadian Computing Competition didn't officially release the final results, which for me is the best solution:
> Normally, official results from the CCC would be released shortly after the contest. For this year’s contest, however, we will not be releasing official results. The reason for this is the significant number of students who violated the CCC Rules. In particular, it is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help. As such, the reliability of “ranking” students would neither be equitable, fair, or accurate.
Available here: [PDF] https://cemc.uwaterloo.ca/sites/default/files/documents/2025...
Online competitions are just hopeless. AtCoder and Codeforces have rules against AI but no way to enforce them. A minimally competent cheater is impossible to detect. Meta Hacker Cup has a long history and is backed by a large company, but had its leaderboard crowded by cheaters this year.
Oof. I had a great time cracking the top 100 of Advent of Code back in 2020. Bittersweet to know that I got in while it was still a fun challenge for humans.
Going blind with uiua this year.
For those who think this is a typo, uiua [1] (pronounced "wee-wuh") is a stack-based array programming language.
I solved a few problems with it last year, and it is amazing how compact the solutions are. It also messes with your head, and the community surrounding it is interesting. Highly recommended.
[1] https://www.uiua.org/
Related:
Uiua – A stack-based array programming language - https://news.ycombinator.com/item?id=42590483 - Jan 2025 (6 comments)
Uiua: A minimal stack-based, array-based language - https://news.ycombinator.com/item?id=37673127 - Sept 2023 (104 comments)
I want to try doing it in assembly using fasm this year.
Could either be really recreational and relaxing... or painful and annoying.
Though I don't care even if it takes me all of next year, it's all in order to learn :)
Where my Gleamlins at? Who else is using Gleam this year?
I am! I love the design of Gleam in theory but keep bouncing off it so I’m interested in seeing if AoC will help me give it a fair shake.
Excited to see AOC back and I think it was a solid idea to get rid of the global leaderboard.
We (Depot) are sponsoring this year and have a private leaderboard [0]. We’re donating $1k/each for the top five finishers to a charity of their choice.
[0] https://depot.dev/events/advent-of-code-2025
Isn't a publicly advertised private leaderboard - especially with cash prizes - against the new guidance? Certainly the spirit of the guidance.
>What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I've made it so you can share a read-only view of your private leaderboard. *Please don't use this feature or data to create a "new" global leaderboard.*)
This kind of crap is the reason we can’t just enjoy an AoC anymore.
Agreed, reward participation, not results.
I don't think it should be a charity of their choice. I think it should have to be one of the top 5 most reputable charities in the world, like Doctors Without Borders or the Salvation Army.
I've been looking forward to this!
It's Kotlin and shik for me this year, probably a bit of both. And no stupid competitions; AoC should be fun.
https://gitlab.com/codr7/shik
I usually use multiple languages. OCaml and Go are always a pick. This year I think I want to try Gleam, and Haxe too.
I'd like to play; sadly, you can't without logging in with Google, GitHub, etc.
You can always create a throwaway account on one of those services. It's not that hard.
You could, but you shouldn't have to. If you want to sign up for XYZ, you need to sign up for BigCorp, you need to add your phone number to verify your account, etc.
No thanks.
You can log in with Reddit; you don't need a phone number for that one. And if you have an HN account you probably have a Reddit acct lol
The "etc" is pretty important here. You can log in using Reddit, and you can create a random throwaway Reddit account without filling in any other details (no email address or phone number required).
I believe they no longer allow new accounts without an email address.
It used to be that reddit had a user creation screen that looked like you needed to input an email address, but you could actually just click "Next" to skip it.
The last time I had cause to make a reddit account, they no longer allowed this.
You can use a garbage email address, you don't have to verify it
Having done my own auth I get why they do it this way. LLMs are already a massive problem with AoC, I imagine an anonymous endpoint to validate solutions would be even worse.
Having done auth myself, I can also understand why auth is being externalised like this. The site was flooded with bots and scrapers long before LLMs gained relevance and adding all the CAPTCHAs and responding to the "why are you blocking my shady CGNAT ISP when I'm one of the good ones" complaints is just not worth it. Let some company with the right expertise deal with all of that bullshit.
I wish the site had more login options, though. It's a tough nut to crack: pick a small, independent OAuth login service not under the control of a big tech company and you're basically DDoSing their account creation page for all of December. Pick a big tech company and you're probably not gaining any new users. You can't do decentralized auth because then you're just doing authentication DDoS with extra steps.
If I didn't have a github account, I'd probably go with a throwaway reddit account to take part. Reddit doesn't really do the same type of tracking Twitter tries to do and it's probably the least privacy invasive of the bunch.
Agreed. I think having an option for codeberg would be great
Hate to be that guy, but this is unreadably small text on mobile.
Does anyone know about any good sysadmin advent?
I propose Advent of Outage: just pull a random plug in the server room every day.
How about something like
Probably needs some external tool for the rnd function.

On a serious note, I just saw this: https://linuxupskillchallenge.org
That's hardly an "upskill" imo. You would know almost all of it after running a Linux server for a month or two.
https://sadservers.com/advent
I haven't set up an advent event (maybe I should) but you can do yourself a challenge a day from SadServers.com
I can't wait to try this year's challenges.
I've never done this before but honestly I am just turned off by the website and font being hard to read. I get that's the geek aesthetic or whatever, but it's a huge turn off for me.
There's a relevant FAQ with a solution for you:
https://adventofcode.com/2025/about#faq_highcontrast
Is it just me, or does it seem to be temporarily down?
It won't load for me right now
It's up for me (but the first puzzle won't be available until 15 hours from now).
Would love to know which exotic and niche languages people are going to use this year. I am personally thinking of trying out Crystal or Elixir.
I’m probably going to use rescript. Though I may do Gleam or Roc.
If you're feeling adventurous and would like to try Roc's new compiler, I put together a quick tutorial for it!
https://gist.github.com/rtfeldman/f46bcbfe5132d62c4095dfa687...
It is quite odd to call this advent when it ends halfway into the month rather than on Christmas. But I will have fun doing them either way
It may have made more sense to start on Christmas Day, matching the Twelve Days of Christmas [1].
[1] https://en.wikipedia.org/wiki/Twelve_Days_of_Christmas
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
And yet I expect the whole leaderboard to be full of AI submissions...
Edit: No leaderboard this year, nice!
I am so glad there is no leaderboard this year. Making it a competition really is against the spirit of advent calendars in general. It’s also not a fair competition by default simply due to the issue of time zones and people’s life schedules not revolving around it.
There are plenty of programming competitions and hackathons out there. Let this one simply be a celebration of learning and the enjoyment of problem solving.
I agree with the first point, but the second point feels irrelevant. Yeah, people's life schedules don't revolve around it, but that doesn't mean it shouldn't be a competition. Most people who play on chess.com don't have lives that revolve around it, but that doesn't mean that chess.com should abolish Elo rankings.
Chess doesn't rank people based on how quickly they complete a puzzle after midnight EST (UTC-5). For people in large parts of Asia, midnight EST translates to late morning / early afternoon. This means someone in Asia can complete each AoC puzzle during daylight hours whereas someone in eastern North America will have to complete the puzzle in the middle of the night.
The global leaderboard encouraged bad behavior against the entire project. Including criminal things like attempting to ddos the site.
AFAIK your Elo score doesn't depend on your timezone
Yea fully agree. The leaderboards always made me feel bad.
Not this time:
> The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard.
Depends how you look at it. Some of my colleagues rave about Claude Code, so I was thinking about trying it out on these puzzles. In that sense it is "going to the gym", just for a different thing. Since I do AoC every year, I feel like it'll give me a good feel for Claude Code compared to my baseline. And it's not just "prompting", but figuring out a workflow with tests and brainstorming and iteration and all that. I guess if the LLM can just one-shot every puzzle that's less interesting, but I suppose it would be good to know it can do that...
It 100% can do that. LLMs are trained on an unfathomable amount of data. Every AoC puzzle can be solved by identifying the algorithm behind it. It's Leetcode in a friendlier and more festive spirit.
I mean they're great programming tests, for both people and AI I'd argue - like, it'd be impressive if an AI can come up with a solution in short order, especially with minimal help / prompting / steering. But it wouldn't be a personal achievement, and if it was a competition I'd label it as cheating.
> And yet I expect the whole leaderboard to be full of AI submissions...
There will be no global leaderboard this year.
i don't think there is a global leaderboard this year. just private ones.
Looking forward to it but also sad that it is "only" 12 puzzles, but I completely respect Eric's decision to scale it back.
I've got 500 stars (i.e. I've completed every day of all 10 previous years) but not always on the day the puzzles were available, probably 430/500 on the day. (I should say I find the vast majority of AoC relatively easy as I've got a strong grounding in both Maths and Comp Sci.)
First of all I only found out about AoC in 2017 and so I did 2015 and 2016 retrospectively.
Secondly, I can keep up with the time commitments required up until about the 22nd-24th (which is when I usually stop working for Christmas). From then on, time with my wife/kids takes precedence. I'll usually wrap up the last bits sometime from the 27th onwards.
I've never concerned myself with the pointy end of the leaderboards due to timezones as the new puzzles appear at 5am local time for me and I've no desire to be awake at that time if I can avoid it, certainly not for 25 days straight. I expect that's true of a large percentage of people participating in AoC too.
My simple aim every day is that my rank for solving part 2 of a day is considerably lower than my rank for solving part 1.
(To be clear, even if I was up and firing at 5am my time every day I doubt I could consistently get a top 100 rank. I've got ten or so 300-1000 ranks by starting ~2 hours later but that's about it. Props to the people who can consistently appear in the top 100. I also start most days from scratch whilst many people competing for the top 100 have lots of pre-written code to parse things or perform the common algorithms.)
I also use the puzzles to keep me on my toes in terms of programming and I've completed every day in one of Perl, C or Go and I've gone back and produced solutions in all 3 of those for most days. Plus some random days can be done easily on the command-line piping things through awk, sed, sort, grep, and the like.
The point of AoC is that everyone is free to take whatever they want from it.
Some use it to learn a new programming language. Some use it to learn their first language and only get a few days into it. Some use it to make videos to help others on how to program in a specific language. Some use it to learn how/when to use structures like arrays, hashes/maps, red-black trees, etc, and then how/when to use classic Comp Sci algorithms like A* or SAT solvers, Dijkstra's, etc, all the way to some random esoteric things like Andrew's monotone chain convex hull algorithm for calculating the perimeter of a convex hull. There are also the mathsy type problems often involving Chinese Remainder Theorem and/or some variation of finite fields.
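Since the Chinese Remainder Theorem keeps coming up, here's a minimal Python sketch of the form it usually takes (assuming pairwise-coprime moduli, which the puzzles tend to guarantee):

```python
from math import prod

# Minimal CRT for pairwise-coprime moduli: find t with
# t % m_i == r_i for every i.
def crt(residues: list[int], moduli: list[int]) -> int:
    m = prod(moduli)
    total = 0
    for r_i, m_i in zip(residues, moduli):
        n_i = m // m_i
        total += r_i * n_i * pow(n_i, -1, m_i)  # pow(x, -1, m) is the modular inverse
    return total % m

print(crt([2, 3, 2], [3, 5, 7]))  # 23: 23%3==2, 23%5==3, 23%7==2
```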
My main goal is to come up with code that is easy to follow and performs well as a general solution rather than overly specific to my individual input. I've also solved most years with a sub 1 second total runtime (per year, so each day averages less than 40msec runtime).
Anyway, roll on tomorrow. I'll get to the day 1 problem once I've got my kid up and out the door to go to school as that's my immediate priority.
Personally, I never understood the grind of Advent of Code. This is exactly the kind of stuff I am grateful to be able to delegate to an LLM.
Why would you delegate to an LLM something that is supposed to be fun? THIS specifically is the kind of stuff you shouldn't delegate to an LLM.
Because they are not a hacker, but post on hackernews every day.
Well, my point, if it wasn’t clear, was that I simply don’t find those problems fun.
I enjoy programming a lot, but most of it comes from things like designing APIs that work well and that people enjoy using, or finding things that allow me to delete a ton of legacy code.
I did try to do Advent of Code many times. Usually I get bored halfway through reading the first problem, and when I finally get through it, I realize that these usually involve tradeoffs that are annoying to make in terms of memory/CPU usage, plus several edge cases to deal with.
it really feels more like work than play.
I never understood the craze for "Advent of code". Already at this time of the year the last thing I want to do is code even more.
Well, some people like coding and logic puzzles. Especially in this raw form, where you can forget all the noise you encounter while coding professionally, with its many hoops and responsibilities.
People like different things
and dislike different things
I mean people use the internet to find people who like similar things.
Why would you use a site called HackerNews if you are not a hacker? No idea.
I code for fun, even in December.
I agree. Didn't these puzzles ruin interviewing for many years now? AI came along and they're still doing it. Some things will needlessly drag on before they die, I guess.
By the same token, AI came along and we all still have intelligence; is that needless too, eh? I mean, people reading and writing stuff has nothing to do with AI. I don't see how some people see everything as a zero-sum game.
All AI is doing is solving these puzzles, which proves they don't need any form of intelligence. You're wrong for associating AI with human intelligence. It will never happen. It might be faked once, like the moon landing, but that's it.
How do they ruin interviewing? The whole point of these puzzles is that they’re meant to be fun to solve, not a means to an end, but enjoyable for what they are.
Tell HR, they don't seem to get it
Anyone doing this in OpenGL?
I'm not sure I understand this. Most puzzles are number-crunching and have very little to do with graphics (maybe one or two), so no, usually OpenGL isn't used AFAIK.
Of course, folks may use it to visualise the puzzles but not to solve them.
You definitely could do it all in shaders. People have done crazier things.
I support the no global leaderboard. I was in 7th place last year but quickly got bored maintaining the aggressive AI pipeline required to achieve that. If I wanted to maintain pipelines I'd just do work, and there will never be a good way to prevent people from using AI like this. Advent of Code should be fun, thank you for continuing to do it. I'm looking forward to casually playing this year!
It was pretty boring trying to place against aggressive AI pipelines like yours throughout the explicit requests not to use them[1]. I’m sorry to hear it became boring for you too.
[1] https://web.archive.org/web/20241201070128/https://adventofc...
I mean, everyone else was using them too, how can you not? That was the name of the game if you wanted to be competitive in 2024. Not using them would be like trying to do competitive pro cycling without steroids, basically impossible.
Saying everyone else is cheating is not a valid excuse for cheating. It's why Armstrong became a pariah, even though he and everyone else was EPO doping.
Gotta love the classic "everyone else is cheating too"
It’s more like playing a casual tournament at your local chess club without an engine.
i felt like that 2-3 months ago, at the east new zealand chess games, when i forgot my anal beads.
"It was boring to run a cycling contest on a motorbike."
Although there are now rumours of hidden motors in Tour de France bicycles. So, I guess it's the same.
The FAQ was pretty clear about not using AI to get on the leaderboard last year.
So, publicly admitting that you broke the rules and are part of the reason we can't have nice things. Why?
this is why we can't have nice things