Does anyone else feel the desperation oozing out when they read these kinds of posts or browse LinkedIn? I sort of get the same frantic, desperate vibe. Cool if you own it, though, I guess.
I think humans are hard-wired to distrust and dislike certain forms of self-promotion because of the risk of false signalling. In small tribes of apes everybody knows everyone, so trumpeting one’s accomplishments is basically trying to change people’s perception of something without changing the actual underlying signal.
The higher-status strategy almost always ends up being countersignaling, and “trying too hard” is basically the opposite of countersignaling. The problem (this is something I am actively learning in my work) is that the way society is set up right now requires you to participate in the “attention economy” and build your brand/reputation in a group far larger than an ape-sized tribe. Because you’re not established in those circles a priori, you have to start with signaling instead of countersignaling.
Basically, you have to have a PR team and win the hearts and minds of The Atlantic and Forbes before you can make a public spectacle of your ketamine habits. If you skip straight to that you’re just an insecure loser with a drug problem. But after everybody knows you and what you’ve done then you can establish yourself as a tortured artist, which is socially “better” than being just a regular artist.
As someone who attempted it: it was such a bad challenge. First, you can get close to optimal pretty easily; in the first challenge it was easy to solve exactly and optimally using DP. Second, that doesn't matter, because the optimal solution has large deviation depending on RNG, so you just need to submit the challenge multiple times until you get lucky.
That's why a challenge problem should take in code and run it against hidden cases on their server, revealing the results post-contest, rather than allowing submissions via API call.
The ones at the top of the board, especially on the first challenge, not only had to have a good algorithm but also had to get lucky. And of course, if you submit too many times to get just as lucky as they did, they ban you. Stupid contest.
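The variance point above can be illustrated with a toy simulation. This is a hypothetical simplification, not the actual contest rules or scoring: assume a venue of 100 slots, a single attribute each arrival has with probability 0.5, a quota of 60 admitted people with that attribute, and a score equal to the number of rejections (lower is better). Even with a fixed, quota-safe policy, the score spread across seeds shows why resubmitting until you "get lucky" works.

```python
import random
import statistics

def run_once(capacity=100, quota=60, p=0.5, seed=None):
    """One simulated game: admit/reject arrivals until the venue is full.

    Each arrival has the required attribute with probability p. We must
    end with at least `quota` admitted people having the attribute.
    Policy: admit everyone with the attribute; admit others only while
    the remaining slots still exceed the remaining quota shortfall.
    Score = number of rejections (lower is better).
    """
    rng = random.Random(seed)
    admitted = has_attr = rejections = 0
    while admitted < capacity:
        person_matches = rng.random() < p
        slots_left = capacity - admitted
        shortfall = quota - has_attr
        if person_matches:
            admitted += 1
            has_attr += 1
        elif slots_left > shortfall:  # room to spare for a non-matching person
            admitted += 1
        else:
            rejections += 1
    return rejections

# Same policy, many seeds: the score distribution is wide, so repeated
# submissions effectively sample the lucky tail of this distribution.
scores = [run_once(seed=s) for s in range(2000)]
print(f"mean={statistics.mean(scores):.1f}  "
      f"stdev={statistics.stdev(scores):.1f}  "
      f"min={min(scores)}  max={max(scores)}")
```

The policy maintains the invariant that remaining slots never drop below the remaining quota shortfall, so the quota is always met; all randomness goes into how many rejections that costs on a given run.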
Yeah, also would be fun to see the code behind the solutions.
>PS: And the kicker? Claude wrote this entire article too. I just provided the direction and feedback. The AI that helped me solve the Berghain Challenge also helped me tell you about it.
>Meta-collaboration all the way down.
Would've preferred to know this going in.
Sorry, will move it up. For me it was: hey, let’s loop it once more over the repo and let it write about it. More like an archaeological dig to unearth the process, as I wasn’t involved in it, especially not the algorithm decisions and later optimizations.
It’s funny, I immediately thought it was an LLM, but I was fairly confident it was ChatGPT. I suppose the styles are converging more than I thought: too long, lists, “not just X, it’s Y”, “here’s the X”…
I suspected that, but I couldn't read to the end. The article is confusing and all over the place.
The process was confusing. It took at least 100 sessions for sure; I haven’t checked the logs, as I was running it in a VM. And every time it tries to build something while forgetting all of it, then reconstructing bits and pieces just to continue. The article was written over at least 3 context lengths. But it does reconstruct well the iteration on the AI agent’s side.
I mean, it only takes a few paragraphs of filler text, hyperbole, "catchy" juxtapositions, and loose logical threads to raise suspicions.
But yeah, I would also like these two minutes of my life back.
Well, as someone who has also generated some text with LLMs, at least I learned that it's still possible to generate truly excruciating stuff with the "right" model and prompt.
> Why This Challenge Will Make You Question Everything
Headlines like this make an article an annoying read.
I think chatgpt is their writing partner too. Maybe the other way around.
That's disclosed at the end of the "article".
Wow. I thought the tone of TFA was infuriating. Now I know why (I quit in disgust before reaching the end where he clarifies this).
I guess AI-slop in writing will be the norm now.
(I wonder if Claude repeatedly quoting itself saying "you're absolutely right!" was edited in by the human author, or is yet another case of unintentional humor.)
Nope, pure Claude there, during editing itself.
Thanks for replying. Now I feel I must apologize for my rudeness.
I think the experiment itself was valuable, you did find something interesting.
I just cannot help it, I hate reading AI slop, and I'm depressed that this seems to be the future of internet writing.
No reason, all fine. Honestly, it is very hard to find time to write anything down, let alone a 15k-word deep analysis of the process like this. And LLMs are ideal log keepers. I also did a bunch of similar experiments, like doing a research paper from data to code to writing. We can only expect things to get better from here.
They admitted it was Claude at the end.
That should be moved toward the top; will do it manually. This was a 98% loop, albeit a very messy one. More an exploration of the process itself; the biggest value is the meta-learning. Ideally we should save traces of the prompts and the process itself as a verifiable or observable artifact, instead of the code itself. At the end of the day, outcome over code.
The actual technical problem was interesting, but the AI-generated writing is terrible. It's like listening to a sales pitch that just won't end.
Slightly off-topic, but the challenge might be leaking emails: I just got an email from `alfredw@listenlabs.fyi` (note the TLD):
> I'd like to connect you with our team to hear about your solution.
> 1) can you let me know availability for a conversation?
> 2) please share some basic information ie full name, Linkedin, portfolio, CV.
> 3) are you interested in onsite SF?
But wasn’t that the idea? They built the game as a hiring process.
the address is fake though
Not necessarily; it redirects to their main domain. This is typically done when sending mass emails, so as not to pollute the main domain's sender reputation.
This reads like it was written by an LLM:

> Here’s what Listen did that was pure genius:

It was. The author admits at the end Claude wrote the entire article.
Note the self-parodic humor in Claude quoting itself saying "you're absolutely right!". The author claims they didn't direct this; it truly is how Claude "sees" itself!
This blog post sounds like an LLM, and of course it was written by one (as admitted at the end).