Koka [1] has “algebraic effect handlers” which are pretty cool. An alternative to monads for modeling effects in a sound way.
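Sketching the idea (not Koka syntax): a computation "performs" an effect, and an enclosing handler interprets it, supplies a result, and resumes the computation. A rough one-shot simulation with Python generators, where all the names (`Ask`, `handle`, `computation`) are made up for illustration:

```python
# A rough simulation of effect handlers using Python generators.
# All names here (Ask, handle, computation) are illustrative, not Koka's API.

class Ask:
    """An 'effect': a request for a value from the ambient handler."""

def handle(gen, handler):
    """Run generator `gen`, answering each performed effect via `handler`."""
    try:
        request = next(gen)                       # first performed effect
        while True:
            request = gen.send(handler(request))  # resume with the handler's answer
    except StopIteration as done:
        return done.value                         # computation finished normally

def computation():
    a = yield Ask()   # "perform" an effect; the handler decides the answer
    b = yield Ask()
    return a + b

result = handle(computation(), lambda req: 21)
print(result)  # 42
```

Koka's handlers additionally give you a type-and-effect system that tracks which effects a function may perform, and resumable continuations as first-class values; the generator trick above only captures the resume-with-a-value part.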

Lots of advances in the theory of gradual typing. See the work by Ben Greenman et al.
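One core idea in that line of work is inserting runtime casts at the boundary between typed and untyped code, with "blame" assigned to the right party when a cast fails. A minimal sketch in Python, where the names (`cast`, `inc`, the blame labels) are made up for illustration:

```python
# Hedged sketch of gradual typing's key mechanism: a runtime cast inserted
# at the boundary between untyped and typed code, with blame tracking.

def cast(value, expected_type, blame_label):
    """Check a dynamically-typed value entering typed code; blame on failure."""
    if not isinstance(value, expected_type):
        raise TypeError(f"cast failed, blaming {blame_label}: "
                        f"expected {expected_type.__name__}, "
                        f"got {type(value).__name__}")
    return value

def inc(n: int) -> int:
    # Typed code: may assume its argument really is an int,
    # because the boundary cast already checked it.
    return n + 1

# An untyped caller crosses the boundary through a cast.
print(inc(cast(41, int, "caller")))  # 42

try:
    inc(cast("41", int, "caller"))   # wrong type: the cast fires, blaming the caller
except TypeError as e:
    print(e)
```

Much of the recent theory (and Greenman et al.'s empirical work) is about the design space of these checks: how deep they are, what they cost at runtime, and whether the blame they assign is sound.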

That’s just stuff from my neck of the woods (PL). I would recommend reading stuff from POPL, ICFP, or OOPSLA.

[1]: https://koka-lang.github.io/koka/doc/index.html

I'm not really up to date, but a couple recent papers that I found interesting were:

In theoretical distributed computing, The Space Complexity of Consensus from Swap [0] solves a problem that had been open for a couple of decades, and won Best Paper at PODC 2022.

In quantum complexity theory, MIP^* = RE [1] was a really big deal when it was published in 2020. It got a (relative) ton of press coverage, and there are lots of articles and blog posts available that give a high-level overview of the result and the techniques used. I like this one [2] from Quanta Magazine.

[0] https://dl.acm.org/doi/abs/10.1145/3519270.3538420

[1] https://arxiv.org/abs/2001.04383

[2] https://www.quantamagazine.org/landmark-computer-science-pro...

Maximum Flow and Minimum-Cost Flow in Almost-Linear Time

https://news.ycombinator.com/item?id=31675015

https://arxiv.org/abs/2203.00671

Best to look at recent papers at arxiv https://arxiv.org/list/cs/recent

Except that for someone outside CS theory it will be hard to tell what is routine and what is ground-breaking. The proceedings for the recent editions of top theory conferences are here:

- FOCS (Foundations of Computer Science) https://www.computer.org/csdl/proceedings/focs/2022/1JtvJWOs...

- STOC (Symposium on Theory of Computing) https://dl.acm.org/doi/proceedings/10.1145/3519935

- SODA (Symposium on Discrete Algorithms) https://epubs.siam.org/doi/book/10.1137/1.9781611977554

They're mostly paywalled but conference or full versions can usually be found on arxiv.org, and there are successful efforts to host paywalled papers illegally for the public good.

Personally, I am impressed by the progress on approximate counting and sampling. Developments include localization schemes (FOCS), average-case entropic independence (FOCS), the sampling/counting Lovász local lemma (FOCS and SODA), and approximate counting with global constraints (STOC).
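For a feel of the counting-via-sampling connection these papers study, here is a deliberately naive Monte Carlo estimator for the number of independent sets in a small graph (the graph and names are made up for illustration; the real algorithms use far cleverer samplers):

```python
import random

# Naive Monte Carlo approximate counting of independent sets in a 4-cycle.
# The exact count is 7; this only illustrates the counting/sampling link.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

def is_independent(s):
    """True if no edge has both endpoints in the vertex set s."""
    return not any(u in s and v in s for u, v in edges)

random.seed(0)
trials = 100_000
hits = sum(
    is_independent({i for i in range(n) if random.random() < 0.5})
    for _ in range(trials)
)
# fraction of sampled subsets that are independent, times total subsets
estimate = hits / trials * 2 ** n
print(round(estimate, 2))  # close to 7
```

This brute-force sampler breaks down as soon as independent sets become a vanishing fraction of all subsets; the works above are precisely about samplers and estimators that avoid that blow-up.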

You can have a look at Baez's work on computational category theory:

https://www.azimuthproject.org/azimuth/show/HomePage

https://johncarlosbaez.wordpress.com/

Lots of cool stuff there.

> computational category theory

There's also the compositional game theory which is based on it:

https://arxiv.org/abs/1603.04641

HVAC

Lisp is making a comeback

Lisp has been making a comeback since the 60s

> Lisp has been making a comeback since the 60s

That is very true, except in the form of other languages.

It is often said about Lisp (and also about Erlang) that other languages, at some point in their lifetime, implement some features of Lisp, but in an inferior way :)

Reminds me of a saying about GaAs semiconductors - “they’re the technology of the future, and they always will be.”

Most bleeding edge stuff is going to be designing the compute pipelines/hardware for ML.

I could be wrong, but I think the OP was asking about "CS Theory" meaning the branch of computer science that deals with the theory of computation. Turing Machines, Automata, The Halting Problem, etc. Although those things are all old. I'm not sure what's new, so I can't answer his question.

That isn’t theoretical computer science. In this day and age, that’s firmly in the hands of engineers.

And while AI might be the current big thing, I don’t imagine it will be at the forefront of engineering. I think quantum computing takes the prize there.

This is really not TCS. I’d say ML foundations count as part of TCS: things like VC dimension, learning theory, and all kinds of complexity.

Programming language (PL) research in Haskell and Omega has a lot of bleeding-edge theoretical CS stuff too.