I'm surprised at how normal some of the unseen words are. I expected them to all be archaic or niche, but many are pretty reasonable: 'congregant', 'definer', 'stereoscope'.
For what it's worth, there's 1.7bn posts on Bluesky according to this: https://bsky.jazco.dev/stats
The dictionary site has only checked 4,920,000 posts, which is about 0.29% of all messages.
It now claims to have checked 11 million posts but only seen "the" 16 thousand times. I'm not sure its numbers are entirely reliable.
It's likely that the commenter has read less than 5 million posts worth of text though. So perhaps this still points to a lack of diversity in content.
You got me wondering. Supposing the average post is 10 words, and a typical page of text is 250 words, that would only be ~50 pages of text a day over the last 10 years. Which I don't think I manage, but over 20 years I am probably in that window.
dentel, exclaustrations, gryding, datolite, frabbing?
I can't keep up with all these new Pokemon.
I noticed one of the cited bluesky posts was all in French, so one might argue that technically it didn't find the English word "mouch", but rather a different French word that happens to be spelled the same. But trying to sort that out seems unrealistically challenging. "Mouch" is only in the dictionary as an alternative spelling to mooch, so probably a pretty rare word to see in English.
Bluesky lets you select the language your post is written in before posting it and it is attached as metadata to the skeet. I guess the backend for this only searches posts in English, but it's possible the dataset is not 100% accurate due to some users forgetting to switch language before posting.
Is this not working, or am I missing something? It just shows 0 words seen for me. Firefox on a PC.
You may need to allow scripts from the domain avibagla.com; it shows 0 when the scripts are blocked.
All the scripts are ERR_SSL_PROTOCOL_ERROR for me in Chrome, I'm assuming because of a corporate firewall.
Indeed, all I get in Firefox are CORS issues
ugh, it ought to be building the results on the server and serving up static pages.
But it updates live...
It could do both...
then go build it…
For me it took a minute to start loading data and switch from just showing 0.
Same... maybe you need a Bluesky account, which I don't have.
It doesn't... I can open it in a private browsing window.
It's working fine for me on Firefox
I'm very curious how this works on the backend. I realize it uses Bluesky's firehose to get the posts, but I'm more curious about how it's checking whether a post contains any of the available words. Any guesses?
Hey! This is my site - it's not all that complex; I'm just using a SQLite DB with two tables - one for stats, the other for all the words, which is just word | count | first use | last use | post (see the sketch below).
I... did not expect this to be so popular
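A minimal sketch of what that two-table setup might look like with Python's sqlite3. The table and column names here are guesses from the word | count | first use | last use | post description above, not the site's actual schema:

```python
import sqlite3

conn = sqlite3.connect("dictionary.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS stats (
    key   TEXT PRIMARY KEY,   -- e.g. 'posts_checked'
    value INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS words (
    word      TEXT PRIMARY KEY,
    count     INTEGER NOT NULL DEFAULT 0,
    first_use TEXT,           -- timestamp of the first sighting
    last_use  TEXT,           -- timestamp of the most recent sighting
    post      TEXT            -- the post it was last seen in
);
""")

def record_sighting(word: str, post_uri: str, seen_at: str) -> None:
    # Upsert: insert on first sighting, otherwise bump the count.
    conn.execute("""
        INSERT INTO words (word, count, first_use, last_use, post)
        VALUES (?, 1, ?, ?, ?)
        ON CONFLICT(word) DO UPDATE SET
            count    = count + 1,
            last_use = excluded.last_use,
            post     = excluded.post
    """, (word, seen_at, seen_at, post_uri))
    conn.commit()
```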
What is your source dictionary to compare to? Seems kind of small. Also, how are you handling inflected forms?
https://github.com/words/an-array-of-english-words
Using this - it was a combo of "covered enough" for the bit and easy to use.
Also, since I'm tracking every word (technically a better name for this project would be The Bluesky Corpus), all inflected forms are different words, which aligns with my thinking.
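For what it's worth, that package is essentially one big JSON array of lowercase words, so loading it as a membership set is a couple of lines (assuming the repo's index.json file is local; the filename is my assumption):

```python
import json

# index.json from the an-array-of-english-words repo (assumed filename).
with open("index.json") as f:
    words = set(json.load(f))

print(len(words))        # roughly 275k entries
print("mouch" in words)  # the rare alternative spelling discussed above
```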
What are the table sizes?
And what ingress bandwidth do you have?
DB is currently 58MB (damn lol)
Ingress is actually pretty manageable, ~900kbps
You can probably fit all words under 10-15MB of memory, but memory optimisations are not even needed for 250k words...
Trie data structures are memory-efficient for storing such dictionaries (2-4x better than hashmaps), although not as fast as hashmaps for retrieving items. You can hash the top 1k most common words and check the rest using a trie.
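For illustration, a bare-bones trie with the hybrid lookup described above. Note that in Python a dict-of-dicts trie carries heavy per-object overhead, so the memory win really applies to compact trie implementations in lower-level languages; treat this purely as a sketch of the structure:

```python
class TrieNode:
    __slots__ = ("children", "terminal")

    def __init__(self):
        self.children = {}     # char -> TrieNode
        self.terminal = False  # True if a word ends at this node

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word: str) -> None:
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.terminal = True

    def __contains__(self, word: str) -> bool:
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.terminal

# Hybrid lookup: a plain set for the most frequent words, trie for the rest.
common = {"the", "and", "of"}  # imagine the top 1k here
rare = Trie(["mouch", "sluices", "datolite"])

def is_known(word: str) -> bool:
    return word in common or word in rare
```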
The most CPU-intensive task here is text tokenizing, but there are a ton of optimized options developed by orgs that work on LLMs.
I very much hope that the backend uses one of the Bluesky Jetstream endpoints. When you only subscribe to new posts, it provides a stream of around 20 Mbit/s last time I checked, while the firehose was ~200 Mbit/s.
yes it does!
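For reference, a minimal Jetstream consumer sketch using the third-party websockets library. The hostname is one of Bluesky's public Jetstream instances and the field names follow Jetstream's JSON event format as I understand it; verify both before relying on them:

```python
import asyncio
import json

import websockets  # pip install websockets

# Public Jetstream instance, filtered to post records only.
URL = ("wss://jetstream2.us-east.bsky.network/subscribe"
       "?wantedCollections=app.bsky.feed.post")

async def consume() -> None:
    async with websockets.connect(URL) as ws:
        async for raw in ws:
            event = json.loads(raw)
            commit = event.get("commit")
            if commit and commit.get("operation") == "create":
                # New post: the record carries the post text.
                text = commit["record"].get("text", "")
                print(text[:80])

asyncio.run(consume())
```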
Probably just a big hashtable mapping word -> the number of times it's been seen, and another hashset of all the words it hasn't seen. When a post comes in, you hash all the words in it, look each one up in the hashtable, increment its count, and if the old value was 0 remove it from the hashset.
250k words at a generous 100 bytes per word is only 25MB of memory...
Maybe I'm being naive, but with only ~275k words to check against, this doesn't seem like a particularly hard problem. Ingest post, split by words, check each word via some db, hashmap, etc... and update metadata.
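Both comments above boil down to the same pipeline; here is a toy version, with a naive regex tokenizer standing in for whatever the site actually uses:

```python
import re

# Naive tokenizer: lowercase runs of letters, optionally with an apostrophe.
WORD_RE = re.compile(r"[a-z]+(?:'[a-z]+)?")

# Tiny stand-in for the ~275k-word dictionary.
counts = {w: 0 for w in ("the", "mouch", "sluices", "datolite")}
unseen = set(counts)

def ingest(post_text: str) -> None:
    for word in WORD_RE.findall(post_text.lower()):
        if word in counts:
            counts[word] += 1
            unseen.discard(word)  # no-op once the word has been seen

ingest("We saw the sluices at Wheal Martyn.")
print(counts["sluices"], "sluices" in unseen)  # 1 False
```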
I think the cool part is watching words go brrr.
Someone just got a double-combo:
> We just visited wheal Martyn museum in Cornwall, nice scones and a waterwheel, they also have a lot of gutters, sluices and pipes and a bit of a fixation about China Clay. More importantly they appear to be unattached at the moment
Both "wheal" (kind of cheating, that should be Wheal and is a place name) and "sluices" were new to the dictionary.
I did this against a pretty large tweet archive and got hits on about 125k of the words in the Unix dictionary.
For a moment I thought it would be an AT-Proto based Urban Dictionary clone.
This
Fascinating! I think it's really cool that this is possible, and at the same time kind of sad that the norm is slowly moving towards more locked-down APIs.
> slowly moving towards
Depends what we accept as norm.
I just saw it indexed "eluvium," but the post was referring to a band with that same name
I checked out the author's other projects and this is a common issue. For example, he has a "lean checker" for Bluesky that claims it is right-leaning simply because of all the people saying "That's right," "He was right," etc. None of the supposed right-leaning posts were actually conservative in nature. They just used the word right to mean correct.
One, thank you for checking out my website. Two, that is the joke, 100% - at the time people kept talking about how "left leaning" bsky was, and that idea came to mind.
lmao that's fantastic
GeologySky will get to it soon enough.
Thanks to this I just learned about alluvium, eluvium, illuvium, and colluvium.
I've wondered how Bluesky affords the bandwidth to let anyone stream the full firehose.
Not an answer to your question, but I suspect most people don't -- my bot (a pi searcher bot, of course) just runs on Jetstream, which is pretty lightweight and heavily compressed.
(The website in question uses jetstream also.)
From what they say it is a lot, but it's generally on the order of a few hundred connections total at the moment.
This website is so pretty!
thank you!! design support and advice from my good friend vedantswarup.com
> Words We Haven't Seen
> - Search unseen words
made me chuckle
I've found content for all of my future skeets.
So now someone is simply posting a dictionary
I'm just surprised that there's a revolt when Bluesky posts are used for LLMs, but regular NLP is fine for some reason.