> Something I noticed at the time was that the syntax for functional languages tends to be verb then noun: f(x), whereas the syntax for object oriented languages tends to be noun then verb: x.f().... There's a big difference in usability though: auto-complete.
Auto-complete also really helps with discoverability. Consider checking for the presence of a key in a map/dictionary. In Java, auto-complete will quickly lead you to `map.containsKey(key)`. In Python, though, you'd have to know that the syntax is `if key in dict`.
Now, let's check for the presence of a value. Auto-complete again quickly leads you to `map.containsValue(value)`. In Python, Google tells me that it's `if value in dict.values()`, which seems more difficult to stumble upon.
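For concreteness, here's a minimal Python sketch of both checks, with rough costs in the comments:

```python
d = {"a": 1, "b": 2}

# Key membership tests against the hash table directly: O(1) on average.
has_key = "a" in d

# Value membership has no index to fall back on, so it scans the values view: O(n).
has_value = 2 in d.values()

print(has_key, has_value)  # -> True True
```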
And that's for a built-in data structure. A lot of my job involves trudging through other people's code, trying to figure out how they architect their ball of cats. Auto-complete is a great tool for that; it lets you quickly and easily poke around, and see what the various nouns in the system can do.
This is an important point. When done well, in FP the data "disappears", i.e. the types control flow and the functions talk about what you're doing. So there's no "find me what I can do with this object"; instead it's "what am I returning?" It sounds the same but it isn't.
Yes, way in the heart of a functional program there's some code sorting lists of ints or something. But by that time, it's all labeled to the point where the functions just tell you what happens. As someone browsing the source, the nouns disappear.
There's a spectrum here. Someone even more explicit than you may insist that `average` should be split up into explicit bindings for the mapping to Int, the sum, and the count.
Also, for non-toy examples, there are real benefits to composing transformations, so that you don't pay the memory and performance costs of assigning intermediate sequences.
You're not alone; those long chains are annoying to decipher. The other benefit of what you wrote is that if you have to track down a bug in the chain, you can actually examine the intermediate variables directly instead of having to pick apart a long chain so that a variable is exposed for you to look at in the debugger.
I agree. Naming parts of your code is important.
An alternative to named local vals is to either use named functions instead of lambdas: people.filter(olderThan50) or (I use Kotlin) use named extension functions: people.countOlderThan50()
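The same idea works in Python with a named predicate instead of a lambda (the names here are illustrative, not from any library):

```python
def older_than_50(person):
    """Named predicate: the call site reads almost like prose."""
    return person["age"] > 50

people = [{"age": 60}, {"age": 42}, {"age": 71}]

# filter(older_than_50, people) reads as "filter people older than 50"
seniors = list(filter(older_than_50, people))
count_older_than_50 = len(seniors)
print(count_older_than_50)  # -> 2
```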
Yeah I much prefer that style because it reads left to right like I naturally read as an English speaker. It flows more naturally from the original object and the changes applied to it.
I have found myself using pipes many times in Elm, because they make the code so much more readable. Now I understand why!
I haven't seen an autocomplete that works on pipes. But with a good type system, it is technically possible to list all compatible functions that take "noun" as first parameter.
Sometimes you supply partially applied functions (also with pipes in noun->verb order) and that's not so easy to match:
noun
|> verb1
|> (otherNoun >> verb2)
|> etc
But perhaps even functions that take "noun" as nth parameter could be suggested.
If verbs from the object flowing you want, "That style to me is preferable" you should say. (Style is the object, preferring it is the verb)
English reads left to right overall, but writing instructions in English does not flow smoothly either way. e.g. "shuffle a deck of playing cards then deal four" reads more naturally than either "a deck of playing cards shuffle then four cards deal" or "deal four cards after shuffling a deck of playing cards".
shuffle(cards).deal(4) is a mix of both approaches.
Yes yes, English doesn't follow this noun-verb order, but that's not what I'm actually talking about.
Usually the languages either encourage cards.shuffle().deal(4) or deal(shuffle(cards),4).
In the 'noun-verb'/OOP version, the sequence of modifications follows the English reading order, so I basically only have to keep the last result in my head. In the functional verb-noun version I have to pop in and out of layers: 'OK, we're dealing... what? Something shuffled. What are we shuffling? The cards object/variable. OK, we're dealing shuffled cards; how many are we dealing? 4.'
In general, noun-verb follows the sequence of operations the computer applies, so it's easier to read.
That's where I was going as well. LISP style and APL style claim to "read left to right", but having to build up a stack of buffered work that you can only unwind once you get to the end is annoying, unhelpful, and limiting.
But simply turning it around to cards.shuffle().deal(4) isn't a good answer, it's still the case that you can only put a small number of things together in a chain because it stops making sense. If the next move was to start the game, "cards.shuffle().deal(4).startgame('some-card-game')" does not make sense because starting the game is not something the cards do, but "start(deal(shuffle(cards), 4), 'some-card-game')" can make sense because start is a function which takes a board state and a game to start. It describes the world state, not the cards and their abilities.
That is, neither style is right, but a mixed style where both approaches are available and you can mix and match to be more expressive, works much better, IMO. Chunking a small number of things with prefix into one operation, or with postfix into one operation, but combining those chunks flexibly at a larger scale.
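A rough Python sketch of that mixed style, under stated assumptions: Deck, deal, and start are illustrative names, not from any real library.

```python
import random

class Deck:
    def __init__(self, cards):
        self.cards = list(cards)

    def shuffle(self):
        random.shuffle(self.cards)
        return self  # chainable: noun-verb style where the noun flows

    def deal(self, n):
        # returns a board state: a hand plus the remaining deck
        return self.cards[:n], Deck(self.cards[n:])

def start(state, game):
    # verb-noun style: start() acts on world state, not on the cards
    hand, deck = state
    return {"game": game, "hand": hand, "deck": deck}

# Chain where chaining reads well; prefix where the verb owns several nouns.
table = start(Deck(range(52)).shuffle().deal(4), "some-card-game")
print(len(table["hand"]))  # -> 4
```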
English is worse, as it is SVO. So a more English-like language would be all infix operators. I think part of the ambiguity is that English can be used in both modes.
I was really just talking about the left to right order of modification not the actual order of English sentences. Apparently I was very unclear because that's all the responses I'm getting.
> Auto-complete also really helps with discoverability. Consider checking for the presence of a key in a map/dictionary. In Java, auto-complete will quickly lead you to `map.containsKey(key)`. In Python, though, you'd have to know that the syntax is `if key in dict`.
This isn't an argument for noun-verb over verb-noun, though it's an argument for more homogeneous syntax. `if key in dict` isn't noun-verb OR verb-noun.
There's not really a reason that f(<tab> couldn't complete with arguments (and it does in many systems). I think if you wanted to use autocomplete to make a point, you'd have to argue that it's more effective somehow to complete on verbs than on nouns.
I also think that the object-oriented syntax matches my thinking process more closely: for example, I have an array of strings, and I want to convert them to numbers, pick only the odd ones then sum them rather than "I want to sum some things…things which I'm filtering by parity…which are actually strings I'm converting to integers".
I think that can be solved without bringing objects into the mix -- functions are usually grouped up into modules of some kind, so leading with `Foo.` (or `foo::` in C++, etc.) still provides a useful level of scoping that can aid autocomplete.
On the other hand, tools like Hoogle [0] let you search for functions based on type, so you can search for e.g. `ByteString -> _` to find functions that take a bytestring. There's no reason the same paradigm can't be applied in an IDE.
It is (was?) the primary way to maintain binary compatibility between versions of class files, where switching an attribute to a method (if logic such as verification needed to be added in a future version) would break that compatibility.
See my cousin comment - it maintains compatibility when changes are internal to the class. For example, if a field was switched from concrete to derived, you'd have to switch the attribute to a getter, which would break compatibility with the previous version.
But exactly because it's easier to "stumble upon" the right answer, I think this may be worse for games. I like feeling as if it's necessary to come up with the solution in my head and then do it. It's not satisfying if every puzzle can be quickly solved by selecting each item in the room and trying the couple actions the game offers you for that item.
The Python approach also exposes unnecessary details, and by doing so causes a performance hit by forcing you to get the keys/values and then check whether what you're looking for is among them. This is slower than what the dictionary could do internally: hash the key and check if there's an entry for it in the backing store.
This is one of the reasons I like Ruby better than Python... none of the magic 'dunder' methods... if you want to define a custom adder for your class, so "foo + bar" works, you do "def +(other)" in Ruby instead of "def __add__(self, other)" in Python... I love that Ruby just uses the actual operator instead of some arbitrary method name you have to remember.
Ruby does have equivalents to Python's magic methods... disguised as normal methods. For example, Python's `__hash__` is Ruby's `hash`, and both are used by built-in dict/hash values, and you should be aware of that fact in both. There are some philosophical differences as well: Python likes to have standalone functions that are customized via magic methods (e.g. `str` vs. `__str__`; you don't normally call the latter), whereas in Ruby everything is a method (e.g. `to_s`). It really seems like a matter of taste.
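A minimal sketch of the Python side of this protocol; Money is an illustrative class, not a real library type:

```python
class Money:
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):   # customizes +, like Ruby's `def +(other)`
        return Money(self.cents + other.cents)

    def __str__(self):          # called by str(); Ruby's analogue is to_s
        return f"${self.cents / 100:.2f}"

total = Money(150) + Money(250)
print(str(total))  # -> $4.00
```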
I remember doing some research on language for a psychology course in college, and I think it's important to warn Hacker News that the experience people have of nouns being more intuitive and fundamental is absolutely not a cultural universal. Westerners place unusual emphasis on nouns, and while it may be that "get" is less specific than "item_id" in English, it may be the opposite if you speak a language that has specific, concrete verbs and abstract nouns. This is one of those times where your intuition about what is logical and obvious might be wrong.
Speaking of which, I wonder whether we'll program in Tamarian one day. Whether we'll be able to build functioning software from a really high-level abstractions, and perhaps not try to micromanage it too much.
(And then when something goes wrong, the program will simply say, "Shaka, when the walls fell.")
> vim's commands like d0 are verb then text selection (noun), whereas in more conventional text editors (including Emacs) you'd first select some text (the noun) and then invoke a verb like delete.
I would say vim is also noun-verb, as you can select text and then tell it to perform an action on it, but it supports convenience methods for verb-noun, or just verb. It's just that most of the verbs also have a default selection they apply to, whether it be a character, a line, or some larger block of content. Thus, 'x' deletes one character, '10x' deletes 10 characters, and using 'v' to select text and then 'x' deletes the selected characters. 'dd' deletes the default (current) line, '10dd' (or 'd10d' to mix it up) deletes the next 10 lines, and using 'v' to select a range of lines and then 'd' deletes those lines. Additionally, you can just define a range of lines to apply a command to in command mode: 10,20d deletes lines 10-20.
I think vim makes more sense if you think of it like a Forth. In a general sense, the entire language is noun*verb (zero or more nouns followed by a verb).
I know what you're going to say: "dw is a verb noun!" But really, it's not. w is a verb--it's a function that, in this case, takes the d function and applies it to a word. The fact that it's a verb is proven by the fact that the expression executes when you type it. It's unfortunate that the vim verb "w" corresponds to the English noun "word", but that's probably the best that could be done given there are only so many keys on the keyboard.
This gives words to a frustration that I've had with many roguelikes. I don't want to /drink/ -> /ladder/, I want to do ladder things with the ladder, which should be a very short list. Freedom does not great game design make (by default).
The problem with applying this to some roguelikes, such as NetHack, is that part of the fun is finding out what you can do. Getting a handy contextual menu when you find a sink, or an altar, or whatever makes the game easier and less confusing, but also takes away a wonderful aspect of it when you learn or hear about some new crazy aspect of it.
That's not to say it doesn't have a place in roguelikes, just that each game needs to carefully consider what it brings to the table, and also what it cuts out.
> takes away a wonderful aspect of it when you learn or hear about some new crazy aspect of it.
It might be useful to think about this in term of how many players it affects.
My opinion is that for each seasoned roguelike player who enjoys this "wonderful" aspect, there will be multiple roguelike newbies who will be discouraged by the unfriendly UX and just leave.
That encapsulates what happened in the transition from text parser (keyboard-driven) games to mouse-driven games: discoverability went up, complexity went down; the first generation of gamers lost interest and a second larger generation of gamers came into being.
I actually find the opposite to be true. I enjoyed not having a clear idea of what everything did when I was just starting out. Once I knew what I could do with most things, it became annoying that I didn't have an easier interface, because there wasn't potential in the complexity anymore.
There's nothing saying that one should prefer wider audience to a smaller one. Ultimately, the quest of appealing to the lowest common denominator lowers a ceiling for possible enjoyment.
> My opinion is that for each seasoned roguelike player who enjoys this "wonderful" aspect, there will be multiple roguelike newbies who will be discouraged by the unfriendly UX
This is a strange viewpoint. This "wonderful" aspect of NetHack applies solely to newbies; seasoned players already know what they can do.
At first blush it might seem that simplifying the interface loses this element. But really as you allude to having "fun finding out what you can do" is just a design goal that can be accomplished even with a simpler interface.
For example, crafting systems or other forms of modifiers. I combine water with my sticks and get wet sticks, then use those to make a campfire, which makes it extra smokey. Which leads to introducing more systemic interactions, and the beauty of those is that systems that interact have great potential for emergent behavior.
Simplifying Nethack's interface may well make it a worse game but it's not the case that a simpler interface in another game implies losing out on the fun of discovery.
> Simplifying Nethack's interface may well make it a worse game but it's not the case that a simpler interface in another game implies losing out on the fun of discovery.
Sure. That's what I was trying to say in the second paragraph. It's not a matter of one being better than the other in general, or even for roguelikes. It's that there are things to consider about how you interact with the system in every case, so it deserves some attention.
For Nethack, I think the game is better for the interface not giving you clues what you can and cannot do. For other games, that likely isn't the case (very few support both the breadth of unique actions and make discovering those part of the draw of the game).
I think you can have both. You could start out not knowing what you can do to any noun and through attempting verbs->noun you can build a library of options. Once you know you can apply a verb to a noun that "unlocks" the noun->verb menu.
That would retain the "discovery" aspect of the game while also providing the convenience. You could even spin it into a game mechanic with "confusion" (randomizing your noun->verb options) and "amnesia" (erasing your noun->verb options).
If we weren't talking about a game, I would agree with you. It's frustrating not knowing what you can do with what. But I like NetHack's verb-first approach. If you had a noun-first approach then it would basically list every single action for every object anyway, because it makes sense. Yes, you can eat a cream pie or wield it as a weapon or throw it at an enemy. You can use a towel to wipe your face or wear it as a blindfold or wield it as a weapon.
The big drawback of noun-first is that it makes the player feel less creative because there’s no discovery: every object lists what it can do on the label. That’s no fun!
I like engraving with my wands and dipping weapons in holy water! I like polymorphing into a metallivore and eating metal rings to gain their properties! I like dipping one potion into another to create weird effects via alchemy!
I've been working on a roguelike on and off for (checks watch) about twenty years now.
When I started I was an avid Angband player. I had just gotten my first laptop computer and was frustrated by how difficult it was to play using the limited keyboard without a full numeric keypad. The main problem is that the default Angband keyset uses up almost all of the letters for all of the different verbs: quaff, read, cast, pray, wave, aim, throw, etc. (There is an alternate keyset based on Vi, but I could never internalize the "arrow keys".)
It was especially frustrating because, like Amit notes here, most verbs only apply to a few items. You can't read a potion or quaff a scroll, so allocating two separate keys to those actions is redundant. So in a fit of pique, I decided I would make my own roguelike with a single "use" command that could use all kinds of items.
I did learn something interesting about usability in the process. One nice feature of Angband's keyset is that it's harder to accidentally use the wrong item. If you intend to quaff a potion but accidentally pick an inventory slot containing a scroll, nothing happens. The specific verb commands act like a sort of redundancy check for the operation. But, overall, I think having a single use command is better.
This is orthogonal to whether the verb or noun comes first. I've just reduced the number of verbs by collapsing many of them into a single multi-purpose "use". I've gone through several iterations of the UI for the game and I'm still on the fence as to whether it makes more sense to select the item or the operation first. It's a little tricky in a roguelike because you usually need to select the item from something: either your inventory, equipment, or on the ground.
So if you want to drink a potion from your backpack, it could be any of:
- Use -> inventory -> potion
- Inventory -> use -> potion
- Inventory -> potion -> use
The first option is good most of the time because the player does know what action they want to perform. But the other two are good because they give the UI a chance to show the player the inventory before they make a selection. The first option feels like a stab in the dark where if you don't know what's in your inventory, you don't know if you have anything to use in the first place.
Of course, if the UI always passively shows the inventory, that problem goes away. So the visual design affects the order that operations might make the most sense. It's a hard problem.
> You can't read a potion or quaff a scroll, so allocating two separate keys to those actions is redundant.
I'm a longtime NetHack player who switched to Crawl, and I've been thinking a lot about the differences between the two games.
One neat NetHack feature is that objects often do have unexpected uses (as well as uses in combination with one another), and at least in a few cases you can discover these uses by applying an unusual action to an object. The first example that comes to mind is that when polymorphed into a different monster, you may be able to eat things that your human form couldn't, sometimes with especially desirable (or undesirable) effects. NetHack players probably appreciate on the whole that the game doesn't actively suggest this possibility to them and that they have to think of it or try it to see what will happen.
Another example is that there are a couple of things that can be used as weapons to good effect that are not obviously weapons, so the ability to wield arbitrary objects is important there.
On the other hand, NetHack uses these possibilities as a source of humor and challenge to the player (partly to create a slightly more open-world feeling, typified by the saying that "the dev team thinks of everything", and partly to increase the amount of knowledge that a player can master and bring to bear on the game). Crawl has a very different philosophy and the actions available to the player are, compared to NetHack, more straightforward and obvious in their implications.
I think it is a matter of critical mass. Crawl had so few instances of "you need to be spoiled or try random stuff to get this" that pressure got them removed (although that was always controversial). Nethack lives and breathes it, and has so many of them that you are likely to feel the benefits of this whimsy.
> NetHack players probably appreciate on the whole that the game doesn't actively suggest this possibility to them and that they have to think of it or try it to see what will happen.
I did try in explore mode to see whether the game prompts you with unusually-edible items when you're polymorphed, and it actually does, so it's a slightly less hidden feature than I was thinking.
Yes, I love the idea of being able to apply multiple verbs to the same item. In my game, you can use, drop, and throw things.
The problem with Angband is that the verbs are mostly disjoint. The only thing you can quaff is a potion, the only thing you can read is a scroll.
Part of my motivation for collapsing all of those "use" verbs to a single use command is because it frees up opportunities to add new operations that can be applied to a range of item types. It frees up keyboard space.
> This is orthogonal to whether the verb or noun comes first. I've just reduced the number of verbs by collapsing many of them into a single multi-purpose "use".
Right! When reading the article I had a bit of a feeling of hesitancy: the ubiquity and general-purpose-ness of functions in FP is the whole point!
Creating a single "use" is like using polymorphism to write a generic function, in a sense. You might imagine `use :: a -> Action` or something. As opposed to the article's take, which calls for `readScroll :: Scroll -> Action` and `openDoor :: Door -> Action`, etc.
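A sketch of that generic "use" in Python, dispatching on the item's type via `functools.singledispatch`. Scroll, Potion, and the messages are all illustrative names, not from the article or any game.

```python
from functools import singledispatch

class Scroll: pass
class Potion: pass

@singledispatch
def use(item):
    # fallback for items with no registered behavior
    return "Nothing happens."

@use.register
def _(item: Scroll):
    return "You read the scroll."

@use.register
def _(item: Potion):
    return "You quaff the potion."

print(use(Potion()))  # -> You quaff the potion.
```

One verb at the call site, but each item type still gets its own behavior, much like the `use :: a -> Action` signature above.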
> The main problem is that the default Angband keyset uses up almost all of the letters for all of the different verbs: quaff, read, cast, pray, wave, aim, throw, etc. (There is an alternate keyset based on Vi, but I could never internalize the "arrow keys".)
I don't understand how most of the keys being occupied can be the "main problem", if you're not able to use those keys directionally anyway.
Without a separate numeric keypad, you need to find some keys to map to the eight cardinal directions. Diagonal movement is key to the game, so you can't just use the actual arrow keys. So, ideally, I wanted a 3x3 rough square of keys on the main keyboard area that I could allocate for movement.
In Angband, most of the letters and punctuation on the main keyboard area are already allocated. You can rebind them, but since the different items need different commands, I don't think that there was any way to collapse multiple verbs onto the same key.
No, but the existence of the roguelike keyset demonstrates that there is space on the keyboard for every command. Why do you need to collapse multiple verbs onto the same key?
IMO that's mostly historical due to the input devices available.
Terminal games were played with a keyboard, so feature creep resulted in every damn key on the keyboard being used.
Most games slowly evolved away from that when joysticks and mice became the dominant input devices (although I remember flight simulators still using most of the keyboard), but there were lots of games where people kept using the keyboard because it was faster (like FPS games).
Nowadays, with touch screens, using a keyboard is not a sensible option, and you naturally land on "first select the object, then what you want to do with it".
But with roguelikes, you still often have a Verb-Object interface with keyboard shortcuts unless you play roguelikes that were designed from the start with a mobile interface in mind.
IIRC, this was one of the breakthroughs of the XEROX PARC virtual desktop research: the realization of how universal the pattern "select context, then use menus or keyboard accelerators to select verbs applicable to that context" could be.
On the other hand, you really start appreciating 'D'rink, 'W'ear, 'E'at once you make it a bit further (progress in a dungeon) and get some experience with the game. It just saves time.
With a system based on 'U'se, you're trading fewer keyboard keys for more keypresses. You need to prefix most actions with 'U'. Even Crawl lets you carry a-z items. When the list of items grows big, you either lose time scrolling through it to find a potion, or need to press an extra key to filter potions. In Crawl, when you press 'e', it only shows you edible items.
And for people so keen to compare with natural languages: we don't say use a sandwich, use flour, or use a paperclip. Natural languages could get away with having only a few verbs, but you would have to make communication longer to clarify. Incidentally, this is how English looks to a non-native English speaker: while words may be shorter on average, you need to string more of them together to get the same meaning. An extreme case of this is Toki Pona, which by design has 120 words for you to combine: an intentionally simple language based on the belief that simplicity leads to happiness.
The bottom line: over-reliance on a few keys means the interface is optimized for a low learning curve, not for long-term use. Think Nano vs Vim.
The bottom line 2: the article sounds like it's written by a new convert. Noun first has merits and areas where it's better, but it's not something that cures cancer.
I think you're confusing game design with learning curve.
Computer games have evolved towards minimal learning curve, because it sells more games. Back when computers were mostly used by nerds, those people weren't bothered by having to read long manuals. The average computer user today IS bothered.
This is in stark contrast with board games. Board gamers must fully know the rules to be able to play. When you play a board game, you have a higher initial learning curve, but once you do know the rules, you immediately perform the actions you want to make. In a typical modern computer game, you'd have to go through a sequence of menus.
BTW this is the definition of a game I subscribe to. A game is a set of rules. By this definition, most Call of Duty games are the same game.
One of the things I dislike about the OO noun.verb() syntax is that frequently you have multiple nouns. For example in some graphics systems, you've got a canvas to draw on, a pen which knows styles for line thickness or dashes, a brush for how to fill, and a shape to be drawn. Who owns the verb? They kind of all do:
canvas.draw(pen, brush, circle)
circle.draw(canvas, pen, brush)
... and so on
The other issue I have is with closed object systems where you can't add new methods to classes in the library. Say I want to add a new type of shape, maybe an emoji face composed of several circles. Depending on which class owns the methods, my new smiley method stands out from the other shapes provided by the library, and I think the lack of symmetry is ugly.
For simple cases, inheritance solves this, but then you get into worse trouble when Alice builds a derived class to support smileys, and Bob builds a separate derived class to add stick figures. Which version of the canvas do I instantiate to support both? Maybe this implies the methods should be on the shapes, but I can contrive other examples which break that too. (Another approach is monkey patching, which has its own flaws, and so on.)
There aren't many languages which support multimethods, but overloading functions is sufficient for statically typed cases, and it's appealing (for me) to have the same syntax for "builtin" methods and ones you add yourself.
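A sketch of that free-function style in Python: draw() dispatches on the shape's type via a registry, so a user-added Smiley looks exactly like the "builtin" Circle at the call site. All names here are illustrative.

```python
_draw_impls = {}

def drawable(shape_type):
    """Decorator registering a draw implementation for a shape type."""
    def register(impl):
        _draw_impls[shape_type] = impl
        return impl
    return register

def draw(canvas, pen, brush, shape):
    # one verb, dispatched by the shape's concrete type
    return _draw_impls[type(shape)](canvas, pen, brush, shape)

class Circle: pass      # imagine this ships with the library
class Smiley: pass      # ...and this one is yours

@drawable(Circle)
def _draw_circle(canvas, pen, brush, shape):
    return "circle drawn"

@drawable(Smiley)
def _draw_smiley(canvas, pen, brush, shape):
    return "smiley drawn"

print(draw("canvas", "pen", "brush", Smiley()))  # -> smiley drawn
```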
Other people have mentioned languages with "uniform function call" syntax (where f(a) and a.f() are synonyms). I guess that's ok if you're in the "there's more than one way to do it" camp. I don't think that completely addresses the problem though. I could make more examples, but this post is getting long already.
For what it's worth, I really dislike it when languages implement operator overloading for binary operators as methods on the first object without a way to dispatch to the second object. This makes it very difficult to have your new types play well with the builtin or library provided types. Binary operators really are functions with two arguments.
This whole comment section tells me that almost no one on HN really understands OOP.
OOP is not about nouns. It's about establishing protocols between subsystems. What you're describing are the typical fake "dilemmas" of someone coming from a static, class-oriented programming language like Java.
Look up Class Responsibility Collaborator exercise.
I'm pretty sure most people who strongly like OOP are sure their particular definition is the correct one. I haven't seen many people agree about what that definition is though. I generally dislike OOP, but perhaps that's because my definition is "encourages implementation inheritance". As you're keen to note, the standard list of other things (encapsulation, message passing, dynamic dispatch, interface inheritance, and even "establishing protocols") are available in a lot of other languages which people don't generally consider OO.
Perhaps I'm ignorant, but I like my way of doing things, and a.f() is less attractive to me than f(a) for all the reasons I listed.
> This whole comment section tells me that almost no one on HN really understands OOP.
Or maybe it tells you that the benefits of OOP are different from what they're theoretically supposed to be. What I read from the thread is that there seems to be significant psychological appeal to some in the noun-verb way, and OOP (perhaps unintentionally) satisfies that need. (And that helps me understand the frequent "Julia needs a thing.do() syntax" complaint on the Julia forums, and why telling them "semantically do(thing) does the same thing" doesn't seem to work.)
This seems to jibe with my experience, but I'm trying to formulate a back-of-the-envelope mathematical explanation. Here's my thinking: nouns are in 3D space and verbs are in the 4th dimension (you need the concept of time to have a verb).
By going "noun-verb", you are fixing the first 3 dimensions and leaving one remaining (hence the ~1000x reduction in search space for autocomplete). By going "verb-noun" you are only fixing 1-dimension, leaving the other 3 unknown, and autocomplete would have a larger search space.
Putting it another way: if I started a story with "the year was 2015", I've only narrowed the one time axis, but the "noun/place" hasn't been narrowed at all, so you still have a sphere of possibilities. However, if I started a story with "we were on the Golden Gate Bridge", I've pinpointed the 3 dimensions of place/noun, and now the reader only has to pinpoint where on the "time-line" we are. And even that timeline has been shortened to a segment of ~100 years.
Interesting analogy. Only after reading this comment did I realize that my mental model has been somewhat similar. Although if I had to write it down, I'd phrase it in terms of data structures.
If you have a sparse matrix of nouns and verbs where cells show whether a given verb(noun) pairing is valid, indexing by noun is clearly more efficient, because most of the time you know exactly what you're operating on, but not necessarily what the operation you want is named. The variables are likely right there before your eyes, a couple lines above the cursor, but the functions are spread out all over the code base.
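A quick Python sketch of that matrix, indexed by noun; the result for each noun is exactly the menu an autocomplete would show. The verb/noun pairs are illustrative.

```python
from collections import defaultdict

# sparse representation: only the valid (noun, verb) cells are stored
valid_pairs = [("potion", "quaff"), ("potion", "throw"), ("scroll", "read")]

verbs_for = defaultdict(set)
for noun, verb in valid_pairs:
    verbs_for[noun].add(verb)

print(sorted(verbs_for["potion"]))  # -> ['quaff', 'throw']
```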
This correlates well with human language access times for nouns vs verbs.
"Here, we study naturalistic speech from linguistically and culturally diverse populations from around the world. We show a robust tendency for slower speech before nouns as compared with verbs. Even though verbs may be more complex than nouns, nouns thus appear to require more planning, probably due to the new information they usually represent."
> However, if I started a story "we were on the Golden Gate bridge", I've pinpointed the 3-dimensions of place/noun, and now the reader only has to pinpoint where on the "time-line" we are.
How does the reader trying to guess the date narrow the choice of verbs?
You might be on TGGBridge jogging, and the date is irrelevant. It seems like fixing the location to TGGBridge narrows the possible verbs, but it’s still a huge list - “we were on” means you could be doing anything people can do - photographing, touring, jogging, sketching, working as a bridge repair crew or structural engineer or a vehicle breakdown van or an Uber driver or protesting or meeting up in a well known location.
In English, “I was driving..” implies a vehicle, but “a car..” doesn’t imply you were driving it, or necessarily doing anything to/with it at all.
> How does the reader trying to guess the date narrow the choice of verbs?
My thinking is when someone visualizes something they need to visualize the place/noun first and the verb (what happened over time). The first requires 3 dimensions and the latter just one more.
This is exactly the thing that has always bothered me about PowerShell. A lot of PowerShell's effectiveness derives from the number of nouns that Microsoft has built support for, and a defining trait of any given script is usually the set of nouns it operates over, so it's weird to me that all the commands start with the verb.
I've never been able to quite put my finger on why I loathe Powershell so much, but yes, I think this is part of it. I also dislike how verbose commands are (and the shortened versions are not safe to use cross-platform), and the weird Pascal-Kebab-Casing-Hybrid of idiomatic Powershell also irks me.
Still, there is something else... I just don't like it.
I could never get used to PowerShell, but in languages like Bash I /really/ like using short flags in the REPL (for speed), but long flags in scripts (for legibility). Unfortunately, this means there's a higher bar to learn, and I've never seen a way to enforce this (I rarely see others use this convention). I hoped PowerShell or the ISE would address this (maybe great autocomplete negates the need for short flags?).
Disclosure, I work at MSFT and I have a special place in my heart for PowerShell.
PowerShell provides incredibly flexible syntax, enough that it can satisfy most styles of script writing.
The practice you describe (short names in CLI and long names in scripts) can be achieved. A couple of things to note about how PS binds arguments to function calls:
1. If you omit names, the parameters are bound positionally.
2. If you use names, you do not have to type the full name. It is sufficient to type an unambiguous prefix of the parameter name and the most appropriate parameter will be selected.
So for example in the CLI you can just type `ls -r -fi .csproj`, whereas in a script you can use the fully self-documenting `ls -Recurse -Filter .csproj`.
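The prefix binding can be imitated in a few lines of Python (a toy resolver, not PowerShell's actual binder; the parameter names are just examples):

```python
def resolve_parameter(prefix, parameter_names):
    """Return the unique parameter whose name starts with `prefix`
    (case-insensitive), mimicking PowerShell-style prefix binding."""
    matches = [p for p in parameter_names if p.lower().startswith(prefix.lower())]
    if len(matches) != 1:
        raise ValueError(f"prefix {prefix!r} is ambiguous or unknown: {matches}")
    return matches[0]

params = ["Recurse", "Filter", "Force", "File"]
print(resolve_parameter("rec", params))  # Recurse
# resolve_parameter("f", params) raises: Filter, Force and File all match.
```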
Another commenter complained about short aliases not being safe cross-platform, and this is an example. In powershell core on Unix systems 'ls' is the 'ls' command, not an alias for Get-ChildItem. So I would say the correct "long" version for your example is 'Get-ChildItem -Recurse -Filter .csproj'.
In my opinion this is a non-issue (at least when comparing powershell to other shells) because the number of cases where you have to worry about bash command 'X' actually being different from your system (even across varieties of Unix-y OSes) is much greater than this handful of convenience aliases in powershell.
As I mentioned though, it's not "safe" to use the shortened aliases cross-platform - I've been bitten by that multiple times before, from memory including `ls` and `wget`.
I like your style! IMO scripts and documented examples should use long options.
You can shorten flags in powershell, fyi.
I made the switch to powershell from bash recently (after 20+ years with bash and its antecedents). I didn't love the aesthetic of powershell to begin with, but the advantages are pretty convincing.
Short aliases are "unsafe" across platforms but this is a non-issue since this situation is so much worse for bash, and at least you have long canonical names for things when it counts (i.e. writing scripts). How much bash code has been written over the years to figure out which code path to follow based on which flavor of awk you have or safeguard your function from user aliases for rm?
Thinking there is a "higher bar" to learn powershell vs bash is purely bias due to what you already know. I'm not saying everyone needs powershell and I don't think it's great for everyone but it definitely isn't harder to learn than bash. It's much more discoverable (even if only thanks to actual types and 'gm'). I haven't used ISE at all.
You can shorten the flags in PowerShell simply by writing less of the flag - as long as you write enough to be unique and unambiguous, it will work. Get-ChildItem -R will map to Get-ChildItem -Recurse because there's nothing else it could be.
But if it does clash, you get an error. So in practice you press tab: either your prefix is unique and it autocompletes to the full name, or it isn't, and you tab through the candidates until you reach the one you want and see its full name.
There are short parameter aliases, e.g. -EA for -ErrorAction which are not unique substrings, and you can define those on your own cmdlets and advanced functions, but they seem less common.
And it would be possible to enforce some of this with a style checker in your own scripts, because you can introspect cmdlets and advanced functions to see what parameters they support, and make sure your invocations always use the full parameter name. I don't know if any checkers do, but VS Code with the PowerShell extension will style check your scripts.
Same here. In theory it could be a great tool and often is but it just doesn’t feel right. I would much prefer if they had done an interpreted version of C# and added the cmdlets to that.
Yep, Powershell is actually really powerful, and even cross-platform with Powershell Core... I wonder if anyone has tried to large-scale "re-skin" it, aliasing command names to make them less verbose and more obvious?
PowerShell Core is quite nice. I actually used it on my Mac to crunch through a few image files and it worked really well compared to my failed attempts of doing the same with bash.
Totally agree. PowerShell basically has no discoverability since you have hundreds or probably thousands of commands all starting with "Get". Much better to start with the noun or some kind of namespace. "ActiveDirectory-GetUser" is much better than "Get-ActiveDirectoryUser". Just one of quite a few bad PowerShell design decisions...
I know what you mean, but discoverability is the one area where I think Powershell is actually doing pretty well. The Get-Command cmdlet lets you search cmdlets by Module, Verb, Noun, etc, and supports all kinds of wildcards. Plus the built in help (aliased to man) is usually very detailed and provides numerous examples.
I think the bigger issue is that MS stuffs all the modules you don't need into every session, so when you want to do something simple against your DHCP server, you also have to tab past all your AD, SAN, NFS, etc, etc cmdlets in the process, which is just dumb.
Or do Ctrl-Enter to get the list of all matches and navigate with arrow keys, should be faster.
Thinking of it, I don't really need discoverability often but if I did I'd probably resort to
Get-Command | %{$_.Name} | fzf
bound to a key or so: this gives you fuzzy matching search of all possible commands. I just tried, 1683 commands in total, and this got me all Smb commands in about the amount of time it took me to type 'smb'. It doesn't get better than that. Well, except that you could have fzf split the screen and show the help for each command there.
This is also the typical PS (or other shells for that matter) story: people complain about poor discoverability while in fact there are some seriously good options, but it's those options which are hard to discover :)
I don't recognise fzf as a tool, but is that better than
get-command *smb*
for fuzzy matching? Incidentally you can do
get-command |% Name
To avoid having to write that {$_.} wrapping blob all the time, if you're merely expanding a property or calling a method on every object in the pipeline.
> "some seriously good options but it's those options which are hard to discover :)"
That’s modern UX for you. Hide features to the degree that only a lucky few will ever find them :). A lot of websites and mobile apps have adopted that philosophy.
Yes it is really cumbersome to use. Set-<tab> is pretty useless since you have to tab through dozens of entries if you don't know the noun you are looking for.
I don’t know that Verb-Noun is great design but I’m not convinced Noun-Verb would be any better; take your example and get rid of Set- and simply press <tab> right from the start - you have exactly the same problem of dozens of entries if you don’t know the noun you are looking for, changing the order hasn’t improved that, has it?
(In reality it’s slightly less clear because tab would then have to show you other file names, executables, batch files, etc. names, at least Set- narrows it down to cmdlets to do with setting).
But as other comments have pointed out, Get-Command lets you search by module, noun, or verb. It's certainly not the most intuitive if you're used to normal linear autocomplete, but once you get used to it I find it works well.
This article has cast light upon the fact for me that this is actually what bothers me about CLIs too; they're useful once you know what can be done, but discoverability is basically "RTFM and good luck."
Isn't the manual for discovery? Or do you try to use tools w/o instructions? I don't see what the issue is here, reading docs seems like an obvious first step for anything new
GUIs often encourage starting the program, right-clicking on anything, and the menus can give you a decent educated guess at what you can do to that thing. Bonus points if hovering over a menu item pops out an additional summary banner with some more details of the meaning of that menu item.
CLIs require matching the flags to the direct objects they apply to and carrying that mapping around in your head (and the command will just fail if you try and apply the wrong flag to the wrong object).
> it's weird to me that all the commands start with the verb.
It's not really weird: (verb)-(direct object)-[(indirect object)], with the subject implied in the direction of the command, is a very common structure for imperative commands in natural language. Given that PS largely targets operators who are not (and even more so were not, when PS was developed) general coders, following natural-language imperative patterns makes a lot of sense.
There probably is somewhere a natural language that uses object-verb in the imperative, but I haven't heard of one.
Powershell was one of the first scripting languages I actually learned, so I'm a little more forgiving, but I think you've summed up one of the more annoying issues. It seems that as Powershell has grown, Microsoft has tried to stuff every module into every session (except the ones you actually need; those you'll inevitably have to import). If Powershell didn't auto-import ~30-50 modules, each with 20-200 cmdlets all sharing the same Verb-Noun structure, it wouldn't be so bad... except then Windows makes it worse, by making it difficult to limit those modules yourself, and most major updates will notice that you've moved modules around and will happily undo all that work and re-add them.
It's still a nice little scripting language, and it's awesome when you want to just avoid some Windows GUI nonsense. Plus Powershell's interactive nature makes learning it significantly easier than some of the more complicated languages I've picked up since then, although it's getting harder and harder to excuse Microsoft's poor design choices around it.
The article is more about user experience than English language nuances.
Take one of your examples, you're much more likely to know you can "board" once you know there's a "ship." Without the ship as context first, only people who are already familiar with ships know that boarding is even an option.
This provides a poor experience in terms of discoverability.
The article clarifies that the interesting difference is how these orderings simplify autocompletion. It's less about spoken language (I'm unaware of any research on whether there's anything analogous to IDE autocomplete in human cognition, or even what form that would take) and more about how the order in which data is presented to an interactive editor changes how useful the editor's assumptions about context can be.
I used to prefer car.drive(), now I prefer drive(car).
I don't buy the auto-completion argument. Can my IDE really tell me all the ways I can use an Integer? I don't think so.
It hit home for me when Java 8 lacked Stream.takeWhile(predicate) and I decided to implement it for myself. I could implement takeWhile(predicate, stream); I could not implement stream.takeWhile(predicate).
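The same asymmetry exists in Python: the free-function (verb-noun) version is trivial to write, but you cannot bolt a method onto a built-in type. A quick sketch (`take_while` here is a hand-rolled stand-in for the stdlib's `itertools.takewhile`):

```python
def take_while(predicate, iterable):
    """Free-function takeWhile: yield items until the predicate fails."""
    for item in iterable:
        if not predicate(item):
            return
        yield item

print(list(take_while(lambda n: n < 4, [1, 2, 3, 4, 1])))  # [1, 2, 3]

# The noun-verb spelling is not available for built-ins: assigning
# `list.take_while = ...` raises TypeError, much as Stream.takeWhile
# could not be retrofitted onto Java 8's Stream from user code.
```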
Python doesn't have such a function on the int type, so it naturally won't show it.
The IDE (or, here the "dir" method) shows everything _available_ to use as a method on an integer.
If you meant "but can it show functions that are not on the integer type but that accept integers?" -- then many IDEs cannot, but it's not a difficult feature to add in an IDE for known libs/modules.
Can't think of any other way a question like "Can my IDE really tell me all the ways I can use an Integer?" makes sense. Of course the IDE wont know about yet unwritten methods which still count as "ways to use an integer". But nobody claimed it can above, and the absence of that doesn't discard autocompletion as an argument (to using the noun first).
Some languages (eg. Swift, Rust, Kotlin; Dart is working on it) allow you to implement "extension methods" separately from the actual type definitions, even when the type in question is from another module/library. It's as if they're codifying the view that "OOP" is all a matter of syntactic convenience, and can't help us with the really hard problems.
Protocols (the Swift name) are an attempt to give greater support to an OOP language in solving the Expression Problem [0] more generally. OOP does great for certain classes of problems all by itself, but it is not enough for everything. Similarly, FP is great for certain classes of problems, but is not enough for everything.
I think protocols are neat, but I'm uncertain whether their usefulness outweighs the added mental burden of keeping track of them all (since they can be defined anywhere and take effect everywhere).
That's not a strong argument, as a process doesn't have to be "the most" time-consuming part of programming to have a big negative impact.
Annoying context switches and slowdowns to look up function names and possible options (without autocomplete) can still interfere with a programmer's flow as they write a program...
To see that this argument doesn't work, consider introducing a 400 ms delay on your keystrokes. Writing will still be the least time-consuming task. Then compare with code you write without the delay. I have a reasonable hypothesis about how the test will turn out.
At Onshape we're considering doing this for our language FeatureScript[0], as discussed here[1]
This would make the call e.g. v1->cross(v2) syntactic sugar for the call cross(v1, v2).
The main reason it's not just an obvious win for autocomplete and readability is that it provides for multiple ways to do one thing, and is one more thing you need to know when learning this new language. Rather than having some code call plane->yAxis() and other code call yAxis(plane), it might be nicer to have only one way.
The other counterargument: the OOP-like syntax could create the expectation that polymorphism works like e.g. Java, where v1.cross(v2) would only consider the type of v1 when deciding which overload to call. FeatureScript has multiple dispatch and will consider the types of all arguments when choosing an overload.
All that said, the change seems like a win overall, but we're waiting for the language to mature more to see if clearer arguments emerge.
Hang on. Is no-one else here just blown away by the fact that the guy lucked into meeting Garriott & Watson? Man, I'd have given my right arm for that as a kid.
The part of the article talking about dropping to one verb ("use") made me think of the same thing ("execute").
The benefit to verb/noun (or noun.verb) is getting BOTH. If you effectively drop the verb, you lose out on half the communication.
To use another comment's example: you may not want to quaff a ladder, but it is a bad idea to assume you just "ladder", and if you assume too narrow a field it leads to frustration, in programming or in games. (Back when I looked into text adventure games (interactive fiction), there was a truism that "use" was a _bad_ verb.)
If you ARE wide enough, then you have little difference between noun-verb and verb-noun except for the effort required.
Honestly, I was more interested in the first part of the article, which talked about checking the state of the world rather than a sequence of steps.
The flip side of this, that they don't mention, is that this asymmetry exists only if you design your interfaces in FP / OO languages differently. Most OO designs I've seen have a large number of classes, each with very specific methods. Most FP designs I've seen have few (or even zero) custom types, and a large number of methods can apply to a large number of types.
As Alan Perlis said back in 1982:
> "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."
When I'm writing in a FP language, it's true I don't get much autocomplete help, but the upside is that I've got 100 core functions that operate on almost anything. Even if you flipped around the order of the syntax on screen, autocomplete wouldn't help much, because you can apply any verb to any noun. I rarely have to convert between types, and when I do it's a couple words at most.
When I'm writing in an OO language, with typical class libraries and frameworks, I get a lot of help from autocomplete, because once you know the "type", there's only a few things you can do with it. The corollary is that I usually spend half my program converting from the type I have to the type that I need.
(Think about English. We could simplify the language by eliminating words like "eat", and just say "I'm going to go use lunch", but then we wouldn't have the word "eat" available for any other nouns, even where it'd be helpful and clear.)
> It is better for programmers if they can choose from two medium length lists than to have to choose from a very long list (where a lot has to be typed before it's useful) and then a very short list (where not much is gained).
This doesn't tell the whole story. As they say, you write code once, but read it many times. I'm perfectly OK with giving up autocomplete (for writing) if it means I don't have to spend twice as many lines of code (for writing and reading) to convert an X to a Y just so I can call f() on it.
The more generic the functions, the less code I have to write, and having less code has huge benefits across the board -- for writing, reading, debugging, testing, performance, and so on. Autocomplete can be nice, but it's not nice enough to want to sacrifice everything else.
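The Perlis point can be made concrete in Python, where a handful of generic verbs apply across many container nouns, so almost no conversion code is needed (a small illustration):

```python
# One set of generic "verbs" works over many "nouns" without conversion:
collections = [[3, 1, 2], (3, 1, 2), {3, 1, 2}, range(1, 4)]

for c in collections:
    assert sorted(c) == [1, 2, 3]  # works on any iterable
    assert sum(c) == 6
    assert max(c) == 3
```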
Note that not every FP lang is verb-noun. Sure, most are, but your F#/Elixir/Scalas are more often than not written in a noun-verb form with the `|>` operator or something vaguely equivalent (syntax extensions in Scala's case).
I think what the topic gets towards is that if you view all possible actions as a multi-dimensional array of (all nouns) × (all verbs), it makes sense to first choose from whichever of (nouns, verbs) is initially the smaller set.
verb-object / object-verb is more specific in the context of this story, yes. And indeed, when you introduce indirect objects or other fanciness, additional dimensions of complexity can be introduced (GUIs, for instance, can have ambiguity in terms of whether object-verb-indirect_object is represented by click-command-click or modclick-modclick-command).
> Something I noticed at the time was that the syntax for functional languages tends to be verb then noun: f(x), whereas the syntax for object oriented languages tends to be noun then verb: x.f().... There's a big difference in usability though: auto-complete.
Auto-complete also really helps with discoverability. Consider checking for the presence of a key in a map/dictionary. In Java, auto-complete will quickly lead you to `map.containsKey(key)`. In Python, though, you'd have to know that the syntax is `if key in dict`.
Now, let's check for the presence of a value. Auto-complete again quickly leads you to `map.containsValue(value)`. In Python, Google tells me that it's `if value in dict.values()`, which seems more difficult to stumble upon.
And that's for a built-in data structure. A lot of my job involves trudging through other people's code, trying to figure out how they architect their ball of cats. Auto-complete is a great tool for that; it lets you quickly and easily poke around, and see what the various nouns in the system can do.
Actually FP is more often verb-verb, as in: compose these verby things together in this way to make a new verb, then finally give it a starting noun.
This is an important point. When done well, in FP the data "disappears", i.e. the types control flow and the functions talk about what you're doing. So there's no "find me what I can do with this object"; instead it's "What am I returning?" It sounds the same but it isn't.
Yes, way in the heart of a functional program there's some code sorting lists of ints or something. But by that time, it's all labeled to the point where the functions just tell you what happens. As someone browsing the source, the nouns disappear.
One of the things I like about Java's streams is that it makes the verb-chaining more obvious, at least to me.
list.stream().filter().mapToInt().sum() just reads better to me than sum( mapToInt( filter(list) ) )
I don't enjoy balancing parentheses in Python, so I tried to bring Clojure's threading macro into the language, which allows left-to-right style programming. I haven't put this into production code yet, but I love pipelining scripts with it. Also see PyToolz's implementation: https://toolz.readthedocs.io/en/latest/api.html#toolz.functo...
They differentiate thread first/last for Fns with arity > 1, just like Clojure.
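A toy thread-first helper might look like the following (names and step encoding are my own invention; PyToolz's `thread_first`/`thread_last` are the fuller versions):

```python
from functools import reduce

def thread_first(value, *steps):
    """Toy version of Clojure's ->. Each step is either a function or a
    (function, *extra_args) tuple; the threaded value goes in first."""
    def apply_step(acc, step):
        if isinstance(step, tuple):
            fn, *rest = step
            return fn(acc, *rest)
        return step(acc)
    return reduce(apply_step, steps, value)

# Reads left to right: strip the string, parse it, then add 1.
print(thread_first("  42  ", str.strip, int, (lambda x, y: x + y, 1)))  # 43
```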
I made a thing[0] just like that to do pipelined async db queries. If you assign the intermediates to variables, each instance will have associated with it the related collection that's queried, while avoiding N+1's.
[0] https://github.com/karmakaze/safeql
I know most will disagree, but I find code like the above hard to read. I'd much prefer named intermediate variables, or some such thing. The more complex the chain, the harder it is for me to read. Does anyone else have that experience?

There's a spectrum here. Someone even more explicit than you may insist that `average` should be split up into explicit bindings for the mapping to Int, the sum, and the count.
Also, for non-toy examples, there are real benefits to composing transformations, so that you don't pay the memory and performance costs of assigning intermediate sequences.
You're not alone; those long chains are annoying to decipher. The other benefit of what you wrote is that if you have to track down a bug in the chain, you can actually examine the intermediate variables directly instead of having to pick apart a long chain so that a variable is exposed for you to look at in the debugger.
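For example, a hypothetical `average` over strings of ages: both versions compute the same value, but the second exposes every stage to a debugger or an assert.

```python
ages = ["31", "40", "25"]

# Chained: compact, but the intermediates are invisible while debugging.
average_chained = sum(int(a) for a in ages) / len(ages)

# Named bindings: each stage can be inspected directly.
parsed = [int(a) for a in ages]   # [31, 40, 25]
total = sum(parsed)               # 96
count = len(parsed)               # 3
average = total / count

assert average == average_chained
print(average)  # 32.0
```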
I agree. Naming parts of your code is important. An alternative to named local vals is to either use named functions instead of lambdas: people.filter(olderThan50) or (I use Kotlin) use named extension functions: people.countOlderThan50()
In any nontrivial application, doing this for many of your types, does this not bloat your API considerably, which adds to the cognitive load?
Yeah I much prefer that style because it reads left to right like I naturally read as an English speaker. It flows more naturally from the original object and the changes applied to it.
Agreed. Some functional languages (like F#) have a pipelining operator, |>, that gives the same syntactic impression.
I have found myself using pipes many times in Elm, because they make the code so much more readable. Now I understand why!
I haven't seen an autocomplete that works on pipes. But with a good type system, it is technically possible to list all compatible functions that take "noun" as first parameter.
Sometimes you supply partially applied functions (also with pipes in noun->verb order) and that's not so easy to match:
noun |> verb1 |> (otherNoun >> verb2) |> etc
But perhaps even functions that take "noun" as nth parameter could be suggested.
And other languages, such as Forth, don't need one.
Huh. I would've expected it to be `odd filter`, not `filter odd`.
Interestingly enough, the now standard data manipulation package in R, dplyr, uses pipelines.
Good to learn me some F#, looks like Elixir.
If verbs from the object flowing you want, "That style to me is preferable" you should say. (Style is the object, preferring it is the verb)
English reads left to right overall, but writing instructions in English does not flow smoothly either way. e.g. "shuffle a deck of playing cards then deal four" reads more naturally than either "a deck of playing cards shuffle then four cards deal" or "deal four cards after shuffling a deck of playing cards".
shuffle(cards).deal(4) is a mix of both approaches.
Yes yes, english doesn't follow this noun-verb order but that's not what I'm actually talking about.
Usually the languages either encourage cards.shuffle().deal(4) or deal(shuffle(cards),4).
In the 'noun-verb'/OOP version the sequence of modifications follows the english reading order where basically I only have to keep information about the last result in my head. The functional verb-noun version I have to pop in and out of layers: 'ok we're dealing what?, something shuffled, what are we shuffling?, the cards object/variable, ok we're dealing shuffled cards how many are we dealing? 4.'
In general the noun-verb follows the sequence of operations applied by the computer so it's easier to read.
That's where I was going as well. LISP style and APL style claim "reads left to right", but having to build up a stack of buffered work which you can only unwind once you get to the end is annoying and unhelpful and limiting.
But simply turning it around to cards.shuffle().deal(4) isn't a good answer, it's still the case that you can only put a small number of things together in a chain because it stops making sense. If the next move was to start the game, "cards.shuffle().deal(4).startgame('some-card-game')" does not make sense because starting the game is not something the cards do, but "start(deal(shuffle(cards), 4), 'some-card-game')" can make sense because start is a function which takes a board state and a game to start. It describes the world state, not the cards and their abilities.
That is, neither style is right, but a mixed style where both approaches are available and you can mix and match to be more expressive, works much better, IMO. Chunking a small number of things with prefix into one operation, or with postfix into one operation, but combining those chunks flexibly at a larger scale.
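That mixed style might look like this in Python (the Deck class, the `start` function, and the game name are all hypothetical): chain the operations that belong to the cards, then hand the result to a free function that owns the larger, world-level step.

```python
import random

class Deck:
    def __init__(self, cards):
        self.cards = list(cards)

    def shuffle(self):
        random.shuffle(self.cards)
        return self  # return self so the noun-verb calls chain

    def deal(self, n):
        hand, self.cards = self.cards[:n], self.cards[n:]
        return hand

def start(hand, game):
    """Free function: starting a game is about world state, not about
    what cards can do, so the verb-noun form reads more naturally."""
    return {"game": game, "hand": hand}

# Chain the card-level steps, then hand off to the world-level verb:
state = start(Deck(range(52)).shuffle().deal(4), "some-card-game")
print(state["game"], len(state["hand"]))  # some-card-game 4
```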
the main point was about auto-completion and usability and discoverability. fluent api for the win.
English is worse, as it is SVO. So a more English-like language would be all infix operators. I think part of the ambiguity is that English can be used in both modes.
“Say that” “He said”
I was really just talking about the left to right order of modification not the actual order of English sentences. Apparently I was very unclear because that's all the responses I'm getting.
Yeah, Lisp-y parenthesis syntax isn't the best for that, but Ruby and CoffeeScript express it reasonably well: `sum mapToInt filter list`
Which is basically just a sequence of verb-nouns.
> Auto-complete also really helps with discoverability. Consider checking for the presence of a key in a map/dictionary. In Java, auto-complete will quickly lead you to `map.containsKey(key)`. In Python, though, you'd have to know that the syntax is `if key in dict`.
This isn't an argument for noun-verb over verb-noun, though; it's an argument for more homogeneous syntax. `if key in dict` isn't noun-verb OR verb-noun.
There's not really a reason that f(<tab> couldn't complete with arguments (and it does in many systems). I think if you wanted to use autocomplete to make a point, you'd have to argue that it's more effective somehow to complete on verbs than on nouns.
I also think that the object-oriented syntax matches my thinking process more closely: for example, I have an array of strings, and I want to convert them to numbers, pick only the odd ones then sum them rather than "I want to sum some things…things which I'm filtering by parity…which are actually strings I'm converting to integers".
That's how the thread-last/first macros help in Clojure. You sort of specify a pipeline:
``` (-> list parseInt isOdd sum) ```
Is the same as
``` (sum (isOdd (parseInt list))) ```
Particularly useful if you're pulling a deeply nested key out.
I think that can be solved without bringing objects into the mix -- functions are usually grouped up into modules of some kind, so leading with `Foo.` (or `foo::` in C++, etc.) still provides a useful level of scoping that can aid autocomplete.
On the other hand, tools like Hoogle [0] let you search for functions based on type, so you can search for e.g. `ByteString -> _` to find functions that take a bytestring. There's no reason the same paradigm can't be applied in an IDE.
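A toy version of that type-directed search can be built on Python's annotations (the example functions are invented for illustration):

```python
import inspect

def to_upper(s: str) -> str: return s.upper()
def shout(s: str) -> str: return s + "!"
def double(n: int) -> int: return n * 2

def functions_taking(tp, namespace):
    """Hoogle-style lookup: callables whose first parameter is annotated tp."""
    hits = []
    for name, obj in namespace.items():
        if callable(obj):
            params = list(inspect.signature(obj).parameters.values())
            if params and params[0].annotation is tp:
                hits.append(name)
    return sorted(hits)

print(functions_taking(str, {"to_upper": to_upper, "shout": shout, "double": double}))
# ['shout', 'to_upper']
```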
+1 for “ball of cats”. That’s a new one for me.
It's real great to know that I can get and set all the fields. Thanks previous programmer.
This is one of those paradigms that I don't understand in Java. It's the most trivial of trivial encapsulations.
It is (was?) the primary way to maintain binary compatibility between versions of class files, where switching an attribute to a method (if logic such as verification needed to be added in a future version) would break that compatibility.
Setters I get (see what I did there?). It's the superfluous getters that simply return the values I'll never understand.
See my cousin comment - it maintains compatibility when changes are internal to the class. For example, if a field was switched from concrete to derived, you'd have to switch the attribute to a getter, which would break compatibility with the previous version.
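Python's `@property` illustrates the compatibility concern directly: a stored attribute can later become a derived one without changing call sites, which is exactly the flexibility Java getters buy up front. A small sketch with a hypothetical Circle type:

```python
class CircleV1:
    def __init__(self, radius):
        self.radius = radius
        self.area = 3.14159 * radius ** 2   # v1: a plain stored field

class CircleV2:
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):                          # v2: derived on access
        return 3.14159 * self.radius ** 2

# Callers written for v1 keep working against v2, same `c.area` syntax;
# in Java, field access and method call are different bytecode, so the
# equivalent change breaks binary compatibility unless a getter was used.
for cls in (CircleV1, CircleV2):
    assert abs(cls(2).area - 12.56636) < 1e-6
```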
But exactly because it's easier to "stumble upon" the right answer, I think this may be worse for games. I like feeling as if it's necessary to come up with the solution in my head and then do it. It's not satisfying if every puzzle can be quickly solved by selecting each item in the room and trying the couple actions the game offers you for that item.
The Python approach also exposes unnecessary details, and by doing so causes a performance hit by forcing you to get a list of keys/values and then find out if what you're looking for is in them. This is slower than what the dictionary could do internally: hash the key and check if there's an entry for it in the backing store.
The .values() method does not return a list, it returns a view object. If you write ‘value in m.values()’, it invokes the __contains__() method.
(Note that this can't be a hash check because it's not testing the presence of a key.)
(Also note that the story is different if you look at sufficiently archaic versions of Python.)
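A quick sketch (modern Python 3) confirming both points about views and `__contains__`:

```python
m = {"a": 1, "b": 2}

# 'key in m' is a hash lookup, dispatched to dict.__contains__:
assert ("a" in m) == m.__contains__("a")

# .values() returns a lightweight view object, not a list; membership
# testing on it is a linear scan (no hashing -- these aren't keys).
v = m.values()
assert not isinstance(v, list)
assert 2 in v

# The view is live: it reflects later changes to the dict.
m["c"] = 3
assert 3 in v
```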
This is one of the reasons I like Ruby better than Python: none of the magic 'dunder' methods. If you want to define a custom adder for your class, so "foo + bar" works, you do "def +()" in Ruby instead of "def __add__()" in Python. I love that Ruby just uses the actual operator instead of some arbitrary method name you have to remember.
> none of the magic 'dunder' methods
Ruby does have equivalents to Python's magic methods... disguised as normal methods. For example, Python's `__hash__` is Ruby's `hash`, and both are used by built-in dict/hash values, a fact you should be aware of in both languages. There are some philosophical differences as well: Python likes to have standalone functions that are customized via magic methods (e.g. `str` vs. `__str__`; you don't normally call the latter), while in Ruby everything is a method (e.g. `to_s`). It really seems like a matter of taste.
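A minimal Python sketch (the Money class is invented) with the Ruby counterparts noted in comments:

```python
class Money:
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):   # Ruby: def +(other)
        return Money(self.cents + other.cents)

    def __str__(self):          # Ruby: def to_s
        return f"${self.cents / 100:.2f}"

    def __hash__(self):         # Ruby: def hash
        return hash(self.cents)

total = Money(150) + Money(250)  # '+' dispatches to __add__
print(str(total))                # str() dispatches to __str__
```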
Fair enough!
I remember doing some research on language for a psychology course in college, and I think it's important to warn Hacker News that the experience people have of nouns being more intuitive and fundamental is absolutely not a cultural universal. Westerners place unusual emphasis on nouns, and while "get" may be less specific than "item_id" in English, it may be the opposite if you speak a language that has specific, concrete verbs and abstract nouns. This is one of those times where your intuition about what is logical and obvious might be wrong.
Kadir beneath Mo Moteh.
Speaking of which, I wonder whether we'll program in Tamarian one day. Whether we'll be able to build functioning software from really high-level abstractions, and perhaps not try to micromanage it too much.
(And then when something goes wrong, the program will simply say, "Shaka, when the walls fell.")
You may be interested in the Petrovich language: http://www.dangermouse.net/esoteric/petrovich.html
> vim's commands like d0 are verb then text selection (noun), whereas in more conventional text editors (including Emacs) you'd first select some text (the noun) and then invoke a verb like delete.
I would say vim is also noun-verb, as you can select text and then tell it to perform an action on it, but it supports convenience methods for verb-noun, or just verb. It's just that most of the verbs also have a default selection they apply to, whether it be a character, a line, or some larger block of content. Thus, 'x' deletes one character, '10x' deletes 10 characters, and using 'v' to select text and then 'x' deletes the selected characters. 'dd' deletes the default (current) line, '10dd' (or 'd10d' to mix it up) deletes the next 10 lines, and using 'v' to select a range of lines and then 'd' deletes those lines. Additionally, you can define a range of lines to apply a command to in command mode: '10,20d' deletes lines 10-20.
I think vim makes more sense if you think of it like a Forth. In a general sense, the entire language is noun*verb (zero or more nouns followed by a verb).
I know what you're going to say: "dw is verb noun!" But really, it's not. w is a verb--it's a function that, in this case, takes the d function and applies it to a word. The fact that it is a verb is proven by the fact that the expression executes when you type it. It's unfortunate that the vim verb "w" corresponds to the English noun "word", but that's probably the best that could be done given there are only so many keys on the keyboard.
This gives words to a frustration that I've had with many roguelikes. I don't want to /drink/ -> /ladder/, I want to do ladder things with the ladder, which should be a very short list. Freedom does not great game design make (by default).
The problem with applying this to some roguelikes, such as Nethack, is that part of the fun is finding out what you can do. Getting a handy contextual menu when you find a sink, or an altar, or whatever makes the game easier and less confusing, but also takes away a wonderful aspect of it when you learn or hear about some new crazy aspect of it.
That's not to say it doesn't have a place in roguelikes, just that each game needs to carefully consider what it brings to the table, and also what it cuts out.
> takes away a wonderful aspect of it when you learn or hear about some new crazy aspect of it.
It might be useful to think about this in terms of how many players it affects.
My opinion is that for each seasoned roguelike player who enjoys this "wonderful" aspect, there will be multiple roguelike newbies who will be discouraged by the unfriendly UX and just leave.
That encapsulates what happened in the transition from text parser (keyboard-driven) games to mouse-driven games: discoverability went up, complexity went down; the first generation of gamers lost interest and a second larger generation of gamers came into being.
Not everyone lost interest... I loved keyboard driven games, and I love mouse driven games, too.
I actually find the opposite to be true. I enjoyed not having a clear idea of what everything did when I was just starting out. Once I knew what I could do with most things, it became annoying that I didn't have an easier interface, because there wasn't potential in the complexity anymore.
There's nothing saying that one should prefer wider audience to a smaller one. Ultimately, the quest of appealing to the lowest common denominator lowers a ceiling for possible enjoyment.
> My opinion is that for each seasoned roguelike player who enjoys this "wonderful" aspect, there will be multiple roguelike newbies who will be discouraged by the unfriendly UX
This is a strange viewpoint. This "wonderful" aspect of NetHack applies solely to newbies; seasoned players already know what they can do.
At first blush it might seem that simplifying the interface loses this element. But really as you allude to having "fun finding out what you can do" is just a design goal that can be accomplished even with a simpler interface.
For example, crafting systems or other forms of modifiers. I combine water with my sticks and get wet sticks, then use those to make a camp fire which makes it extra smokey. This leads to introducing more systemic interactions, and the beauty of those is that interacting systems have great potential for emergent behavior.
Simplifying Nethack's interface may well make it a worse game but it's not the case that a simpler interface in another game implies losing out on the fun of discovery.
> Simplifying Nethack's interface may well make it a worse game but it's not the case that a simpler interface in another game implies losing out on the fun of discovery.
Sure. That's what I was trying to say in the second paragraph. It's not a matter one being better than the other in general, or even for roguelikes. It's that there are things to consider about how you interact with the system in every case, so it deserves some attention.
For Nethack, I think the game is better for the interface not giving you clues what you can and cannot do. For other games, that likely isn't the case (very few support both the breadth of unique actions and make discovering those part of the draw of the game).
I think you can have both. You could start out not knowing what you can do to any noun and through attempting verbs->noun you can build a library of options. Once you know you can apply a verb to a noun that "unlocks" the noun->verb menu.
That would retain the "discovery" aspect of the game while also providing the convenience. You could even spin it into a game mechanic with "confusion" (randomizing your noun->verb options) and "amnesia" (erasing your noun->verb options).
If we weren't talking about a game, I would agree with you. It's frustrating not knowing what you can do with what. But I like NetHack's verb-first approach. If you had a noun-first approach then it would basically list every single action for every object anyway, because it makes sense. Yes, you can eat a cream pie or wield it as a weapon or throw it at an enemy. You can use a towel to wipe your face or wear it as a blindfold or wield it as a weapon.
The big drawback of noun-first is that it makes the player feel less creative because there’s no discovery: every object lists what it can do on the label. That’s no fun!
I like engraving with my wands and dipping weapons in holy water! I like polymorphing into a metallivore and eating metal rings to gain their properties! I like dipping one potion into another to create weird effects via alchemy!
I've been working on a roguelike on and off for (checks watch) about twenty years now.
When I started I was an avid Angband player. I had just gotten my first laptop computer and was frustrated by how difficult it was to play using the limited keyboard without a full numeric keypad. The main problem is that the default Angband keyset uses up almost all of the letters for all of the different verbs: quaff, read, cast, pray, wave, aim, throw, etc. (There is an alternate keyset based on Vi, but I could never internalize the "arrow keys".)
It was especially frustrating because, like Amit notes here, most verbs only apply to a few items. You can't read a potion or quaff a scroll, so allocating two separate keys to those actions is redundant. So in a fit of pique, I decided I would make my own roguelike with a single "use" command that could use all kinds of items.
I did learn something interesting about usability in the process. One nice feature of Angband's keyset is that it's harder to accidentally use the wrong item. If you intend to quaff a potion but accidentally pick an inventory slot containing a scroll, nothing happens. The specific verb commands act like a sort of redundancy check for the operation. But, overall, I think having a single use command is better.
This is orthogonal to whether the verb or noun comes first. I've just reduced the number of verbs by collapsing many of them into a single multi-purpose "use". I've gone through several iterations of the UI for the game and I'm still on the fence as to whether it makes more sense to select the item or the operation first. It's a little tricky in a roguelike because you usually need to select the item from somewhere: your inventory, your equipment, or the ground.
So if you want to drink a potion from your backpack, it could be any of:
- Use -> inventory -> potion
- Inventory -> use -> potion
- Inventory -> potion -> use
The first option is good most of the time because the player does know what action they want to perform. But the other two are good because they give the UI a chance to show the player the inventory before they make a selection. The first option feels like a stab in the dark where if you don't know what's in your inventory, you don't know if you have anything to use in the first place.
Of course, if the UI always passively shows the inventory, that problem goes away. So the visual design affects the order that operations might make the most sense. It's a hard problem.
> You can't read a potion or quaff a scroll, so allocating two separate keys to those actions is redundant.
I'm a longtime NetHack player who switched to Crawl, and I've been thinking a lot about the differences between the two games.
One neat NetHack feature is that objects often do have unexpected uses (as well as uses in combination with one another), and at least in a few cases you get at these uses by applying an unusual action to an object. The first example that comes to mind is that when polymorphed into a different monster, you may be able to eat things that your human form couldn't, sometimes with especially desirable (or undesirable) effects. NetHack players probably appreciate on the whole that the game doesn't actively suggest this possibility to them and that they have to think of it or try it to see what will happen.
Another example is that there are a couple of things that can be used as weapons to good effect that are not obviously weapons, so the ability to wield arbitrary objects is important there.
On the other hand, NetHack uses these possibilities as a source of humor and challenge to the player (partly to create a slightly more open-world feeling, typified by the saying that "the dev team thinks of everything", and partly to increase the amount of knowledge that a player can master and bring to bear on the game). Crawl has a very different philosophy and the actions available to the player are, compared to NetHack, more straightforward and obvious in their implications.
I think it is a matter of critical mass. Crawl had so few instances of "you need to be spoiled or try random stuff to get this" that pressure got them removed (although it was always controversial). Nethack lives and breathes it, and has so many that you are likely to feel the benefits of this whimsy.
I really like that explanation!
> NetHack players probably appreciate on the whole that the game doesn't actively suggest this possibility to them and that they have to think of it or try it to see what will happen.
I did try in explore mode to see whether the game prompts you with unusually-edible items when you're polymorphed, and it actually does, so it's a slightly less hidden feature than I was thinking.
"This whistle is delicious!"
Yes, I love the idea of being able to apply multiple verbs to the same item. In my game, you can use, drop, and throw things.
The problem with Angband is that the verbs are mostly disjoint. The only thing you can quaff is a potion, the only thing you can read is a scroll.
Part of my motivation for collapsing all of those "use" verbs to a single use command is because it frees up opportunities to add new operations that can be applied to a range of item types. It frees up keyboard space.
> This is orthogonal to whether the verb or noun comes first. I've just reduced the number of verbs by collapsing many of them into a single multi-purpose "use".
Right! When reading the article I had a bit of a feeling of hesitancy: the ubiquity and general-purpose-ness of functions in FP is the whole point!
Creating a single "use" is like using polymorphism to write a generic function, in a sense. You might imagine `use :: a -> Action` or something. As opposed to the article's take, which calls for `readScroll :: Scroll -> Action` and `openDoor :: Door -> Action`, etc.
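In Python terms, a sketch of the same idea using functools.singledispatch (the item classes are hypothetical): one generic `use` verb that dispatches on the noun's type, roughly the `use :: a -> Action` shape rather than separate readScroll/quaffPotion commands.

```python
from functools import singledispatch

# Hypothetical item types standing in for the roguelike's items.
class Scroll:
    pass

class Potion:
    pass

# One generic "use" verb, dispatching on the type of its argument.
@singledispatch
def use(item):
    return "Nothing happens."

@use.register
def _(item: Scroll):
    return "You read the scroll."

@use.register
def _(item: Potion):
    return "You quaff the potion."

print(use(Scroll()))  # You read the scroll.
print(use(Potion()))  # You quaff the potion.
print(use(42))        # Nothing happens.
```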
> The main problem is that the default Angband keyset uses up almost all of the letters for all of the different verbs: quaff, read, cast, pray, wave, aim, throw, etc. (There is an alternate keyset based on Vi, but I could never internalize the "arrow keys".)
I don't understand how most of the keys being occupied can be the "main problem", if you're not able to use those keys directionally anyway.
Without a separate numeric keypad, you need to find some keys to map to the eight cardinal directions. Diagonal movement is key to the game, so you can't just use the actual arrow keys. So, ideally, I wanted a 3x3 rough square of keys on the main keyboard area that I could allocate for movement.
In Angband, most of the letters and punctuation on the main keyboard area are already allocated. You can rebind them, but since the different items need different commands, I don't think that there was any way to collapse multiple verbs onto the same key.
No, but the existence of the roguelike keyset demonstrates that there is space on the keyboard for every command. Why do you need to collapse multiple verbs onto the same key?
IMO that's mostly historical due to the input devices available.
Terminal games were played with a keyboard, so feature creep resulted in every damn key on keyboard being used.
Most games slowly evolved away from that when joysticks and mice became the dominant input devices (although I remember flight simulators still using most of the keyboard), but there were lots of games where people were still using the keyboard because it was faster (like FPS games).
Nowadays, with touch screens, using a keyboard is not a sensible option, and you naturally land on a "first select the object, then what you want to do with it" interface.
But with roguelikes, you still often have a Verb-Object interface with keyboard shortcuts unless you play roguelikes that were designed from the start with a mobile interface in mind.
IIRC, this was one of the breakthroughs of the XEROX PARC virtual desktop research: the realization of how universal the pattern "select context, then use menus or keyboard accelerators to select verbs applicable to that context" could be.
Surprisingly the Xerox Star GUI was verb-noun and the Apple Mac of the same era was noun-verb. There were (of course) exceptions.
I started with verb-noun on the Star then migrated over to the noun-verb on the mac. Noun-verb seemed immediately better.
I wonder if XEROX PARC virtual desktop research came before or after the Star?
> the Apple Mac of the same era was noun-verb
As I recall the standard top-level menus were “File, Edit, Options, Help”
On the other hand, you really start appreciating 'D'rink, 'W'ear, 'E'at once you make it a bit further (progress in a dungeon) and get some experience with the game. It just saves time.
With a system based on 'U'se, you're trading fewer keyboard keys for more keypresses. You need to prefix most actions with 'U'. Even Crawl lets you carry a-z items. When the list of items grows big, you either lose time scrolling through it to find a potion, or need to press an extra key to filter potions. In Crawl, when you press 'e', it only shows you edible items.
And for people so keen to compare with natural languages: we don't say use a sandwich, use flour, or use a paperclip. Natural languages could get away with having just a few verbs, but communication would have to get longer to clarify. Incidentally, this is how English looks to a non-native English speaker: while words may be shorter on average, you need to string more of them together to get the same meaning. An extreme case of this is Toki Pona, which by design has 120 words for you to combine: an intentionally simple language based on the belief that simplicity leads to happiness.
The bottom line: over-reliance on few keys means the interface is optimized for low learning curve, not for long term use. Think Nano vs Vim.
The bottom line 2: the article sounds like it's written by a new convert. Noun first has merits and areas where it's better, but it's not something that cures cancer.
I think you're confusing game design with learning curve.
Computer games have evolved towards minimal learning curve, because it sells more games. Back when computers were mostly used by nerds, those people weren't bothered by having to read long manuals. The average computer user today IS bothered.
This is in stark contrast with board games. Board gamers must fully know rules to know how to play. When you play a board game, you have a higher initial learning curve, but once you do know, you immediately perform actions you want to make. In a typical modern computer game, you'd have to go through a sequence of menus.
BTW this is the definition of a game I subscribe to. A game is a set of rules. By this definition, most Call of Duty games are the same game.
One of the things I dislike about the OO noun.verb() syntax is that frequently you have multiple nouns. For example in some graphics systems, you've got a canvas to draw on, a pen which knows styles for line thickness or dashes, a brush for how to fill, and a shape to be drawn. Who owns the verb? They kind of all do:
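In OO style this ends up as canvas.draw(pen, brush, shape), pen.stroke(canvas, shape), or shape.draw_on(canvas, pen, brush), and none of them is obviously right. A hypothetical free-function sketch of the same call (all classes invented):

```python
# All of these classes are invented for illustration.
class Canvas:
    pass

class Pen:
    def __init__(self, width):
        self.width = width

class Brush:
    def __init__(self, color):
        self.color = color

class Circle:
    def __init__(self, radius):
        self.radius = radius

# With a free function, none of the nouns has to "own" the verb:
def draw(canvas, pen, brush, shape):
    return (f"drew a {type(shape).__name__} with a {pen.width}px pen "
            f"and a {brush.color} brush")

print(draw(Canvas(), Pen(2), Brush("red"), Circle(10)))
```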
The other issue I have is with closed object systems where you can't add new methods to classes in the library. Say I want to add a new type of shape, maybe an emoji face composed of several circles. Depending on which class owns the methods, my new smiley method stands out from the other shapes provided by the library, and I think the lack of symmetry is ugly. For simple cases, inheritance solves this, but then you get into worse trouble when Alice builds a derived class to support smileys and Bob builds a separate derived class to add stick figures. Which version of the canvas do I instantiate to support both? Maybe this implies the methods should be on the shapes, but I can contrive other examples that break that too. (Another approach is monkey patching, which has its own flaws, and so on.) There aren't many languages that support multimethods, but overloading functions is sufficient for statically typed cases, and it's appealing (for me) to have the same syntax for "builtin" methods and ones you add yourself.
Other people have mentioned languages with "uniform function call" syntax (where f(a) and a.f() are synonyms). I guess that's OK if you're in the "there's more than one way to do it" camp, but I don't think it completely addresses the problem. I could make more examples, but this post is getting long already. For what it's worth, I really dislike it when languages implement operator overloading for binary operators as methods on the first object, without a way to dispatch to the second object. This makes it very difficult to have your new types play well with the builtin or library-provided types. Binary operators really are functions with two arguments.
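For what it's worth, Python partially addresses this with reflected methods: when the left operand's `__add__` returns NotImplemented, the interpreter tries the right operand's `__radd__`. A sketch with a made-up type:

```python
class Meters:
    def __init__(self, n):
        self.n = n

    def __add__(self, other):
        if isinstance(other, (int, float)):
            return Meters(self.n + other)
        return NotImplemented  # let the other operand have a try

    __radd__ = __add__  # handles the '3 + Meters(4)' direction

print((Meters(4) + 3).n)  # 7
print((3 + Meters(4)).n)  # 7 -- int gives up, Meters.__radd__ runs
```

It's still two separate hooks rather than a true two-argument multimethod, but it does let new types cooperate with built-ins.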
This whole comment section tells me that almost no one on HN really understands OOP.
OOP is not about nouns. It's about establishing protocols between subsystems. What you're describing are the typical fake "dilemmas" of someone coming from a static, class-oriented programming languages like Java.
Look up Class Responsibility Collaborator exercise.
I looked it up and while worthwhile, I would need someone to explain how it applies to this specific comment section.
> It's about establishing protocols between subsystems.
Is that what it's about? I try to establish protocols between subsystems without OOP quite often. Are you referring to message passing specifically?
> Is that what it's about?
I'm pretty sure most people who strongly like OOP are sure their particular definition is the correct one. I haven't seen many people agree about what that definition is though. I generally dislike OOP, but perhaps that's because my definition is "encourages implementation inheritance". As you're keen to note, the standard list of other things (encapsulation, message passing, dynamic dispatch, interface inheritance, and even "establishing protocols") are available in a lot of other languages which people don't generally consider OO.
Perhaps I'm ignorant, but I like my way of doing things, and a.f() is less attractive to me than f(a) for all the reasons I listed.
> This whole comment section tells me that almost no one on HN really understands OOP.
Or maybe it tells you that the benefits of OOP are different from what they're theoretically supposed to be. What I read from the thread is that there seems to be significant psychological appeal to some in the noun-verb way, and OOP (perhaps unintentionally) satisfies that need. (And that helps me understand the frequent "Julia needs a thing.do() syntax" complaint on the Julia forums, and why telling them "semantically do(thing) does the same thing" doesn't seem to work.)
One approach here might be to take a cue from Forth and other concatenative languages and use postfix notation:
This seems to jibe with my experience, but I'm trying to formulate a back-of-the-envelope mathematical explanation. Here's my thinking: nouns live in 3D space and verbs in the 4th dimension (you need the concept of time to have a verb).
By going "noun-verb", you are fixing the first 3 dimensions and leaving one remaining (hence the ~1000x reduction in search space for autocomplete). By going "verb-noun" you are only fixing 1-dimension, leaving the other 3 unknown, and autocomplete would have a larger search space.
Putting it another way. If I started a story "the year was 2015", I've only narrow the one time axis, but the "noun/place" hasn't been narrowed at all, so you still have a sphere of possibilities. However, if I started a story "we were on the Golden Gate bridge", I've pinpointed the 3-dimensions of place/noun, and now the reader only has to pinpoint where on the "time-line" we are. And even that timeline has been shortened to a segment of ~100 years.
Interesting analogy. Only after reading this comment did I realize that my mental model has been somewhat similar. Although if I had to write it down, I'd phrase it in terms of data structures.
If you have a sparse matrix of nouns and verbs where cells show whether a given verb(noun) pairing is valid, indexing by noun is clearly more efficient, because most of the time you know exactly what you're operating on, but not necessarily what the operation you want is named. The variables are likely right there before your eyes, a couple lines above the cursor, but the functions are spread out all over the code base.
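A toy sketch of that asymmetry, with an invented verb table: indexing by noun makes completion a direct lookup, while going verb-first means scanning every noun.

```python
# Invented toy data: which verbs are valid for which nouns.
verbs_by_noun = {
    "potion": {"quaff", "throw", "drop"},
    "scroll": {"read", "drop"},
}

# Noun-first completion is a direct lookup:
def verbs_for(noun):
    return sorted(verbs_by_noun.get(noun, ()))

# Verb-first completion has to scan every noun:
def nouns_for(verb):
    return sorted(n for n, vs in verbs_by_noun.items() if verb in vs)

print(verbs_for("potion"))  # ['drop', 'quaff', 'throw']
print(nouns_for("drop"))    # ['potion', 'scroll']
```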
This correlates well with human language access times for nouns vs verbs.
"Here, we study naturalistic speech from linguistically and culturally diverse populations from around the world. We show a robust tendency for slower speech before nouns as compared with verbs. Even though verbs may be more complex than nouns, nouns thus appear to require more planning, probably due to the new information they usually represent."
https://www.pnas.org/content/115/22/5720
> However, if I started a story "we were on the Golden Gate bridge", I've pinpointed the 3-dimensions of place/noun, and now the reader only has to pinpoint where on the "time-line" we are.
How does the reader trying to guess the date narrow the choice of verbs?
You might be on TGGBridge jogging, and the date is irrelevant. It seems like fixing the location to TGGBridge narrows the possible verbs, but it’s still a huge list - “we were on” means you could be doing anything people can do - photographing, touring, jogging, sketching, working as a bridge repair crew or structural engineer or a vehicle breakdown van or an Uber driver or protesting or meeting up in a well known location.
In English, “I was driving..” implies a vehicle, but “a car..” doesn’t imply you were driving it, or necessarily doing anything to/with it at all.
> How does the reader trying to guess the date narrow the choice of verbs?
My thinking is when someone visualizes something they need to visualize the place/noun first and the verb (what happened over time). The first requires 3 dimensions and the latter just one more.
Both nouns and verbs are semantically one dimensional - they are each lists of words.
In a computer yes. But I don’t think that’s how the mind stores things.
This is exactly the thing that has always bothered me about PowerShell. A lot of PowerShell's effectiveness derives from the number of nouns that Microsoft has built support for, and a defining trait of any given script is usually the set of nouns it operates over, so it's weird to me that all the commands start with the verb.
I've never been able to quite put my finger on why I loathe PowerShell so much, but yes, I think this is part of it. I also dislike how verbose the commands are (and the shortened versions are not safe to use cross-platform), and the weird Pascal-Kebab-Casing hybrid of idiomatic PowerShell also irks me.
Still, there is something else... I just don't like it.
I could never get used to PowerShell, but in languages like Bash I /really/ like using short flags in the REPL (for speed) but long flags in scripts (for legibility). Unfortunately, this means there's a higher bar to learn, and I've never seen a way to enforce this (I rarely see others use this convention). I hoped PowerShell or the ISE would address this (maybe great autocomplete negates the need for short flags?).
Disclosure, I work at MSFT and I have a special place in my heart for PowerShell.
PowerShell provides incredibly flexible syntax, enough that it can satisfy most styles of script writing.
The practice you describe (short names in the CLI, long names in scripts) can be achieved. A couple of things to note about how PS binds arguments to function calls:
1. If you omit names, the parameters are bound positionally.
2. If you use names, you do not have to type the full name. It is sufficient to type an unambiguous prefix of the parameter name, and the most appropriate parameter will be selected.
So, for example, in the CLI you can just do `ls -r -fi .csproj`, whereas in a script you can use the fully self-documenting `ls -Recurse -Filter .csproj`.
Hope this helps!
Another commenter complained about short aliases not being safe cross-platform, and this is an example. In powershell core on Unix systems 'ls' is the 'ls' command, not an alias for Get-ChildItem. So I would say the correct "long" version for your example is 'Get-ChildItem -Recurse -Filter .csproj'.
In my opinion this is a non-issue (at least when comparing powershell to other shells) because the number of cases where you have to worry about bash command 'X' actually being different from your system (even across varieties of Unix-y OSes) is much greater than this handful of convenience aliases in powershell.
As I mentioned though, it's not "safe" to use the shortened aliases cross-platform - I've been bitten by that multiple times before, from memory including `ls` and `wget`.
I like your style! IMO scripts and documented examples should use long options.
You can shorten flags in powershell, fyi.
I made the switch to powershell from bash recently (after 20+ years with bash and its antecedents). I didn't love the aesthetic of powershell to begin with, but the advantages are pretty convincing.
Short aliases are "unsafe" across platforms but this is a non-issue since this situation is so much worse for bash, and at least you have long canonical names for things when it counts (i.e. writing scripts). How much bash code has been written over the years to figure out which code path to follow based on which flavor of awk you have or safeguard your function from user aliases for rm?
Thinking there is a "higher bar" to learn powershell vs bash is purely bias due to what you already know. I'm not saying everyone needs powershell and I don't think it's great for everyone but it definitely isn't harder to learn than bash. It's much more discoverable (even if only thanks to actual types and 'gm'). I haven't used ISE at all.
You can shorten the flags in PowerShell simply by writing less of the flag - as long as you write enough to be unique and unambiguous, it will work. Get-ChildItem -R will map to Get-ChildItem -Recurse because there's nothing else it could be.
But if it does clash, you get an error, so really when using this you press tab to demonstrate either that you have a uniquely short string and it autocompletes to the full name, or you press tab and find you don't have a uniquely short string and then tab through until it's the one you want and see the full name.
There are short parameter aliases, e.g. -EA for -ErrorAction which are not unique substrings, and you can define those on your own cmdlets and advanced functions, but they seem less common.
And it would be possible to enforce some of this with a style checker in your own scripts, because you can introspect cmdlets and advanced functions to see what parameters they support, and make sure your invocations always use the full parameter name. I don't know if any checkers do, but VS Code with the PowerShell extension will style check your scripts.
Same here. In theory it could be a great tool and often is but it just doesn’t feel right. I would much prefer if they had done an interpreted version of C# and added the cmdlets to that.
Yep, Powershell is actually really powerful, and even cross-platform with Powershell Core... I wonder if anyone has tried to large-scale "re-skin" it, aliasing command names to make them less verbose and more obvious?
PowerShell Core is quite nice. I actually used it on my Mac to crunch through a few image files and it worked really well compared to my failed attempts of doing the same with bash.
Totally agree. PowerShell basically has no discoverability since you have hundreds or probably thousands of commands all starting with “Get”. Much better to start with the noun or some kind of namespace. “ActiveDirectory-GetUser” is much better than “Get-ActiveDirectoryUser”. Just one of quite a few bad PowerShell design decisions...
I know what you mean, but discoverability is the one area where I think Powershell is actually doing pretty well. The Get-Command cmdlet lets you search cmdlets by Module, Verb, Noun, etc, and supports all kinds of wildcards. Plus the built in help (aliased to man) is usually very detailed and provides numerous examples.
I think the bigger issue is that MS stuffs all the modules you don't need into every session, so when you want to do something simple against your DHCP server, you also have to tab past all your AD, SAN, NFS, etc, etc cmdlets in the process, which is just dumb.
> you also have to tab past all
Or do Ctrl-Enter to get the list of all matches and navigate with arrow keys, should be faster.
Thinking of it, I don't really need discoverability often but if I did I'd probably resort to
Get-Command | %{$_.Name} | fzf
bound to a key or so: this gives you fuzzy-matching search of all possible commands. I just tried: 1683 commands in total, and this got me all Smb commands in about the amount of time it took me to type 'smb'. It doesn't get better than that. Well, except that you could have fzf split the screen and show the help for each command there.
This is also the typical PS (or other shells for that matter) story: people complaining about not being able to do discoverability while in fact there are some seriously good options, but it's those options which are hard to discover :)
I don't recognise fzf as a tool, but is that better than `Get-Command *smb*` for fuzzy matching? Incidentally, you can do `Get-Command | % Name` to avoid having to write that `{$_.}` wrapping blob all the time, if you're merely expanding a property or calling a method on every object in the pipeline.
fzf is https://github.com/junegunn/fzf, I prefer it over get-command smb because:
- it's fuzzy and more i.e. https://github.com/junegunn/fzf#search-syntax
- gives you a list which you can shrink further, again with fuzzy matching
- list can be navigated using keyboard, and then selected from
> To avoid having to write that {$_.}
Heh, I actually didn't know that. Or forgot.
> some seriously good options but it's those options which are hard to discover :)
That’s modern UX for you. Hide features to the degree that only a lucky few will ever find them :). A lot of websites and mobile apps have adopted that philosophy.
Yes it is really cumbersome to use. Set-<tab> is pretty useless since you have to tab through dozens of entries if you don't know the noun you are looking for.
I don’t know that Verb-Noun is great design but I’m not convinced Noun-Verb would be any better; take your example and get rid of Set- and simply press <tab> right from the start - you have exactly the same problem of dozens of entries if you don’t know the noun you are looking for, changing the order hasn’t improved that, has it?
(In reality it’s slightly less clear because tab would then have to show you other file names, executables, batch files, etc. names, at least Set- narrows it down to cmdlets to do with setting).
But as other comments have pointed out, Get-Command lets you search by module, noun, or verb. It's certainly not the most intuitive if you're used to normal linear autocomplete, but once you get used to it I find it works well.
This article has cast light upon the fact for me that this is actually what bothers me about CLIs too; they're useful once you know what can be done, but discoverability is basically "RTFM and good luck."
Isn't the manual for discovery? Or do you try to use tools w/o instructions? I don't see what the issue is here, reading docs seems like an obvious first step for anything new
GUIs often encourage starting the program, right-clicking on anything, and the menus can give you a decent educated guess at what you can do to that thing. Bonus points if hovering over a menu item pops out an additional summary banner with some more details of the meaning of that menu item.
CLIs require matching the flags to the direct objects they apply to and carrying that mapping around in your head (and the command will just fail if you try and apply the wrong flag to the wrong object).
I think the obvious first step for learning any new command is googling how to do X and clicking on the first stackexchange link.
Kind of sad.
> it's weird to me that all the commands start with the verb.
It's not really weird: (verb)-(direct object)-[(indirect object)], with the subject implied in the direction of the command, is a very common structure for imperative commands in natural language. Given that PS largely targets operators who are not (and even more so were not, when PS was developed) general coders, following natural-language imperative patterns makes a lot of sense.
There is probably some natural language out there that uses object-verb in the imperative, but I haven't heard of one.
Powershell was one of the first scripting languages I actually learned, so I'm a little more forgiving, but I think you've summed up one of the more annoying issues. It seems as Powershell has grown, Microsoft has tried to stuff every module into every session (except the ones you actually need, those you'll inevitably have to import). If Powershell didn't auto import ~30-50 modules, each with 20-200 cmdlets all sharing the same Verb-Noun structure it wouldn't be so bad...except then Windows makes it worse, by making it difficult to limit those modules yourself, and most major updates will notice that you've moved modules around and will happily undo all that work and re-add them.
It's still a nice little scripting language, and it's awesome when you want to just avoid some Windows GUI nonsense. Plus Powershell's interactive nature makes learning it significantly easier than some of the more complicated languages I've picked up since then, although it's getting harder and harder to excuse Microsoft's poor design choices around it.
I like Nim's approach to this issue: `f(x)` and `x.f()` are completely equivalent to one another.
https://nim-lang.org/docs/tut2.html#object-oriented-programm...
This feature is also in D. I am a fan.
Rust checking in, UFCS present
I clicked the link hoping it was about language. But it is about roguelikes and programming.
Since English can verb nouns (the reverse of nominalization), I can't tell what the benefit is. One boards the board of a ship, levers a lever, and locks a lock.
We "lockpick" more often than we "save face"; perhaps English is a head-first language.
In formal writing, a difference exists between nominal and verbal style.
The article is more about user experience than English language nuances.
Take one of your examples, you're much more likely to know you can "board" once you know there's a "ship." Without the ship as context first, only people who are already familiar with ships know that boarding is even an option.
This provides a poor experience in terms of discoverability.
The article clarifies that the difference is interesting for thinking about how these orderings simplify autocompletion. It's less about spoken language (I'm unaware of any research on whether there's anything analogous to IDE autocomplete in human cognition, or even what form that would take) and more about how the order in which data is presented to an interactive editor changes how useful the editor's assumptions about context can be.
> We "lockpick" more often than we "save face" perhaps English is a head-first language.
The head of a sentence is its verb, so no.
Relatedly, there's a vim-like editor that is all about moving from verb-noun to noun-verb.
https://github.com/mawww/kakoune
This was mentioned in the posted link
I used to prefer car.drive(), now I prefer drive(car).
I don't buy the auto-completion argument. Can my IDE really tell me all the ways I can use an Integer? I don't think so.
It hit home for me when Java 8 lacked Stream.takeWhile(predicate) and I decided to implement it myself: I could implement takeWhile(predicate, stream), but I could not implement stream.takeWhile(predicate).
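The same asymmetry shows up in Python: you can always write the free-function form yourself (this is essentially what `itertools.takewhile` already provides), but you can't bolt a new method onto a sealed built-in type. A minimal sketch:

```python
def take_while(predicate, iterable):
    """Free-function form: trivially writable against any iterable.
    The method form, [1, 2, 3].take_while(...), cannot be added to list."""
    for item in iterable:
        if not predicate(item):
            return
        yield item

print(list(take_while(lambda x: x < 3, [1, 2, 3, 4, 1])))  # [1, 2]
```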
>Can my IDE really tell me all the ways I can use an Integer? I don't think so.
Sure it can. E.g. (not an IDE, and some double underscore methods translate to operators, but the mapping is trivial):
['__abs__', '__add__', '__and__', '__class__', '__cmp__', '__coerce__', '__delattr__', '__div__', '__divmod__', '__doc__', '__float__', '__floordiv__', '__format__', '__getattribute__', '__getnewargs__', '__hash__', '__hex__', '__index__', '__init__', '__int__', '__invert__', '__long__', '__lshift__', '__mod__', '__mul__', '__neg__', '__new__', '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__trunc__', '__xor__', 'bit_length', 'conjugate', 'denominator', 'imag', 'numerator', 'real']
where's .isPrime?
Python doesn't have such a function on the int type, so it naturally won't show it.
The IDE (or, here, the `dir` function) shows everything _available_ to use as a method on an integer.
If you meant "but can it show functions that are not on the integer type but that accept integers?" -- then many IDEs cannot, but it's not a difficult feature to add in an IDE for known libs/modules.
Can't think of any other way a question like "Can my IDE really tell me all the ways I can use an Integer?" makes sense. Of course the IDE won't know about yet-unwritten methods, which still count as "ways to use an integer". But nobody claimed it can above, and the absence of that doesn't discard autocompletion as an argument (for putting the noun first).
Some languages (eg. Swift, Rust, Kotlin; Dart is working on it) allow you to implement "extension methods" separately from the actual type definitions, even when the type in question is from another module/library. It's as if they're codifying the view that "OOP" is all a matter of syntactic convenience, and can't help us with the really hard problems.
Protocols (the Swift name) are an attempt to give greater support to an OOP language in solving the Expression Problem [0] more generally. OOP does great for certain classes of problems all by itself, but it is not enough for everything. Similarly, FP is great for certain classes of problems, but is not enough for everything.
I think protocols are neat, but I'm uncertain whether their usefulness outweighs the added mental burden of keeping track of them all (since they can be defined anywhere and take effect everywhere).
[0] https://homepages.inf.ed.ac.uk/wadler/papers/expression/expr...
Java is the exception that proves the rule. In C# (and many other languages) you can easily extend the class with ".takeWhile".
That's what I thought before I tried it.
Also, the autocomplete argument isn't a big one, as writing is the least time-consuming task in the process.
Autocomplete is one thing. Getting a list of suggestions is more important, because you might not know the name of the function.
I agree; I guess noun.verb() is indeed better for finding those.
That's not a big argument, as a process doesn't have to be "the most" time consuming part of programming to have a big negative impact.
Annoying context switches and slowdowns to lookup function naming and possible options (without autocomplete) can still interfere with the flow of a programmer as he writes a program...
Knowing what you can and can't do with an object is the time consuming part. Once you know that answer, the writing is relatively insignificant.
To see that this argument doesn't work, try introducing a 400 ms delay on your keystrokes. Writing will still be the least time-consuming task. Then compare with code you write without the delay. I have a reasonable hypothesis as to how the test will turn out.
The autocomplete search space for drive(car) can also be narrowed down by how you do your imports.
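The comment's concrete example appears to have been lost in formatting; here is one way the point plays out in Python (my illustration, using the stdlib `math` module rather than the hypothetical `drive`/`car`):

```python
import math

# Broad import: completion on `math.<tab>` must offer every public name.
broad = [n for n in dir(math) if not n.startswith("_")]

# Narrow import: only the verbs you asked for become top-level
# completion targets, so `sq<tab>` has far less to sift through.
from math import sqrt

print(len(broad))   # dozens of candidates in the broad namespace
print(sqrt(9.0))    # 3.0
```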
FYI, in C# they have "extension methods" which give the syntax stream.takeWhile(predicate). Like a breath of fresh air.
I wonder if the ergonomics of text based adventure in SOV-order languages is/was substantially improved for this reason.
北に行く (north, go), ポーションを飲む (potion, drink), スライムと戦う (slime, fight).
There's no reason you couldn't type `noun.` and get autocomplete to `verb(noun)`
Or just support calling "verb(noun)" as "noun.verb()" all the way through: https://en.wikipedia.org/wiki/Uniform_Function_Call_Syntax
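An editor could even back `noun.` completion with free functions without UFCS in the language, by matching the noun's type against each function's first parameter. A Python sketch using type annotations (all the function names here are made up for illustration):

```python
import inspect

def completions_for(value, namespace):
    """Sketch: find free functions whose first parameter is annotated
    with the value's type -- i.e. the 'verbs' applicable to this 'noun'."""
    out = []
    for name, obj in namespace.items():
        if not inspect.isfunction(obj):
            continue
        params = list(inspect.signature(obj).parameters.values())
        if params and params[0].annotation is type(value):
            out.append(name)
    return sorted(out)

# Hypothetical verbs:
def drive(car: str, mph: int = 55): ...
def paint(car: str, color: str = "red"): ...
def increment(counter: int): ...

print(completions_for("tesla", globals()))  # ['drive', 'paint']
print(completions_for(7, globals()))        # ['increment']
```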
At Onshape we're considering doing this for our language FeatureScript[0], as discussed here[1]
This would make the call e.g. v1->cross(v2) syntactic sugar for the call cross(v1, v2).
The main reason it's not just an obvious win for autocomplete and readability is that it provides for multiple ways to do one thing, and is one more thing you need to know when learning this new language. Rather than having some code call plane->yAxis() and other code call yAxis(plane), it might be nicer to have only one way.
The other counterargument: the OOP-like syntax could support the expectation that polymorphism works like e.g. Java, where v1.cross(v2) would only consider the type of v1 when deciding which overload to call. FeatureScript has multiple dispatch and will consider the types of all arguments when choosing an overload.
All that said, the change seems like a win overall, but we're waiting for the language to mature more to see if clearer arguments emerge.
[0] https://cad.onshape.com/FsDoc/
[1] https://forum.onshape.com/discussion/comment/33495/#Comment_...
> Rather than having some code call plane->yAxis() and other code call yAxis(plane), it might be nicer to have only one way.
One option is a linter/formatter that can warn and configurably push you in one way or the other.
Hang on. Is no-one else here just blown away by the fact that the guy lucked into meeting Garriott & Watson? Man, I'd have given my right arm for that as a kid.
Yeah that part stuck out to me too. Lucky guy!
See also Steve Yegge’s Execution in the Kingdom of Nouns (2006):
https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
The part of the article talking about dropping to one verb ("use") made me think of the same thing ("execute").
The benefit to verb/noun (or noun.verb) is getting BOTH. If you effectively drop the verb, you lose out on half the communication.
To borrow another comment's example: you may not want to quaff a ladder, but it's a bad idea to assume you just "ladder", and if you assume too narrow a field it leads to frustration. (In programming or in games. Back when I looked into text adventure games (interactive fiction) there was a truism that "use" was a _bad_ verb.)
If you ARE wide enough... then there's little difference between noun-verb and verb-noun except the effort required.
Honestly, I was more interested in the first part of the article, which talked about checking the state of the world rather than a sequence of steps.
The flip side of this, that they don't mention, is that this asymmetry exists only if you design your interfaces in FP / OO languages differently. Most OO designs I've seen have a large number of classes, each with very specific methods. Most FP designs I've seen have few (or even zero) custom types, and a large number of methods can apply to a large number of types.
As Alan Perlis said back in 1982:
> "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."
When I'm writing in a FP language, it's true I don't get much autocomplete help, but the upside is that I've got 100 core functions that operate on almost anything. Even if you flipped around the order of the syntax on screen, autocomplete wouldn't help much, because you can apply any verb to any noun. I rarely have to convert between types, and when I do it's a couple words at most.
When I'm writing in an OO language, with typical class libraries and frameworks, I get a lot of help from autocomplete, because once you know the "type", there's only a few things you can do with it. The corollary is that I usually spend half my program converting from the type I have to the type that I need.
(Think about English. We could simplify the language by eliminating words like "eat", and just say "I'm going to go use lunch", but then we wouldn't have the word "eat" available for any other nouns, even where it'd be helpful and clear.)
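The Perlis point is easy to see with Python's built-ins, which play the role of those 100 generic functions (my illustration, not from the thread):

```python
data = [3, 1, 2]
pairs = {"b": 2, "a": 1}
word = "cab"

# One small set of generic verbs covers every one of these nouns,
# with no conversion between types required:
print(len(data), len(pairs), len(word))           # 3 2 3
print(sorted(data), sorted(pairs), sorted(word))  # [1, 2, 3] ['a', 'b'] ['a', 'b', 'c']
print(min(data), min(pairs), min(word))           # 1 a a
```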
> It is better for programmers if they can choose from two medium length lists than to have to choose from a very long list (where a lot has to be typed before it's useful) and then a very short list (where not much is gained).
This doesn't tell the whole story. As they say, you write code once, but read it many times. I'm perfectly OK with giving up autocomplete (for writing) if it means I don't have to spend twice as many lines of code (for writing and reading) to convert an X to a Y just so I can call f() on it.
The more generic the functions, the less code I have to write, and having less code has huge benefits across the board -- for writing, reading, debugging, testing, performance, and so on. Autocomplete can be nice, but it's not nice enough to want to sacrifice everything else.
Note that not every FP lang is verb-noun. Sure, most are, but your F#/Elixir/Scalas are more often than not written in a noun-verb form with the `|>` operator or something vaguely equivalent (syntax extensions in Scala's case).
I think what the topic gets at is that if you view all possible actions as a two-dimensional array of (all nouns) × (all verbs), it makes sense to first choose from whichever of (nouns, verbs) is initially the smaller set.
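One way to picture that, as a toy Python model (the nouns and verbs are invented for illustration):

```python
# Toy model: every possible action is a (noun, verb) pair.
actions = {
    ("door", "open"), ("door", "close"), ("door", "lock"),
    ("chest", "open"), ("chest", "close"),
    ("sword", "wield"), ("sword", "drop"),
}
nouns = {n for n, _ in actions}
verbs = {v for _, v in actions}

def verbs_for(noun):
    # Once a noun is fixed, the second menu shrinks to just its verbs.
    return {v for n, v in actions if n == noun}

print(len(nouns), len(verbs))     # 3 5 -- noun-first gives the shorter first menu
print(sorted(verbs_for("door")))  # ['close', 'lock', 'open']
```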
I find this whole conversation interesting as someone who is working on a text adventure (shameless plug: https://danger.world).
I personally chose verb-noun, but it would be very simple to flip it around.
Is this really verb-noun/noun-verb or verb-object/object-verb?
verb-object / object-verb is more specific in the context of this story, yes. And indeed, when you introduce indirect objects or other fanciness, additional dimensions of complexity appear (GUIs, for instance, can have ambiguity in terms of whether object-verb-indirect_object is represented by click-command-click or modclick-modclick-command).