Actually there are relatively few real (TM) open source projects driven by the community, at least if you look at important projects. Many open source projects are just commercial projects driven mainly by a single company. Look for example at Redis, MongoDB, MySQL, and Elasticsearch. They follow exactly the model described in the article. Technologies like these could have been developed by a community, too, but it is hard to form such a community and keep it alive.
For a community-driven project from the size of a database some serious sponsors would be needed. Good examples are Rust, Linux, and PostgreSQL. I wonder why so many companies are happily paying Oracle (and the likes) tons of money instead of sponsoring an open source project like PostgreSQL.
I disagree about Redis. Redis Labs did a great job at creating an ecosystem of advanced things around Redis. But the Redis core project itself, that is what most people use, is surely sponsored for a big part by Redis Labs, but is executed as a completely OSS community driven project:
1. All the work is done in public, with a license that gives zero protection to the original authors.
2. The project leder (myself) only does OSS work and has no other roles in the sponsoring company.
3. The roadmap is decided by the community (myself with feedbacks from the community), not a company or some product manager or alike.
4. There are multiple people from multiple companies contributing regularly code to Redis: Redis Labs, Alibaba, AWS, ...
5. The main web site is handled by the community.
In the case of Redis this was possible because of the minimality of the project, otherwise I agree that's a huge challenge. But still IMHO Redis deserves to be listed in such "purely community" OSS projects.
There is a slight difference in that all those tools are the basis for their companies' commercial offerings. Google does not sell Go tools or consultancy hours etc. Its interest in Go is to have a programming language that's safe, fast, operationally undemanding and fits the mental model of a recent CS graduate (as per that notorious Rob Pike quote).
This at least makes it easy to work out if incentives are aligned. Do I want to program in such a language? Yes? Then Google is probably not going to completely screw it up, even when they make decisions I disagree with. Do I like to wax rhapsodic about parser combinators? Not ever going to be a good fit.
Actually Mongo doesn't even look at pull requests. So, yes.
Postgres (do we really still need the "QL" reminder?) has demonstrated staying power, and repays invested time in spades. Linux, obvs. Rust, it's still too early to be sure about. Learning it will be at least educational, maybe formative, and at worst it won't be taken away.
Projects like Linux are "community" projects in the sense that there is not a single company backing it, sure. However, for Linux, only 7.7% of contributions were made by unpaid developers as of this 2016 report[0].
I have a hard time calling this a "community" project when the community is paid by corporations. Don't get me wrong, I'm not really complaining, as a project the size of a commercially viable operating system kernel is hardly a project that could be viable without corporate backing IMHO.
Don't forget that most of the kernel's codebase are drivers and some of the most impactful additions were made by employees of corporations (on top of my mind - cgroups and namespaces by googlers, and not only as a base for containerization).
Been learning Rust for the past few weeks.. absolutely love most of it. It's the first low-level language that actually feels right to me. Of course, I'm one of those weirdos who actually loves JavaScript. I also appreciate Go and C#. Rust just feels like most of the right trade offs, and once Futures and Async/Await stabilize, it'll be IMHO feature complete.
As an aside: The community needs an abstraction for WebAssembly's File System that has something like fcntl/flock functionality. As it stand's the Node interface for emscripten targets isn't good enough. Would love to see a bit more collaboration on this regard for sync and async interfaces for FS I/O that supports file/record locking in a more abstract runtime. Though this has been a pretty big shortcoming with Node since early on imho.
If we look at the Top 10 programming languages, I think only Ruby is community driven that has very little cooperate backing. All the other languages are driven by one (Apple;Swift), or multiple, (Javascript, Java) cooperate sponsors.
I think the balance is hard, you will need lots of resources from Documentation, VM expertise, Library, etc.
The top 10 according to Red Monk[1] are JavaScript, Java, Python, PHP, C#, C++, CSS, Ruby, C, Objective-C, Swift, TypeScript, Scala, Shell, Go, R, PowerShell, Perl, Haskell, Kotlin.
Of these, Javascript (w3), Python (Python Foundation), CSS (w3), C++ (ISO), Ruby (community), C (ISO), Shell (Posix), R (community), Perl (community), and Haskell (community) are not bound to a single company.
I should have included in my first post. I think rather than splitting between Community or Cooperate, a more accurate representation would be how much Market Value are depending on those Languages. If you have a company dependent on OCaml, let say it is on 30th on the list, but the company is making billions in Net profits, you could bet the company will fund the development of Ocaml, even if it was driven by a Foundation, or Community.
> The top 10 according to Red Monk[1] are JavaScript, Java, Python, PHP, C#, C++, CSS, Ruby, C, Objective-C, Swift, TypeScript, Scala, Shell, Go, R, PowerShell, Perl, Haskell, Kotlin.
Companies pay Oracle because 1) PostgreSQL doesn't take you to lunch and 2) Oracle "just works." I mean, we all know #2 is effectively a lie but having the option to go to Oracle directly for issues (even if the response is just "pay a contractor") immensely reduces risk for executive leaders.
Last time I was in a position to try, not so much. I emailed about 15 such contractors at least close to the SW US (I'm in Phoenix), and didn't get a single response. This was 5-6 years ago.
By my count, fewer than half of Rust's core team [1] are Mozilla employees, whereas the number is 100% for Go. Rust also has a very clear and open governance model [2], where the Go team is more closer to "we'll do what we want".
The governance stuff is interesting; are there any programming languages that are structured in between "community driven" and corporate oligarchy? ie. non-profit with dedicated Finance/HR teams?
Rust has largely broken out of Mozilla already. If it can maintain its growth rate (not guaranteed, but so far so good) it will have a future. Check back in five years.
There are garbage collection libraries in progress: rust-gc[1] and shifgrethor[2]. If you mean language support, then no, not really, though it was discussed in https://github.com/rust-lang/rfcs/issues/415
I feel like not having a GC is rust’s biggest advantage over higher level languages. It allows rust to work in environments that only languages like C, C++ can work in. (Embedded, shared libraries, etc)
I more meant "GC for those who want it", as advanced GC with Edens and mark-sweep tenured generations can handle high-allocation and large-heap scenarios that are extremely challenging for simpler MM approaches. I don't know if this really can be done as a library though.
I had a similar question, but rather, what's the difference between Go with Google deciding what goes in, and Python (a year ago) where the BDFL and a couple core devs decide what go in?
Sure you could argue that a company may have different incentives than a BDFL, but in this context, it's not clear that Go would've been more likely to accept the change you're proposing if they weren't being led by Google.
I guess the most important distinction is the “B” part (benevolent). Guido is called that because he is known (since before BDFL is a thing) to listen to other people, and adapt when they disagree strongly with his decisions. Google has never demonstrated the same attitude afaict, and in multiple occasions showed exactly the opposite.
Edit: And to answer the question, no, there’s no philosophical differences. And there’s nothing wrong with that. Python never called itself a community’s language (there are instances core devs said in no ambiguous terms that it is Guido’s project). The problem only arises if a project gives an impression that it is owned by the community when it actually isn’t.
Given that the go core team has committed to adding generics I don't think it's fair to say they don't listen to critique from the community. Is there a more contentious issue in go?!
I wonder if they'd ever consider adding hygienic macros instead of templates/generics. To me, that solves the problem of needing to write the same code for float32+float64 and so on. And it doesn't require so much careful thought about the type system, accidentally creating an awkward metalanguage like happened in C++.
Rust people are very glad that Rust brought high-kinded types to replace the most common uses of hygienic macros. Using macros for everything is a nightmare.
About using them as generics, how do you enforce constraints? Unconstrained generics won't lead you far (or better, will lead you far into JS's Wat territory).
IMO, Rust has a very nice macro system, and this is a good thing (although I'm not familiar with any changes they've made in the last year or two). It's a bad thing that sometimes one needs to resort to the macro system because other features in Rust don't play well together, but that's a long topic with no solution in sight.
> About using them as generics, how do you enforce constraints?
I'm really thinking about the case for numerical algorithms where the exact same code works for different types (say float32 and float64 in Go). It sucks to copy and paste hundreds of lines of code and it sucks to have a separate code generator write your file for you (essentially an external macro processor). Imagine something like the C preprocessor for Go, but without the well known flaws of the C preprocessor:
#define IMPLEMENT_FOOBAR(NAME, TYPE) \
void NAME(TYPE* ptr, long len) { \
hundred lines of implementation here \
}
IMPLEMENT_FOOBAR(foobar32, float32)
IMPLEMENT_FOOBAR(foobar64, float64)
That parametric polymorphic enough for a lot of use cases, and this could work with data structures too.
As for constraints, you can pass function names as arguments too. For instance:
IMPLEMENT_SORT(sort_baz, baz, bazcompare)
Other proposed features in go, such as the "check" statement for error handling, are probably implementable as a nice macro if you had a good macro language. This means the core language wouldn't have to grow, and features like "check" could be imported from a library. Including features like this in a library means the core language isn't bound by backwards compatibility when a better idea comes along. Old code used the old library, new code uses the new one, and the language stays clean and compatible with both.
I guess you missed the part above where I mentioned C++ templates as a cautionary tale about accidentally creating an awkward metalanguage. Besides, Go and Rust are both clearly responses to C++, so it makes sense they would try to provide (reinvent) similar capabilities while avoiding the flaws.
Guido always took the role of gatekeeper and arbiter (I'll consult with the community and decide what submissions go in) rather than a true dictator (I'll decide what goes in... submissions? What submissions?).
Prometheus is a (rare) recent example of a significant, thriving open-source project that is community-based. Not quite as broad in scope as something like Elasticsearch though.
Depending on what "community-driven" means, I'd like to add Python, Blender, Jupyter, Django ...
I'd say "thriving community-based open source projects" are rare in the sense that most open source projects don't thrive (especially if you count every open-licensed repository on github), but there are tons of examples.
Apache and Linux have always been the two big examples of community-driven open source projects. And Apache still hosts many community-driven open source projects, but they don't seem to be as visible as they were 10-15 years ago.
Some Apache projects, sure. But Spark is very much a Databricks project. Hadoop used to be fairly community as there were hands in it from Cloudera, Hortonworks, Microsoft, and a few others. But with the merger of Cloudera and Hortonworks, the expectation is that it will become guided by that organization.
I'd use this as an argument that if PHP is the gold standard for a community-driven programming language, I'd rather every popular language be backed by a large corp haha.
Actually, even ignoring PHP, I'm vaguely convinced it's generally better for a language to be backed by a company. I personally feel more secure knowing that there are people whose full-time job is to take care of the language, and I trust community backlash to deal with any errant decisions. I can't imagine Google (or Microsoft, or Apple, or Facebook) making or blocking a change in a way that kills an entire programming language while they sit idly by ignoring the community response.
Purists will agree with you. Pragmatists perhaps not so much.
I'm sad to see this opinion every time somebody mentions PHP. PHP may have its flaws. But how can anyone deny the instrumental role PHP has played in building the web as we know it today?
The open web, open source, open standards, agile were supposed to be the free market answer to all the flaws of the big old corporations. And for a large part they have succeeded in creating the new and better world we enjoy today. But it looks like we're experiencing some regression. The new big corporations of the web are becoming more like the big corporations of the past. Corporate structures, HR departments, Shareholder meetings, Management layers, system thinking are again replacing the success of uncertain organic growth models that involve chaos, diversity, agility, attitude and pragmatism. To see media outlets advertising social media handles as opposed to their web addresses. Purists are on the rise. Perhaps because everyone is seeking those comfortable high paying jobs at the big corporations?
PHP has fought and won its battles. Outfoxing many big corporate opponents along the way. It should get the respect it deserves. And if you ever find yourself at the front-lines of a new battle. PHP might still very well be your most effective weapon of choice. A worthy consideration at least.
Is modern PHP really that bad? I kind of think it makes a lot of sense in a world where almost everything, even major government IT systems, are basically java/.net on the backend and some JavaScript MVVM framework on the client.
I know a guy who works with laravel, and we’ve teased him so much about PHP over the years, because PHP. The truth is, that his backend is actually more suited for the modern world than what we currently run. .Net core is getting to where it’s competitive, but it’s still a bitch to build something like a mixed asp mvc and Vue components app, which is truly effective/productive alternative to MVVM clients or jquery/Ajax for smaller projects.
The thing I "envy" about PHP guys is that they aren't on proglang forums; they are somewhere happier making money and clients happy, and not giving a f* about the things our niche here does...
I started doing PHP dev during the 5.2, 5.3 times and since 7, it feels much better to program in. It even has RoR-like frameworks like Laravel. So, use Symfony or Laravel if you need to use PHP.
It has some historical baggage that some people can't get over. Nothing, no programming language will be perfect. If I had to start all over, I'd probably use something else but it works great for what I'm doing. (web dev with Laravel)
Visual Basic was basically(!) killed by Microsoft albeit large protest, so no, putting your trust in one company does not always work.
PHP has been developed very well the past few years as community project, there is an open RFC process with a codified voting process in place. Sure there have been some drama within the community, but that has not affected either the language nor the implementation.
What makes PHP a great fit for a community driven project is that PHP is a pragmatic language in its core. If the general goal of a project is to design a perfect & consistent language then a community driven process is maybe not the best approach.
Let's hope we get some form of generics because in my recent use of it, I can see the usefulness of type information for parameters and return types but not the complete inability to say that this function will return an array of a certain type; I can only say that it'll return an array (which might have anything and everything in it), which is useless.
VB is de facto dead, declared legacy in 2008 & latest major release was in 1998. VB.NET is a new incompatible language, everyone invested in VB needed to retrain skills & rewrite existing software.
It was a huge uproar in the community, but MS didn't care enough & this goes against TS idea that the community can stop a major company doing this sort of thing. It can happen & it will happen again.
However that does not mean that you shouldn't invest resources in a corporate language, it just a false argument to think it can't die.
Visual Basic and VB.NET are two very different beasts, only sharing a name.
The VB6 runtime is indeed updated to run, but really how much update does it need beyond sticking the DLLs around? The VB4 and VB5 runtimes run perfectly fine on Win10 and those had no updates for it. It all relies on Win10's overall backwards compatibility.
They are similar only on a superficial level, at their core they have not only vast differences in terms of how they work but they also have a different philosophy. It is not a matter of what you can do in terms of features, but also how you do it and how that affects the IDE, which is a core element of VB6 as opposed to VB.NET where you could be using whatever text editor or IDE you want (it isn't a coincidence that VB6 has its own IDE whereas VB.NET is using the same IDE as C# and C++).
VB6 isn't just the language, if anything the language is a minor part of it. VB6 is the entire tool, including the language, IDE and library. You cannot separate these, they are written for each other. This is also what pretty much all VB6 alternatives get wrong as they try to reuse elements designed for something else. You cannot do that and get something like VB6 beyond at a superficial level (and this is exactly what Microsoft did with VB.NET).
Doesn't surprise me, you are not alone in this as others who have used classic VB do not really see the difference between classic VB and VB.NET and to some extent not even (most of) Microsoft seems to get what makes classic VB special :-P. However i'm not sure how i could describe it better than the last paragraph in my reply above as to me the difference between the two is day and night.
It is kinda similar to not understanding how something like a jury rigged Vim with a GDB, file list manager and ctags parser differs from a real IDE even after having used the latter. Usually i'd say "you need to experience the real thing" but if you do not get it after you do, then i'm at a loss of words.
Having observed the decision-making process for 6 months and thought about PHP's direction, I'd agree.
Either a foundation is needed, which sets a vision and sticks to it, or a company, or a BDFL. But with none of them, you get each language faction pulling in its own direction, bits of each added/retained, and no coherence.
LLVM is also an interesting examples: it was first driven by Apple but now it's driven by many different large companies. I mean it's not completely driven by community but i think it's better than dominated by single authority
> I wonder why so many companies are happily paying Oracle [...] instead of sponsoring an open source project like PostgreSQL.
Corporate world is ruled by liability. When you are working on a multi-million dollar project and the database breaks, you want to be able to put as much responsibility on the 3-rd party as much as possible.
Open Source projects come with no guarantee. If someone hacks into a system because of a flaw in the Open Source project you are screwed. If it's a software delivered by Oracle, then Oracle is responsible and needs to pay your million dollar fine.
Of course I'm simplifying, because there's probably still a lot of legal process behind it depending on the country, but it's essentially that.
> * is responsible and needs to pay your million dollar fine.
This is not really true. AFAIK there is no commercial software that has a license clause making the manufacturer liable if you loose your data due to faults in their product.
We lost a couple of TB due to bad OS/firmware in a NAS. The manufacturer did everything they knew trying to fix it, but in the end failed and we lost the data.
Once we lost a crucial virtual machine and backups due to human error. It was not the manufacturer of the virtualization solution that helped us get it back, because they did not have such tools and did not provide such services. Hackers who were reverse engineering the software and publishing their reversed code on the net helped us get the VM image back.
After these two and some other situations I am even stronger proponent for open source, because with open source you have the option to try help yourself if the manufacturer won't or can't help you. I am not just proponent, in fact if I was to be involved in decision making, it would be open-source solution throughout. Of course with support. If possible I'd pay for support services to the original developers.
My thought was a bit deeper than that. Actually I wonder why are not more companies sponsoring PostgreSQL to get the features they need instead of paying Oracle?
As an "enterprise" Oracle user, I'd give all those up in a heartbeat if Oracle would just support proper indexing on Json values (well CLOBS with a "is json" constraint in Oracle - bleh). Meaning that I wouldn't have to execute ddl to create a function index on each new field path within the json that I want to support. IOW I want a Postgres path ops GIN index which work beautifully.
I don't have the stats to back this up, bit that all seems very niche. I have to imagine you get downvoted because people don't recognize these as real requirements. I needed not a single one of these. Ever.
Enterprise software tends to have lots of features that the majority of folks will never encounter in their career. The enterprise market space is composed of around a thousand potential customers world wide, all of whom are large enough to have sophisticated and complex internal computing environments. They have varying norms, requirements, workloads, regulatory environments, industry standards and so on and so forth.
Each has a lot of money and rejecting the requirements of one company often means you are effectively rejecting several, or perhaps even an entire sector. So you add something to cover them and before long, your software has the union set of features required solely by enormous companies.
From a non-enterprise view a particular feature may seem like absurd overkill. But someone, somewhere, needs it and there is a long and often impressive story of how it was achieved.
Compiling SQL -> native I'm not sure deserves to be described as niche, since lots of analytical queries can benefit from that. Worth pointing out that Postgres has a JIT compiler [0].
Like automatically refreshing materialized views, by which I mean when a base table is changed, the engine uses the definition of the materialized view to run triggers to update materialized view.
My entire point is that I shouldn't have to write the triggers myself. PostgreSQL figures out the triggers from my definition of the materialized view.
I can think of environments where you'd get fired. Oracle these days screams "legacy", and especially if you're working on low-latency projects like trading systems it would be a huge, expensive mistake.
Many like Go because it is an opinionated language. I'm not sure that a 'community' run language will create something like that because there are too many opinions. Many claim to represent the community, but not the community that doesn't share their opinion. Without clear leaders I fear technical direction and taste will be about politics which seems more uncertain/risky.
I like that there is a tight cohesive group in control over Go and that they are largely the original designers. I might be more interested in alternative government structures and Google having too much control only if those original authors all stepped down.
My thoughts exactly! It's important to have a community and to work with it, but, especially for a programming language, there has to be a clear concept of which features should be implemented and which not - just accepting community contributions for the sake of making the community feel good would be the wrong way. Otherwise you end up with a feature monster like innumerable other programming languages, and that's exactly what Go doesn't want to be.
Parametric polymorphism is a real world issue. The fact that it wasn't checked off indicates to me that they are of the opinion that it's not as important.
It's ok to have this opinion, I just disagree with it. I do agree that it's important to think it through, I do not agree that the language should be brought to 1.0 without it.
This is the general point though. Community pressure would have produced generics under a different leadership model that wants to check the box.
Whether this is the right decision for this specific issue is a different story. But the fact that they aren't checking boxes for the sake of it is evidence of this style WAI.
...what in the world makes you thing that? I love parametric polymorphism, but nothing stops you from writing the most complex computer programs you can imagine without it.
Go literally special-cases generic builtins because they couldn't be arsed to properly implement it but knew nobody would accept completely untyped core collections.
Go does not have generic arrays any more than C does. You cannot e.g. write a generic Go function to reverse an array.
You seem to be conflating type-parameterized collections with generics. You can use generics to implement type-parameterized collections, but it doesn't really make sense to think of type-parameterized collections as a form of generics unless you can actually abstract over the type parameters (which you can't in Go).
> Go does not have generic arrays any more than C does.
Go does have generic collections, and generic functions operating on these collections.
> You cannot e.g. write a generic Go function to reverse an array.
You can if you're part of the core team and implement them as builtins. Go doesn't have userland generics, because users of Go are peons who can't be trusted with sharp implements.
> which you can't in Go
Because the core Go team assumes and asserts users of Go should not and can not be trusted with anything more complex than a wooden cube with rounded edges large enough that it can't fit in their mouths.
Would you please drop the nasty rhetoric and not do programming language flamewars on HN? They lead to harsher, dumber discussion, and ultimately cause a forum to destroy itself. Learning from those past lessons was one of the motivations for starting Hacker News years ago, and it's not like we want to forget them now.
Edit: we've had to ask you half a dozen times in the past not to post aggressively and uncivilly on HN. Would you please fix this? Reviewing the site guidelines should make it clear what we're looking for instead.
According to this usage, C has generic functions because you can, e.g., index into an array of ints. The whole point of generics is that the language makes available some form of abstraction over types. Go does not have this feature. It has some built-in operators that can operate on multiple types. Most languages have this. The arithmetic operators in C, for example, can take operands of many different types. This is not "generics".
>Because the core Go team assumes and asserts users of Go should not and can not be trusted with anything more complex than a wooden cube with rounded edges large enough that it can't fit in their mouths.
This can't really be the right explanation given that generics are now being added to the language. In any case, this kind of mean-spirited speculation about people's motives is borderline trolling, IMO. The Go team have publicly stated their reasons for not (initially) putting generics in the language. Unless you have some inside info suggesting that they're lying, I'd refrain from saying this kind of thing.
> The whole point of generics is that the language makes available some form of abstraction over types. Go does not have this feature.
Go absolutely has this feature. Just not for you as a user of the language.
> It has some built-in operators that can operate on multiple types.
Go has multiple builtin functions which abstract over type parameters e.g. close() can operate on any chan, append() or copy() can operate on any slice. They are not operators (symbolic or keyword).
> Most languages have this. The arithmetic operators in C, for example, can take operands of many different types. This is not "generics".
It's overloading, +(int, int) and +(double, double) are separate functions under the same symbol. It's an orthogonal feature, so much so that there are plenty of languages which do have userland generics and don't have overloading.
> This can't really be the right explanation given that generics are now being added to the language.
The core team has been saying they're considering the feature / issue pretty much since the language was first released. You'll have to excuse me if I don't hold my breath.
>Go absolutely has this feature. Just not for you as a user of the language.
That seems like a roundabout way of saying that it doesn't have the feature.
>It's overloading
You could just as well regard Go's close, append, copy etc. as overloaded.
> The core team has been saying they're considering the feature / issue pretty much since the language was first released. You'll have to excuse me if I don't hold my breath.
They've come out with a specific generics proposal. I'm not sure why you would think that they're not serious about implementing it.
> You can if you're part of the core team and implement them as builtins. Go doesn't have userland generics, because users of Go are peons who can't be trusted with sharp implements.
It looks like you are in golang core team's minds. You appear to be able to judge intent. Impressive quality you have here.
> It looks like you are in golang core team's minds. You appear to be able to judge intent.
Not at all, Rob Pike stated it fairly explicitly if in somewhat milder terms:
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
But that is exactly the point - you would have added the best proposal available to the language vs. waiting till all issues are worked out with generics. Go 1.0 has been released 7 years ago, in that time a ton of happy Go users were able to use Go as it is. All the discussions on generics clearly show, that the Go team is interested adding them, but that all proposals so far had distinct shortcomings.
Personally, I am very happy with what Go offers today, I would rather keep a small and simple language, which allows me to concentrate on doing work, rather than trying to keep up with features added to the language. I am not even sure, I want generics to be added to the language, until they come up with a really great concept which maintains the simplicity of the language.
ML (1973) and CLU (1975) introduced generics to the world, followed by a myriad of approaches to implement them across multiple languages, so plenty of time.
No one claimed, that languages with generics don't exist. But if you follow the discussion about generics in Go, Russ Cox did a thorough discussion of all the proposals on the table and why they would mean giving up some of the core traits of the Go language. As soon anyone suggests an implementation not colliding with the core Go goals, the Go team probably would pick it up quickly.
Or phrased it in another way: with all the years of experience on generics, they still got them wrong when implementing generics for Java, e.g. with the type erasure.
Not at all, he focused on how Java and C++ do it, and later on the Go 2.0 proposal admited that they were a bit closed minded to look for how other language implementations work.
"In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."
I'm not sure what GP meant by real world issues, but they do have some specific constraints that they want their implementation of generics to achieve, one of the core team members tried to nail down the details over a period of years but there were issues with the previous proposals before the latest one. So I don't think the issue is parametric polymorphism or not, it's more about things like keeping fast compile times, avoiding boxing value types, etc.
The one thing that I do like about Go not having generics early on is that it became a language for lower-level work with many good libraries. I wouldn't quite call it a systems language like they like to say but I like its positioning which filled a void.
Now when Go 2 gets generics it will be used for all kinds of applications and frameworks, which is fine but isn't really filling a void. I expect the ergonomics (e.g. wicked fast compiles to single binary) to be better than current app dev langs. I stopped using Go and look foward to trying it again. Even then I don't think it will be one of my favorites and likely used for smaller work. Long-term I want Pony to succeed. Medium term I'll take Kotlin, Elixir, or Dart.
Uses of https://golang.org/pkg/sync/#Map are objectively harder to read than if they allowed a library to implement a type that conforms to builtin map[K]V. Instead you have to learn method names and cast return values, and builtin maps don't even support those methods.
Is there any example of a successful large open source project that's truly lead by the community, and not a handful of people that decide what goes in and what doesn't? What would such a model even look like?
I think the key component of such a model is a clearly communicated method where the community can participate in the project direction, and evidence that the method is being respected. It's been brought up in this thread but Rust's governance/RFC process comes to mind. https://www.rust-lang.org/governance
In theory yes, in practice though, there's a lot of projects where the lead maintainer has vanished or doesn't spend a lot of time on it anymore making the future of the whole project and its forks debatable. Most forks aren't interested in taking the lead. The only exception I can think of was the nodejs fork, IO.js, which ended up being more of a political move and a kick up the arse for the Node team. A similar kick was given to the NPM team when Yarn started making waves.
The Rails/Murb drama a few years ago is another good example I think. A bunch of people decided they didn’t like the directions Rails was going in, created a competing framework, and after proving the concept was sound ended up introducing a lot of the unique features back into what became Rails 3.
Not sure - I thought it got significantly better with C++11/14?
Admittedly it also became rather more complicated, but the changes were generally for the best I thought?
From my five years of learning and using C++, I still have no clear picture how move semantics and rvalue references work. (I “kinda” get it, but am not confident about it). It seemed more and more convoluted every time I tried to study about it. The complexity created by implicit and explicit copy/move constructors is just insanity for me...
> From my five years of learning and using C++, I still have no clear picture how move semantics and rvalue references work. ... The complexity created by implicit and explicit copy/move constructors is just insanity for me...
This is the best argument for move from C++ to Rust instead. No "move constructors" whatsoever, move semantics are the default and are always performed via a trivial memcpy. There are explicit .clone() operations, like in some other languages, or the move construct can implicitly copy when the type is a POD-equivalent that makes this possible (identified by the Copy "trait"). Very simple, and nothing like the whole C++ move mess.
If you mean intrusive data structures, Rust just doesn't support them. Everyone still manages to write software in Rust just fine. (There's some early support for immovable data with `Pin`, but that's only exposed via async/await at the moment.)
Move constructors are one of the worst parts of C++ and being able to write movable, intrusive data structures is absolutely not worth the cost. If you do need one, Rust shows that it's better to sacrifice movability than introduce move constructors.
This is not uncommon -- there are some things you can do in C++ that you simply cannot do in Rust (either safely, or at all). Usually there's another, better way to achieve the same overall goals.
> Move constructors are one of the worst parts of C++
Move constructors fix a narrow problem, which is, when you have something like a vector append, how do you copy over all the previous elements as a "shallow" copy rather than a deep one?
In C with realloc(), it's just assumed that memcpy works for that. With C++03 and earlier copying the elements could very well end up duplicating everything on the heap for no reason, then discarding the old copy.
How does rust do this? What I am reading from googling is that every assignment is a move?
Yes, every assignment of a non-Copy type is a move. (Copy types are basically primitive ones like ints, so not vectors etc.) The compiler prevents you from using the old variable at that point.
Since Rust always enforces that a move is a memcpy, a vector reallocation is just a realloc() like in C.
The convention in the Go libraries is that even when a package uses panic internally, its external API still presents explicit error return values. -- https://blog.golang.org/defer-panic-and-recover
I belive this means that using this mechanism similarly to try/except in Python is not possible (I write 20 lines in go so this is quite a wild guess)
It's a dangerous type of articles that deliberately turns community against Go team, based on misunderstanding or plain false accusations.
Go team said many times that generics are technical issue, not a political one. (see [1] by rsc (Russ Cox from Go team))
There are also stories like experience report of Cloudflare outage due to the lack of monotonic clock support in Go that led to introducing one. [2]
The way how Go team handles potentially tectonic changes in language is also exemplary – very well communicated ideas, means to provide feedback, and clear explanation of how process works. [3]
Plus, many people in Go community do not want generics in Go (at least, in a way they're implemented in other languages). Their opinion also matters.
In my opinion the monotonic clock example cuts against this argument - originally the core team was extremely dismissive of the idea, even though it had been shown to cause pain for a lot of people: https://github.com/golang/go/issues/12914#issuecomment-15075...
It's a great example, as it shows the complexity of the discussed topic and attempt to justify any changes to the language/stdlib. In the above-mentioned comment rsc replies to another Googler, not to "community".
So we have
a) Google having problems with lack of monotonic clock in Go
b) Go team reluctant to break API and break promise of
compatibility without really serious reason
c) community feedback in form of well written experience
report, explaining how serious the issue is for the community
d) immediate reaction and efforts to find a solution (without breaking API)
Even though team was dismissive of the idea, feedback from the community made them to change their minds.
Or, you can view this from another angle, that the golang authors yet again disregarded previous established work in the industry for the sake of avoiding hard work in the language and the compiler. It's no wonder the time package in golang is garbage compared to established offerings in mature languages like Java and C#.
It's no wonder that a new language is lacking the maturity and features of an old language. Of course you can attribute it to 'avoiding the hard work', but that's the same reason you don't live in a crystal castle. A more salient criticism would highlight what they have been spending their time on, rather than pointing out that they haven't spent much time on something.
The logical thing to do is to build on what other languages did, and use established practices. Comparing what Rust did to golang when it comes to generics for example is enough to show the mentality of the golang authors refusing to look at established work.
I disagree, as I remember a lot of talks and articles from Go team members where they discuss in detail established work in other languages – on GC, on language evolution, on generics and so on – and deliberately learn from them to avoid same mistakes.
The fact is Go team learns from other implementations and it's clear from talks and articles. Another fact is that you're accusing them in having the "mindset of refusing to look at established work". Those two facts don't get along together, that's why I disagree.
First, Go had package management – it just has been optimized for monorepos.
Second, since early days of Go, they said that acknowledge their Google monorepo bias, and can't implement package manager that works for everybody without understanding what people outside of Google need.
They consciously gave time for community to grow and mature and get enough feedback before implementing proper solution, while actively studying package management solutions from other languages.
I definitely can't call it "refusing to learn from established work". Of course, they could've just copy existing suboptimal solutions without properly giving it a thought – that's what most of the languages do, after all – but that's where Go choses another path.
because implementing package manager well is not a trivial work. I had no problems with vendor (godep), later I used Dep and now modules. There are still some problems with module handling for vscode (gopls) but that will be sorted out over time. If you don't like the way Go is going, what stops you from using Java with maven, gradle or whatever...
Crisis is always a great opportunity as the pressure motivates people to ship, while also to do their best, to really consider any and all options, everything that might work.
Sometimes, of course, this does not work out for the best.
He doesn't build the whole argument on it. Generics just happens to be one of the most polarizing, most talked about subjects in the Go community. It's no surprise the author alludes to it. In his own words:
> PS: I like Go and have for a fair while now, and I'm basically okay with how the language has been evolving and how the Go core team has managed it. I certainly think it's a good idea to take things like generics slowly. But at the same time, how things developed around Go modules has left a bad taste in my mouth and I now can't imagine becoming a Go contributor myself, even for small trivial changes (to put it one way, I have no interest in knowing that I'm always going to be a second class citizen). I'll file bug reports, but that's it. The whole situation leaves me with ambiguous feelings, so I usually ignore it completely.
I'm going to risk being labeled an ~incompetent dev~ or whatever but learning golang was seriously a breath of fresh air compared to literally any language I have ever tried to grok before.
Everything felt like it was there on purpose. It always seemed like there was a "proper" way to achieve something. Being told to use this opinionated formatter was like removing a 40kg bag after a bush walk. You never have to worry about if you're writing Go "the right way", because it's extensively documented what that way is.
Generics is an awesome feature, writing C# is my day job so sometimes I miss it, but I have full faith in the designers when they say it will be in the language in a "Go appropriate way". The last thing I personally want to see is Go being handed over to the community to be designed by committee.
I completely agree. When I first started writing Go (coming from Python/C#) I complained an awful lot about what felt like pointless hamstringing of functionality. Go is a simple language, and you don't get many toys. It also feels very verbose at times, and forces you to think a lot about doing things which seem automatic in other languages.
However, as time went on, I noticed a few trends. Firstly, forcing me to think more revealed that what I had thought was automatic was more automagic - Go forces you to take responsibility for what your code does to a far greater extent than other more convenient languages, albeit not to the extent that C/Rust does. Secondly, I noticed that code written in Go tends to do what I expect it to pretty much all the time (with the occasional except of async stuff). Sitting down and writing a program in Go often results in something that actually works the first time it runs, and has far fewer runtime surprises.
As painful as it is to do numerical computation in Go sometimes, I have a very high level of confidence that I can look at a program, deduce with some accuracy its runtime memory usage & footprint, multithread it easily, and reason (successfully) about possible runtime failure modes and behaviour. This is something I find difficult if not borderline impossible to do in Python, especially utilising the standard '10 layers deep' stack of numerical computing libraries.
Interesting, that I saw a similar pattern like "at first I complained, but as time went on I found some benefits" quite a lot.
It can be that your learn a language better, became more comfortable with the way it must be used: say, stopped writing code in Elixir the way your used to write in Python.
But the other thing is that it's in our human nature that we tend to look for something positive in bad situations we exposed to for a long.
Say like, PHP was a fractal of shit, but when you use it for a long you will notice that it will make you more aware of what functions to use and how not to fall into some undocumented craphole, be more responsible, and do not take it for granted that some function would work flawlessly. The obvious benefit from the shitty situation.
That's like the idea of planting a bamboo plant, then jumping over it every day. It's a nice idea, but you won't ever be able to jump over a two story house. Instead, you'll eventually catch your foot and tumble. Likewise with the minefield. The result, is that you'll eventually step on a mine. At best, you can use a simulated minefield as an exercise, then use mine detection equipment and proceed with caution.
My enjoyment using any given language always tends to grow as I get more productive with it, even if I have a more general dislike for the language itself.
Making computers do things is fun (usually)! The programming language is (almost) immaterial - depending on the task at hand of course.
Despite myself, I've even found myself enjoying JS in the few times I had no choice to avoid it. shudder
That part about JS is pretty much the story of my career - I started out completely hating JS, was forced to use it enough to get more familiar with it, and eventually eventually started enjoying it to the point that it's now the primary language I write.
(I still hate it occasionally, but there's a lot more joy these days.)
The initial turning point was probably around the first release of jQuery...
[Javascript isn't actually that bad of a language. It really is about 90% of Scheme plus prototype inheritance, which takes some getting used to but ... eh, it's as good as any other option. The problem was that it was one of the battlefields between Microsoft and Netscape and has some really hideous scars in the landscape.]
I did not write this. My fingers did not type it. Nothing to see here. Move along. I'm a hedge.
Well, the basic parts are not that great. Plus, it has almost nothing of Scheme (nothing more than e.g. Python has of Lisp), that's just an old wives tale. It just has closures and that's it. Scheme is a Lisp, whereas Javascript isn't [1]. Not sure what Brendan had in mind when he was inspired by Scheme, but the end product is nothing like it.
Coercion rules are crazily bad. No integer type is stupid. Prototypal inheritance nobody much cared for (where nobody = very few). An empty array is not falsy. And several other stupid decisions besides.
The only reason it wasn't that bad, is that it wasn't big enough to be bad. It was just a handful of features plus a tiny (and bad) standard library (some math, some string stuff, etc). Everything else people had to build on top (and usually the did it badly).
This. Plus folks are always comparing to their previous experience, which is likely to still be a subjective and also hard to compare at all too. It's like apples (Go) vs oranges (C#) vs broccoli (Python).
Mostly machine learning for distributed sensor networks (ie: smart meters). Deal with lots of time-series data, state space modelling, some recurrent neural nets (etc). Our shop has a 'golang only' thing going, which means that I end up having to reimplement algorithms in go sometimes from scratch.
nice area, I would be interested to learn more into this type of problems. Recently, I became for interested in power electronics / smart grids / energy, and looking for the ways to get in touch with people working on such problems, learn more and join any company on these domains.
It depends on what you want to do as this industry is HUGE. Do you care about metering and working at the solar panel or wind farm level? Or maybe on the actual energy markets that commit and dispatch all generation in a region? Or maybe the vendors that write the software for those markets? Or you could work for the utilities or IPPs that own the generators...or the state public service commission that control state plans. There is also FERC and NERC. There are companies that sell energy storage systems...the list goes on and on.
I saw on the Adacore website a success story for another company that sells smart metering products, so it is nice to see all the work in this space. There are plenty of major vendors as well with smart meter products and the accompanying software.
What company do you work for by the way if you can say?
I agree, the greatest thing about Go is all the stuff it left out. Which is still annoying sometimes coming from more expressive languages (no exceptions! no generics!) But thanks to the simplicity and the common format standard it is so easy to read and understand. If I want to know about an edge case of a library function, I just dig through the code in my IDE until I have the answer. In most other languages I'll hesitate to do that, either because it's hard to read the code, or hard to access it.
I rarely have to think much about how to write the code itself, just about the actual problem that I'm solving. Once I know where I'm going, there's really only one way to write the code for it. Reviewing and using a co-workers code is also a breath of fresh air.
I think languages fall on a spectrum with regards to both typing and expressiveness, and it's not good to be on either end (e.g. PHP vs Scala or C vs C++) the designers of Go were very disciplined in walking that line and struck a great balance. I'd hate to see that undone by turning it over to a committee which results in a million compromises that turn a language into a Swiss army knife of features. It needs that strong guiding hand and the discipline to say no most of the time. Go has become my favorite language, I just wish I get to use it more in my work.
I rarely have to think much about how to write the code itself, just about the actual problem that I'm solving.
Bingo. This is also what the designers of Smalltalk, Ruby, and Python were trying to achieve. This is the opposite of C++, where I find that I'm thinking about the how all the time. (And at least 25% of the "agile" process time is spent on this activity in explicit reviews.)
> The last thing I personally want to see is Go being handed over to the community to be designed by committee.
I'm thankful for exactly that. Go is developed by Bell Labs people, the same people who bought us C, Unix and Plan 9 (Ken, Pike, RSC, et al). They took the time to think through all their decisions, the impacts of said decisions, along with keeping things as simple as possible. Basically, doing things right the first time and not bolting on features willy-nilly simply because the community wants them.
> Go is developed by Bell Labs people, the same people who bought us C, Unix and Plan 9 (Ken, Pike, RSC, et al).
Exactly and it comes in line with other research languages, namely Newsqueak and Limbo, both relying on channels for concurrency. I hope their other work will also find their way into every day usage though.
My experience is the same. I spent a couple of weeks playing with Go and had such an easy time with it. I can go back into the code I wrote and understand exactly what's going on. I can't say that about most languages I tried. I can even make sense of other people's code in large, complicated Golang codebases despite having little development experience.
This probably scares people whose livelihood depends on managing complexity in other languages. If anyone can get up to speed quick, anyone can potentially make the program that eliminates the need for that complexity.
I don't know that Google had this in mind while developing Golang, but they stand to benefit from commoditizing development. This is fine for someone like me who has zero interest in it as a career but does use a lot of scripts and plugins for creative work. Right now it's $10+ every time I want to do something with music/video/art where free or included stuff doesn't work or doesn't exist. If every DAW, video editing suite, and art/graphic design program had a scripting language as easy to use as Golang, I would never need to pay for add-ons.
The markets would still exist, but they wouldn't be as lucrative. Pricing would go from value to commodity.
> The last thing I personally want to see is Go being handed over to the community to be designed by committee.
There is a point about diversity to be made here. Different design models will each have their strengths and weaknesses, and the design spaces each opens up are not going to be fully explored if one model prevails. So I'm glad there's a language like golang with a coherent centrally-planned vision behind it in existence. It's also good to see more community-driven models get to do their thing. We'll see over time how each develops and what problems they best solve. It does seem to me to be a 'let 100 flowers bloom' type situation.
A bit of a bland centrist view perhaps, but with systems as complex as programming languages and their associated libraries, ecosystems and pragmatics, it's really hard to know what works. Best to experiment.
> Being told to use this opinionated formatter was like removing a 40kg bag after a bush walk.
I think the gofmt approach is becoming an unofficial standard. In the JS world 'prettier' has taken off and I think most languages now have a community anointed formatter (and new languages are likely to have an official one).
Strongly agree. Go feels great to write. I hesitate when designing a new project in C++ because I need to figure out the right / clever way to implement something. Go feels a lot more straight forward where I can just write it in the one way it's designed to allow. It gets me writing code a lot faster than other languages.
My experience is different. Believe it or not the type checker is actually inferior to Python + mypy. For example it is possible for variable to be nil, even when it is not a reference.
Types, it has int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64 and also float32 and float64.
If you use anything besides int, int64 or float64 you will just have a lot of fun casting.
math.max() and math.min() only work on float64, you can easily use it with different types, and actually it is discouraged to use it with integers. So you need to roll your own if you want to work with integers. But ok, it is math so when you do math apparently you should work on floats because that's what scientists do, but then why it doesn't work with float32? You actually have to cast like
result := float32(math.max(float64(left), float64(right))
If you want to convert a string to an integer, you have nice strconv.ParseInt() where you can specify among other things a bitSize, great, but the resulting type is still int64, you will need to cast it. What about other types, using them is a nightmare, if they were not supposed to be used why not just have int and float type?
If you try to implement a data structure that stores objects, you either have to duplicate the code, use interface {} (looks like that's most common, but then you no longer get help from a type checker) or use generator (this seems best to me, but it is just automation of the first approach).
I don't understand why Go is getting so much good opinions, it is not an enjoyable language to program. The supposed killer feature which were go routines and channels are kind of meh and nothing that you couldn't replicate in other languages. Seems like people like its simplicity, does that mean the other languages are overwhelming?
It's okay, I like elixir's take on things more and I question how suitable Go is for maintainability of large projects. But it's okay. It will improve as more developer tools and language features come online.
I personally don't see Go and Elixir's primary domains as being equal, or one a superset of the other. So there's some argument to be made about the region where they overlap, but for something inherently based around fault-tolerance and distribution, seems to me that code written to run on the BEAM will be smaller and clearer and therefore more maintainable.
If I'm interested in building low latency and highly available web services it seems to me that both Go and Elixir are reasonable choices. How are they not the same primary domain?
But in any case, Elixir allows a lot more clever code. In my experience working on legacy software, clever code in a dynamic language is error prone and hard to refactor and maintain. Static typing can help, though, I've found this especially true in functional languages where you have a super smart developer do something clever that's hard to understand 5 years later.
Because it is standard, in these discussions, for someone to quote Rob Pike to say that golang was designed for the lowest common denominator of developers.
Personally, I like go a lot for writing services and console applications.
A common HN trope is that Go doesn't have enough features (Generics usually) and gets in the way of '10x'/'competent' programmers expressing their genius - unlike Haskell (or some other advanced language for advanced minds)
What are the advantages of having a committee involved in the design of Go? In the case of C++, based on the threads that I read on HN, I see people being unhappy about the decisions taken by its committee.
> learning golang was seriously a breath of fresh air
This is completely irrelevant to the topic. Go, as currently implemented, could be the bee’s knees for the whole world for all I know; it’s not relevant to the discussion. To sidestep like you did and pretend that this is some form of criticism of Go itself is disingenuous hijacking of the discussion thread.
A slice isn't an array. A slice is a view into an array.
You don't look through a window in your house into the backyard, and plant a tree in the backyard by fiddling with the window. It's the same with arrays and slices in Go.
If you want to insert an item into a slice, insert it into the array (by copying to a new array and adding your new element to it while copying), then creating a new slice which includes your addition.
edit: (adding for clarity) In a lot of programming languages, whether they use slices or not, arrays are of a fixed size and must be copied to a new array if you want to add elements. Some languages have some syntax that makes it feel like you are modifying an array in-place, while doing the copy to a new array behind the scenes.
> edit: (adding for clarity) In a lot of programming languages, whether they use slices or not, arrays are of a fixed size and must be copied to a new array if you want to add elements. Some languages have some syntax that makes it feel like you are modifying an array in-place, while doing the copy to a new array behind the scenes.
It's that or some magic with larger-than-needed arrays that automatically grow by a bunch extra every time they hit their boundary to make appends faster, while blowing up memory use and making append performance unpredictable.
Lots of (especially) scripting language hide this behind automagic and you see tons of append-in-a-loop where it's not really necessary, as a result.
[EDIT] had insert two places I intended append. Me need coffee.
Yep, you're right, that's the transparently resizeable array thing, and it's exactly how Java's ArrayList class gives the feel of a resizeable array while it actually manages fixed-size backing arrays for you. That's why I linked the source to that class. :)
This is absolutely not best practice. It's perfectly idiomatic to insert an item into a slice (without the copy shenanigans you describe). The slice will manage the copy if necessary.
That's fine as long as you don't mind if the underlying array is modified. As the parent points out, a slice is a view into an array and there could be other views into the same array.
.... that's really it. Will it have the same backing array as it did before you did append? Maybe, maybe not. Should you care? Absolutely not, and if you do, you're probably doing something wrong.
Security consultant here. I have audited many codebases in many languages. Go is by far the easiest language to audit: it always looks the same, it's not too verbose, there are no generics or OOP. Coincidentally it's always the most secure as well. My take on this is that it is easier to see logic problems because it is easier to read, understand and reason about. On top of that the standard library does so much for you with crypto and security.
Generics and zero-cost abstraction can and are concepts that are often abused instead of used when it makes sense. I hope thay Go will never support generics because I sincerly believe it might mean the end of the language
I'm curious, how will they end Go itself? My understanding is that they always planned to have generics, they just didn't make 1.0 and now the Go community has sort of adopted the lack of them like a badge of honor.
Also, I have no expertise in your field so forgive me if this is a stupid question but wouldn't generics be easier to check over since they allow there to be only one implementation of something for all types versus in Go, a bunch of different implementations of the same thing that you have to check and might have subtle errors?
> "My understanding is that they always planned to have generics, they just didn't make 1.0 and now the Go community has sort of adopted the lack of them like a badge of honor."
As soon as Go adopts generics the community will turn on a dime and pretend they always loved generics (and perhaps even invented them.) That's how these sort of things typically play out. See also, copy/paste on iphones. When only android had it, copy/paste was a misfeature for idiots. When iOS implemented it, copy/paste became the best thing since sliced bread.
Pretty sure those are almost all of the most popular statically typed languages (absent C#, which has about the same verbosity as Java). I'm sure I'm wrong if you're willing to lower the threshold for "popular" enough.
I’ll opine that C# is much less verbose than Java in practice - language features like properties, reified generics, functional-like programming with Linq, tuples and its jilt-in support for async APIs mean you can be surprisingly succinct.
Agreed 1000%. Java is backward compared to C# - bolted on half baked features that are the opposite of elegant, a standard library that is built on the principle of 'why use 10 lines when a 1000 will do' and the tools/IDEs are a generation behind C#. As is the runtime speed.
I agree. It is very easy to learn and inspect. Sometimes I just look at function definition and it is immediately clear. This is so far the easiest to read language I've seen. I don't think generics has to be a problem if their use is very limited. I would like to see generics to work like code generators, something like using m4 to generate the similar type, sharing common methods but having a different core type in their implementation. It would be nightmware to have full C++like generics or even preprocessor macros.
void * is the C's equivalent of interface {} it has little to do with generics although it can be used to work around lack of it.
Perhaps there are other ways, but in C you have parametric polymorphism functionality through macros. That's why min() and max() are not functions but macros and they work with any type that can be compared, not just float64.
The article uses "Google" instead of individuals names to make the actions taken seem like sinister actions of a faceless corporation.
My interpretation is that Google employs a tight-knit group of people that work on Go and collectively are the BDFLs of the language. This isn't that much different from most large OSS projects, although it does seem likely that this core team weights the opinions of those that they interact with daily (ie, other Google employees) over people they barely know.
The last two paragraphs of the article address exactly that issue - that it's hard to tell whether the direction of Go development is decided solely by the Go core team or by Google as a corporation.
Which is nonsense, and is equivalent in this case to "i never bothered to ask so i'm just going to assert some stuff that agrees with my viewpoint".
They could have just asked.
In fact I can answer this for you, since I was the relevant director (IE Go directly reported to me)
It was driven by the core team, and more particularly, the leads and what they want to be trying to do.
I have provided precisely 0% of the vision there.
Further, the org/etc they belong to has changed (a very small number of times) over the past 10+ years depending on the Go team goals, not depending on Google's goals.
(IE when their goals have changed, or the org goals changed, Google has put them in a place that aligns with their goals, not tried to align them to the goals of the org they belong to)
People are stepping around trying not to offend anyone, but it is no secret that Go was created for Google to solve their needs, it supposed to be a simple language that even a fresh graduate out of school could pick it up quickly and work with it. It is also very opinionated for example regarding the formatting or (at least initially before vgo was introduced due to outside pressure) to work well only in a mono repo scenarios.
Google makes it free to use, so it will be easier to recruit people that already know it. Many companies do that as well, the difference is that still generally no one uses their languages. This one is different, because well it's Google. The Go itself isn't really that great language, but there's a lot of hype behind it. I wonder when it will die out, but I guess it will be a while.
Google is getting tremendous value from Go, on multiple fronts.
For reference, Go was meant as an alternative to Java and C++, to develop distributed systems. Given the direction that Java went, acquisition by Oracle and $10B lawsuit against Google, it is a well worth having an alternative.
Go team is actually incredibly open and approachable if you meet them at conferences. Google (as a company) has little influence on the language design design.
Its needs has shaped design (obviously), but it's not like there are requirements outside of the Go team. Go is heavily used in Google, so it's a natural dogfooding process, but that's it.
I don’t really get this distinction. “Google as a corporation” isn’t a thing, it’s a collection of people, some of whom happen to be the Go core team. It’s likely that due to being close to Go developers at Google there’s a bias towards implementing features that would help those people, but I very much doubt the subject of genetics in Go comes up at board meetings.
Implementing features Google needs sooner, rather than later, is one thing, but at some point, Google-the-profit-driven-corporation's needs will contradict the broader Golang community's needs, at which point the question is who "wins"?
What will, on a long enough timeline, come up in board meetings, especially as Google fails to meet Wall Street analyst's expectations, and as the ad-tech space evolves, is how much of Google-the-corporation's money to continue plowing into broader community things, like Golang at all.
Hopefully, by the time that happens, the community will be strong enough to persist, and I use Golang professionally, so I have personal investment for that to be true, but the possible eventuality that it'll end up being in the situation Java is currently in, makes me nervous.
Google the corporation is represented by upper management, the people by developers. The question is if upper management at Google takes any influence on the development process.
I don't know the real relationship between Google and Go, but Go is very much a product of its current core team, who are (AFAIK) all veterans of Bell Labs (Robert Griesemer?) and all have the same Bell Labs approach.
Bell Labs invented Not Invented Here syndrome. You can know this to be true because there is no way this group of people could have the syndrome so bad any other way. The other side of the coin is that they are very, very good. You can know that because they do a lot of interesting, novel work without obviously (or obviously without) having looked at any other research in that field.
Take for example, Chris's comment, "(The most clear and obvious illustration of this is what happened with Go modules, where one member of Google's Go core team discarded the entire system the outside Go community had been working on in favour of a relatively radically different model. See eg for one version of this history.)"
The second link there is to https://peter.bourgon.org/blog/2018/07/27/a-response-about-d... From that: "[RSC and the dep-pies] discussed dep at the GopherCon contributor summit. Matt Farina says that [Russ Cox] said [he] could do better if [he] went off on [his] own and built something. ... The clear impression was that Russ wasn’t going to engage with the committee, the research [the committee] had prepared for him, or the prototype product of that research, dep — at least, not yet. The clear impression was that Russ had his own hypothesis about what a solution would look like, and that he was interested in validating those hypotheses, by himself. ... Russ decided to implement his ideas on his own, and make a proposal without us, and without telling us that’s what he was doing until it was essentially done."
This is what I'm talking about. If your ideas suitably mesh with their philosophy, they may be adopted. If they do not, the Bell Labs team will ignore them completely. And if they think the problem is a problem (and they don't, in many, many cases), they are quite capable of doing an end-run around you and producing a solution which satisfies their perception of the problem.
Go may or may not be Google's language. Go is the Go-lang team's language and you will go where they want you to go, to adapt MS's old slogan. To an extent, it's similar to Perl; one's success as a Perl programmer depends entirely on your ability to hold your mouth right and successfully simulate Larry Wall. Perl is not a DWIM language, it's a do-what-Larry-would-mean-if-he-wrote-what-you-wrote language.
[^1] I myself do not endorse the technical ideas in that post. "[Something] does not support using multiple major versions of a program in a single build. This alone is a complete showstopper." is true. The fact that most tools don't do it---and have to live with the resulting pain---simply means that it's hard, not that it's not necessary.
For what it's worth I'm a semi-grey beard (20 years in) and
I love golang. For me it was like going back to being 8 years old on my Commodore Plus/4 and really enjoying writing code again.
It needs close parenting. Java has been ruined by the push to include everyone's pet feature.
> Java has only things that were proved to work in other languages.
But they still somehow keep finding ways to make them not work so well when implemented in Java.
C# may move faster, but its design team is also much more methodical about ensuring that new features have good ergonomics. In Java, I tend to feel surrounded by hacks that were hastily slapped on in an effort to keep up with C# and, increasingly, Kotlin.
Idk, have you seen the interfaces with default implementations in latest C#? Also duck typing? Both are mistakes IMO. First missteps I feel like I've seen C# make.
If by "duck typing" you mean dynamic, then I don't know what you're complaining about. It has a very niche set of use cases where it is needed. If people are abusing it then it's on them. There is no good or even alluring reason to use dynamic outside of it's intended purpose, so I don't feel like it's one of those "shiny hut dangerous" features you see in some other languages.
Hmm, I do a lot of C# programming, including very language-y low-level stuff, and I'm not sure I completely agree.
I appreciate that by moving faster they get more stuff into more hands faster, but they definitely have a lot of hackish solutions with poor ergnomics outside of the narrow scope they were originally intended for.
If you will: the language features have a clear purpose but a general implementation; and outside of the narrow purpose the designs usually feel pretty poor.
E.g.:
- LINQ/expression trees don't support most of the C# language, and new language features are usually without equivalent expression tree. This isn't a full lisp or F# style quotation, but a pretty narrow window that's not easy to use outside of linq-to-sql style usages.
- LINQ trees are again intrinsically inefficient, since the expression trees compile not to a statically shared expression, but to a bunch of constructors (i.e. looping over even a medium sized expression is bound to be slow); and they're not equatable, so it takes a lot of effort for a consumer to detect this case leading to overly complex (and hard to reproduce correctly) hacks inside stuff like EF.
- LINQ is restricted, but the restrictions are fixed, not customizable. That makes it a poor fit for DSLs, including stuff like Entity Framework, because there are usually lots of expressions your DSL can't support, but there's no way of communicating that to the user. Also, if you use expressions as DSL, you need to follow C# semantics, which isn't trivial; witness gotchas in ORMs surround dealing with null and equality.
- lambdas are either delegates or expressions; not both, and this isn't resolved via the normal type system, but by special compiler rules, making it hard to do both, and leading to type inference issues such as that var f = (int a) => a + 1; cannot compile.
- Roslyn: very poorly documented, and ironically very dynamically typed to the point that many casts or type-switches are necessary but finding out what types there are and what they do is generally a matter of trial and error since the docs aren't great. Ergonomics are poor in other ways too; e.g. dotnet is xplat, but the build-api is not - i.e. it's clearly not dogfooded. Also: totally not integrated with expression trees, which is at least mildly surprising.
- string interpolations are unfortunately quite restrictive (compare with e.g. javascript, where this was implememented much better), and intrinsically and unnecessarily inefficient (at least 2 extra heap allocations, and usually lots of boxing, and the parsing the compiler necessarily must do is not exposed in any kind of object tree, but instead reserialized to string.Format compatible syntax necessitating re-parsing at run-time). Also, like expression trees, this was really hacked into the language, so, e.g. you can't participate in other normal C# features like overload resolution the way you might expect, extension methods plain don't work, culture-sensitivity can be a gotcha: basically this works for immediately evaluated expression, but is tricky elsewhere.
- razor (not strictly C#) is hugely complex, and has a very impractical underlying model. Compared with e.g. JSX which is trivial is (ab)use creatively, and which uses mostly language-native constructs for control flow, razor makes it impossible to use even basic features like methods to extract bits of common code; lots of basic programming features are reimplemented differently. Instead of passing a lambda or whatever, you have to deal with vaguely equivalent yet needlessly different stuff like partials + tag helpers.
- optional parameters are kind of a mess (no way to enforce named args, no way to cleanly wrap optionals, restriction on compile-time constant, interaction with overloads can be suprising); tuples are too (names are dealt with differently than everything else in the language, no syntax for empty or 1-elem tuples, no way to interpret arg lists as tuples or vice-versa, no checks on nasty naming errors like swapping order); equality is a mess (how many kinds are there again?), lots of apis are disposable but should not be disposed but for others it's critical, no good way to compose disposables, huge ever expanding api without practical deprecation path is a pitfall for newbs, no partial type inference for generics, no unification of all the various func-and-action variations means billions of pointless overloads (and sometimes per-API ways around it); tuples and anonymous objects are sort of redundant, but not entirely; no good way of implementing equality/hashcode/comparability and yet easy way to detect misused non-equatable types.
I mean, I respect their choices here, and there's a tradeoff with lots of benefit too: they're really quite fast-moving, and I want those new features ;-). But it's not without costs; they definitely aren't "much more methodical" or anything like that.
Thank - I hope I don't come across as too bitter - I really do think there's an upside to all those limitations. I'm just past the exuberance of thinking that because it's so actively developed, that all these flaws are eventually fixable. It's a fast lifecycle, and probably at some point it'll be too impractical to continue as is, and then we'll just jump ship to some slimmed down alternate with a good transition story - and that's fine. So far: so good.
C# has made some serious mistakes: reified generics (which has basically destroyed simple language interop on the CLR and makes it an unattractive target for language implementors), and recently, async/await. Both of these help in some ways, but have costs that are higher than the benefit and much better alternatives.
Java is not trying to "keep up." It is intentionally slow-moving and conservative (this design goal was set by James Gosling when Java was first created), and only adds features once they have been proven in other languages for a while.
As former Java dev, returned into .NET world, I don't consider it a mistake, the CLR was designed with multiple languages in mind, and there are plenty of options available, even if Scala devs failed at that.
On the other hand, what I consider a major mistake from Java side was ignoring value types and AOT compilation since it's inception.
Had Sun blessed such features since the beginning, and many use cases for C and C++ wouldn't be necessary.
Value types add complexity and they weren't necessary in 1995. They only became necessary due to hardware changes circa 2005. Similarly, AOT compilation has only become attractive for the kinds of applications people use Java for only recently, when startup time became important for serverless. The lack of neither has caused Java lasting damage; what has is the domination of the browser on the client, but that has affected all languages.
As to baking variance into the runtime, I think this is just a bad idea, which is so far used only in C++ and .NET, two languages/runtimes with notoriously bad interop (it's not just Scala; Python and Clojure have a similarly bad time on the CLR, as would any language not specifically built for .NET's variance model). It is simply impossible to share a lot of library code and data across languages with different variance models once a particular one is baked into the platform. This is too high a price for a minor convenience.
Specialization for value types (which are invariant), is another matter, and, indeed, it is planned for Java. Perhaps some opt-in reification for variant types has its place, but not across the board. I am not aware of other platforms that followed in .NET's misplaced footsteps in that regard. Those that are known for good interop -- Java, JS and LLVM, don't have reified generics.
What's worse is that it's a mistake that cannot be unmade or resolved at the frontend language level. Even Java's big mistakes (like finalizers, how serialization is implemented, nullability and native monitors) are much more easily fixed.
Value types aren't "necessary", but they would have been valueable at day 1. The GC heap is simply inefficient; not necessarily because of the GC (which indeed is harder with massive multicore), but simply because of the per-object memory overhead.
There's a reason java had built-in value types from day 1, because it made sense even back then.
Frankly, I think both java and C# kind of got this wrong. There was an overreaction against the C/C++ of the day, and whereas the GC turned out brilliant, the idea that it's not even necessary to express the notion of references/pointers/values etc. was too much; and the idea of a single type system root (object) is similarly dubious, and then particularly the idea that that root type isn't the empty type. Object has semantics, and that was a mistake, because it contributes to the bloat. I'm totally happy with ignoring those features 99.9% of the time, but having them completely unavailable makes those 0.1% cases extremely expensive. (I mean, I think those things are slightly changing, but it's slow going).
That was necessary for performance back then. User-defined value types weren't, and Java has done well without them.
> Object has semantics, and that was a mistake, because it contributes to the bloat.
I think most of the RAM bloat is due to the GC trading off extra RAM for speed rather than object headers, and I'm not sure trading off complexity for headers was right 25 years ago (JS is doing fine on the client without value types). What changed was the performance characteristics.
As to object semantics, it may be a fixable mistake. The goal is to get value types without today's object semantics while preserving a single class hierarchy at the same time. The Valhalla team thinks that's achievable.
Heap (over)use by the GC is effectively a scaling factor. How large the underlying objects are remains remains relevant: if your objects are twice the size necessary, the GC will "bloat" that further - and this tradeoff isn't entirely GC specific, other allocators such as those used to implement malloc/free have related tradeoffs to make; free() won't release memory to the OS either (and memory, released or not, may end up evicted from RAM anyhow).
Of course it is relevant. I'm just saying it isn't the decisive factor that makes this an absolute necessity, as evidenced by the fact that much the backbone of the largest software services in the world is Java. There are lots and lots of tradeoffs in runtime design, and it's important to look at the whole rather than at one decision in isolation and point out that it's important. As a whole, the criticality of value types for Java is relatively recent.
I advise you to read Mesa/Cedar report on the impact of garbage collection algorithms available at Xerox PARC bitsavers archive, EthOS or SpinOS experience with Modula-3.
All of them refer that having value types alongside GC had a relevant impact improving performance.
All systems designed before Java was a thing.
Or since you refer to JS, the paper about SELF's design.
Even Dylan was designed with AOT/JIT and value types support, which is relevant here given that its domain was being a systems language for the Newton. That politics killed it is another matter.
Oh, I don't deny that value types would have helped performance back in '95, just that they were absolutely essential for Java. Smalltalk/Self and Scheme/CL didn't have them, and those were probably Java's greatest influences; I don't think VB had them, either. Also, in its first four years, before HotSpot was ready, Java was interpreted, so it had bigger performance problems.
I don't know why there was no emphasis on AOT back then. I guess they started with interpreter/JIT, and then there just wasn't much demand for AOT until now.
What user-defined value types did CL have in '95? Also, are you sure about VB having had them then?
As for AOT, there may not have been sufficient demand from Sun/Oracle. I only joined relatively recently, but we generally do expensive things only if we believe they have a huge benefit or in huge demand, and we believe it can be long-lasting. The assumption is that any new feature will require maintenance for 20 years, taking away resources from other things. So if something is expensive, even if it's cool or some people could find it very useful -- we don't do it. The assumption is that the ecosystem is large enough that others can, and will.
Well, we can disagree about when AOT and value types became critical for Java (and I would argue that they clearly weren't back then because Java has done spectacularly without them), but Java is getting both soon.
> Meanwhile I can already enjoy them elsewhere. :(
That's perfectly fine. We think that our priorities are right for the workloads Java is used for (e.g. people care more about a low-latency GC like ZGC, and deep low-overhead in-production profiling, like JFR, than about AOT).
Mutable structs are not exactly value types, but Microsoft has always preferred control over simplicity (after all, they pushed C++ really hard). I won't say whether that philosophy is right or wrong, but it is very different from Java's.
There's a difference between useful, and even very useful, and absolutely necessary. Clearly value types weren't absolutely necessary, as Java did well without them (and JS still does).
Gosling said that his goal was to have nothing you can somehow live without (I don't know how well early Java lived up that ideal, but that was the ideal). Hardware changes made user-defined value types absolutely necessary for workloads Java wants to target.
Being there since the beginning, I wrote my first Java game in 1996, early Java only did well thanks to Sun's marketing weight and it being free beer vs the alternative of having to pay for a compiler like Delphi.
I was around, too, and I don't think that was at all the full story. Marketing has never been solely responsible for the long-term success of any product. There were other languages that were very heavily marketed: VB and C++ (and FoxPro, too, I think) by Microsoft, Delphi, and about a million other RAD tools. Being free was one of the reasons, but so was targeting the web, and Gosling's design of wrapping a JVM that gave people what they wanted (dynamic linking, fast compilation, garbage collection) wrapped in a language that felt familiar and non-threatening. I don't remember what were Delphi's issues, but a big project I wasn't involved with at the same organization I was working at circa 2002 (I was all C++ back then) did Java on the server and Delphi on the client. Maybe Delphi didn't have a good server-side story?
Async/await is fantastic compared to not having anything at all. It's a big downside compared to other things you can do (cf Go, Erlang), and hard to get rid of. It's the classic case of getting easy short-term benefits at the expense of long-term costs. It's main benefit from an implementor's perspective is that it's better than nothing and very cheap to implement quickly. Just as .NET has lived to regret reified generics[1], it will live to regret async/await.
[1]: Maybe not C# programmers, but there are easier ways to do a single-language runtime.
The other main alternative to `async/await` with the Promise<T>/Task<T>/future<T>-paradigm is Rx's Observable - but let's not pretend that because Observable<T> is capable of handling every situation that Promises can doesn't mean we should use it everywhere - Angular tried that when they changed their HTTP client library to use Observable<T> instead of Promise<T> because they wanted to expose retries and other nifty logic - but in doing-so made the learning curve a vertical brick-wall for everyone involved (and now we can use Promise<T> with support for retries and better error-handling anyway) in addition to adding a very hard dependency on a fast-moving project (e.g. Angular 6 comes with a load of RxJS compatibility shims because RxJS radically changed their API design (again)).
Go's goroutines seem okay - but I don't like how much control they take away from the programmer. For example, last year I worked on calling-into a black-box C DLL from a Go program and we learned the C DLL had code that was actually simply terminating the thread inside of it (by design!) because the author of the C DLL assumed ownership of the thread. That caused a problem for us because Go's goroutines are scheduled by the Go runtime and it will never let you give-up ownership of a Go thread - and I couldn't see how I could use my own thread (e.g. getting a thread from a native OS call to keep it outside of Go's control) with goroutines. The project was almost DOA after we learned this, fortunately we convinced the author to always return instead of killing the thread. I'm not sure if anything's changed in Go since then that would have made things easier for us. But since then we haven't used Go for anything new. The only reason we used Go was because it gave us binaries that "just worked" for Windows, macOS and Linux without having to worry about Java, .NET and other dependencies - but I wasn't happy about the ~20-30MB-sized executable output.
What we're doing in Java is letting you choose, for each sequential computation, whether you want a heavyweight (kernel) thread or a lightweight usermode thread (like a goroutine), and if you choose the latter, you can use your own scheduler (schedulers are written in Java, and aren't a part of the runtime). No promises, no observables, no async/await, and no thread control issues.
C#'s language is much better designed IMO. Can anyone compare LINQ and Java's streams and not pick LINQ? Feels much sloppier in Java and Java came second.
That's probably what I like most about it. But that aside, the naming of tasks seems much more consistent in C# than Java. Java already had streams and maps and mangling those names makes searching for documentation a pain.
> It needs close parenting. Java has been ruined by the push to include everyone's pet feature.
Oracle is moving to a faster cycle of development. There are some of us who strongly feel that some of their decisions are based less on what's best for the language and more on catering to the popular-and-loud crowd. I'll never forgive the addition of `var` to the language.
That this very thread exists suggests a certain “C++ ification” that happens to languages.
I really respect the slowness of the go maintainers in adding new stuff. I also suggest that we all ponder our tooling some; Writing java with emacs or vi is a materially different experience than using eclipse or idea and var style type-inference seems almost silly with those tools which do it for you.
>Writing java with emacs or vi is a materially different experience than using eclipse or idea and var style type-inference seems almost silly with those tools which do it for you.
It's not so much the extra typing that's the problem, it's the extra reading. All the stuttering is visual noise.
This. If you want to revolutionize the profession, come up with something that helps with reading as much as modern IDEs help with writing. My answer is that boilerplate should be generated somewhere else and largely ignored.
IMO, boilerplate source code shouldn’t be generated at all — the tool chain should directly emit the required object code. And code generation shouldn’t require a different language — or special comment syntax.
Depends on the toolchain. Everybody knows how to generate ugly source files, but it takes more effort to add AST nodes during compilation (or symbol table entries with types plus object code) and might lead to errors nobody understands how to fix (because you can't read the declaration of the thing you're trying to interact with).
>I'll never forgive the addition of `var` to the language.
I'm inexperienced with Java and didn't know this existed until I saw your post. It seems like a nice shorthand to me. Can you explain why you don't like it?
Misconceptions mostly. Java developers are some of the most conservative developers around.
And there you have the answer to why Java hasn't evolved that much, or when it did, why it needed to care deeply about backwards compatibility at the source level. It's because Java developers want it that way.
The irony is that people are now abusing "aspects" and "dependency injection" via frameworks like Spring that bring everything but the kitchen sink, but then the language becomes effectively dynamic, as via those annotations all static type safety goes out the window.
Therefore I find it interesting when Java developers complain about Var, because the ecosystem has in my opinion bigger problems. Compared with annotations Var isn't a problem because Var is statically checked, so here we have a clear case of missing the forest from the trees.
> Java developers are some of the most conservative developers around.
You're right, there are loads of conservative Java developers. It's one of the the things that makes me love using the language.
> The irony is that people are now abusing "aspects" and "dependency injection" via frameworks like Spring that bring everything but the kitchen sink, but then the language becomes effectively dynamic, as via those annotations all static type safety goes out the window.
> Misconceptions mostly.
But drop the strawman argument and borderline ad hominem. It'll do you better.
Spring is terrible in that sense, and you do find professionals arguing for Spring and strongly typed language. That said it's not an argument I've ever heard before being part of a Spring centric shop.
I'm of the mind that it is un-Java like. Whether or not there is a "Java" as a philosophy is not the hill I'm trying to die on.
Consider these contrived lines of code:
```
String first = someMethodCall();
var second = someMethodCall();
```
The first provides more useful information at a glance. I don't see any value in the "nice shorthand." Typing out "SomeStupidClassName" has never once been a material bottleneck in my 15+ years of programming, but now we have this new option that caters to the lazy, and in doing so makes life harder. Now I have to either ban it, embrace it, or come up with some ruleset around when you can and can't use it. Why? Someone can't be bothered to type a few extra characters.
It reminds of my grandfather, a former professional ball player, but one who played back in the days where there weren't these multi-million dollar celebrity ballplayers pissing and moaning in the press about just how hard their life is. He used to call those types "high-priced cry-babies," and I really feel a tinge of that in dealing with folks who just wholly embrace `var` and give folks like me shit for having criticisms of it. Perhaps that's just my old blue-collar showing but your convenience in writing a handful of characters simple will never enter into my considerations.
I love using Java, I love the addition of things like streams, the Optional type, etc. My sibling comment is a little right, and very wrong. Lots of Java developers have a certain conservatism about them, I'm mostly certainly one. But there are large reasons to hate it.
> I don't see any value in the "nice shorthand." Typing out "SomeStupidClassName" has never once been a material bottleneck in my 15+ years of programming
There are a few rather glaring spots that I've noticed.
First, when you're refactoring, you've now got to edit every spot where a variable of that type is created. At the very least, when you're just renaming a class, your IDE can help you, but you still create a lot of diff noise. At worst, when you're splitting up a class or otherwise shifting responsibilities, you may end up with a whole lot of yak to shave. This is not just an annoyance; it's a latent code quality problem, because it creates a disincentive to clean things up.
Second, I've seen it become an impediment to writing clean code in the first place. I have encountered situations where it's clear that the author wrote
because creating intermediate variables would have meant having to type out (and burn precious screen real estate on) some ridiculous set of 60-character generic type names.
I've even seen it result in situations where data gets copied or otherwise processed excessively, because the explicit type annotation resulted in an upcast that shed some useful feature that a subsequent developer shimmed back in because they trusted the explicit type annotation and not a function's actual return type.
So yeah, I decry your assertion that this feature is about being lazy. This feature is, at least for me, about code quality.
It would appear that all of your code quality arguments are "people are too lazy to actually write good code." So it's not clear why you would decry that assertion. I don't have a horse in the race one way or another, but you're not refuting mieseratte's objection.
Sure, in a contrived example where the return type is not obvious, you perhaps shouldn't use var. What about real world examples where the type is more often than not obvious?
Generics should have been in Java (and Go) from the beginning.
Those surely are not the proof that Java adds "everybody's favorite feature". I think the parent means the newest Oracle projects (Valhalla, modules, value types, streams, and so on).
(Technically, I think Java modules have been floating around in weird, likely broken suggestions since before Oracle bought Sun. As far as I could ever see, the primary design constraint was always, "NOT OSGi!")
And they failed. They did the best they could, but the fact that they were added onto the language later (plus the desire for backwards compatibility) means there are lots of gaps and warts in what they wound up implementing.
At its root, the real problem here isn't "reification good" vs "reification bad", per se. Haskell has an excellent implementation of generics, and erases types far more aggressively than Java does. C# also has a very good implementation of generics, this time based on reification.
The problem is more that Java's particular mix of design decisions resulted in a language that operates at cross purposes with itself. Once upon a time, back in the beginning, Java was a reflective language. Being reflective requires type information to be available at run time, though. When Java decided to use type erasure in its implementation of generics, they created a really bad set of interactions: They kneecapped reflection, so now you can no longer call Java truly reflective; it's only partially reflective. You can no longer effectively and accurately reflect on what have come to be some of the most-used classes in the language. And, at the same time, they forever sealed a rather important corner of the type hierarchy off from generics. They also delayed a bunch of type checking until run time - after types have been erased - so that certain things can just never be made to cleanly type check. Meaning you also can't say Java is any more than partially generic.
IIRC, the choice was between Java 5's version of erasure or drastically modifying the JVM, with the likely result that the new Java would be incompatible with the old Java. (Like C# has done.) This was considered unacceptable at the time. (Unlike C#.)
Erasure is important in allowing interop from other JVM languages. Reified generics would be nice from the perspective of just writing Java, but the interop story on the JVM is one of its best selling points.
I'm of two minds of that, these days. I came from C# so of course reified generics were of course better, of course--but these days I would rather have them in Java more and in C# less. I often find myself wanting to write the moral equivalent of `IFoo<?>` in C# and end up having to have two separate interfaces, etc. just to have a way to handle a list of a thing that I end up working on in an abstract manner. (Though I'd caveat that that is more of a gamedev-related concern than in Java/Kotlin, which I write for work.)
I do appreciate, though, that when Microsoft decided to do generics for C#, they did so decisively. These days, when C# gets a new feature, it seems like it's the complete opposite of decisively delivered.
I don't mind that. It can even be an aid to organization - all the generic stuff goes in the generic class, all the stuff that doesn't rely on that can go in the base class. But it would be nice to use something like <?>. Too bad generics don't inherit implicit casts, like A<int> to A<object>.
I do mean that, and I do mind it a lot when I'm so used to just being able to erase the generic.
There are performance implications to type erasure, to be sure, but when our computers are mostly all future machines from beyond the moon, I'm more interested in minimizing the impedance between my brain and a solved problem.
This is the only non-bad consequence of type erasure I'm aware of. On the other hand, a lot of code I've written in C# would be impossible or severely hacky without type retention, like "new T()", "T is Thing", finding all classes that are derived from T, etc.
TBH, I'm just as happy passing a factory method in for that sort of thing. Because it allows you to do both and pick the one that makes the most sense for you in a given situation.
It's not about being careful (they are--but always with the baggage of backwards compatibility), it's about not having a soul.
Java has made a U-turn in adding streams and related functional features on top of a language that used to be strongly for OOP (actually defining the meaning of OOP for a generation of developers.)
These different paradigms together make for code that does not read the same no matter who writes it.
I love how Go code usually ends up being extremely similar, no matter who write is. (Actually, Kubernetes is a counter example for this: Go should have gone even further in forcing style.)
If you think that "all code reads the same" is detrimental to developers, you are conflating the idea of developer (problem solver) to that of coder (keyboard typist.)
> Java has made a U-turn in adding streams and related functional features on top of a language that used to be strongly for OOP (actually defining the meaning of OOP for a generation of developers.)
So, by adding a feature which works extremely well with OO and enhances the language they have no soul? That doesn't make any sense. Javas soul is being a blue collar language. It leaves the experiments to other (JVM) languages and takes the parts which have been shown in the field to be useful for many cases.
Go on the other hand is a half-finished Java, produced for the sake of saying "We are Google, OF COURSE we have our own language".
I think what dullgiulio is getting at is when working with large, code bases and teams Java's multiple paradigms leads to more difficult code to maintain. I've been experiencing this on my current project and its completely new development.
One package is written in the "new" functional style, another is written in "old" object oriented style, other parts use classes for nothing more than name spaces to house static methods. The reality is it's already a mess.
Java is a blue collar language in the eyes of people interested in language design and to be fair, some of them recognize its merits. The majority of programmers care more about their domain in which they are working and less about the language used in that domain.
This is absolutely the biggest wart in modern Java. We now wrap all methods that can throw checked exceptions to rethrow unchecked. This works well for web services that can just retry or rollback and report an error to the client on transient exceptions, but is insufficient for many applications.
> Java has made a U-turn in adding streams and related functional features on top of a language that used to be strongly for OOP (actually defining the meaning of OOP for a generation of developers.)
But isn't that just an inevitable outcome of aging? The only way to never age is dying young.
And practical use of Java had stopped being textbook OOP (which means using classes to model the world and putting your logic where the data is) long before the streams API etc came along. Actually the shift away from textbook OOP integration between logic and data already started when the beans pattern drowned us in getters and setters, which already happened when Java generics were still the Pizza language sitting as a draft on Odersky's desk. I even suspect that lack of generics was a confounding factor in that shift, because when you look at a pre-generics codebase (which I happened to do just yesterday) you will find that the brave men and women back then spent an enormous amount of boilerplate just for hiding untyped collections behind typed facades which is a form of using OOP features (meant for modeling the world) for program structure, which is basically what post-textbook-OOP Java is all about.
A design philosophy. Some sort of measurable practice that influences design. This is not limited to Java. Design by committee has been destroying the maintainability of many languages.
"Some sort of measurable practice", not anything remotely like a "soul", in fact invoking "you have no soul" usually means you lost the argument and are now desperate enough to say anything.
Streams are inspired by some idioms of functional programming. But they are not functional, and they cannot be made to be functional, because it is impossible to evaluate one without causing a rather glaring side effect.
Hey ! Plus/4 ! Finally someone who knows it too :) Since it was missing the C64 sprite abilities it made it even more interesting to code something cool on it. Though at least it could play some games like Ricky Rockman. :)
> For me it was like going back to being 8 years old on my Commodore Plus/4 and really enjoying writing code again.
Your comment is the problem with the Go community. I have seen a number of comments from Golangers that they want a "fun" language that helps them reminisce about the past. They also want to write a lot of senseless boilerplate because for them more typing is somehow about them reliving their past. And tracking down nil pointers... and writing containers for every concrete type.
The simple fact is software development has gotten more complex because business requirements have changed and Go does a poor job of addressing that with its limited feature set. The rest of the programming world has accepted that we need better tools whether it be toolchain stuff or language features. Hate on Java all you want but at least it, like most other non-Go languages, has realized the need for better tools in the toolbox.
Sure, and the "right tool for the right job" is still my mantra. Go is a very good at solving for an incredible amount of tasks in many problem spaces. Users will always want to bend tools to work in new places, and that's okay -- sometimes it isn't a fit.
Have some business requirements that make using Go a chore or a pain? Use a different language, or restructure the requirements.
> Have some business requirements that make using Go a chore or a pain? Use a different language, or restructure the requirements.
That's such a cop out answer when our industry is basically doing fad-oriented engineering. It's great when you can greenfield build a project but when you're taking over a project the selection of language has usually been decided. Or you know the management team has decided for hirability reasons to use X. Or just legacy requirements don't match reality eventually.
This is why we need more expressive languages than Go. Requirements change over the course of a projects life and what made sense in year 1 rarely makes sense several years later.
Google are bad at building developer communities (or at the very least they don't care about it much). They build things for themselves and their way is the Ivory Tower way of running a community. The overlords decide and their word is set in stone, the plebs should just accept the fact that their concerns and use cases are just too trivial and they should listen to the smart people at google and do it their way, which is the only way.
This isn't the first time google open sources internal tools, trying to build a community but really ignoring them completely. GWT, Closure compiler and of course Angular comes to mind.
Angular built a great momentum and community, and the angular team at google basically ignored most ongoing concerns to work on their next big project that'll fix everything (first it was Object.observe, then dart then angular 2).
Contrast google's handling of Angular to Facebook's handling of React (and react-native) - The routinely incorporate community influencers into the core team, they include other major corporations in their decisions and community, actively engage in developer relations to get feedback, explain controversial choices and build a community.
Sun's model with Java is even more different - incorporating major stakehodlers in the language into the actual decision process via the JCP.
Of course Google isn't the only ones who are bad at building developer communities around their open source. Apple and Amazon barely even try.
If google is Ivory tower, Microsoft is the Herbalife way of building a developer community - actively supporting influencers, providing official seals of approval and using a top down hierarchy of advocates. They do listen to community input a lot more, but Microsoft is still the overlords of all their projects.
I'm so tired of people purporting to speak for "the community" - especially when they diametrically oppose my own views. It feels a lot like they are co-opting me for their own agenda while simultaneously excluding me.
The things mentioned as evidence that community doesn't matter have a lot of buy-in from the community. Modules in particular are an effort that - at least from what I can tell - is heavily driven by non-Googlers (in particular Rog Pepe, Paul Jolly and Daniel Marti are people who put a lot of work into making modules actually work for practical workloads).
These kinds of pieces only make sense if you have an extremely limited view of who is or is not part of "the community" - in particular, if you throw everyone agreeing with the Go team out of that bucket.
Haskell is an excellent example of a community-driven language. It's more mature and advanced than most commercial offerings too, offering a superior type system, fast and efficient executables, lightweight fiber concurrency, software transactional memory, higher-kinded parametric polymorphism and many more features.
It's interesting that you bring this up, because I'd consider Go and Haskell as almost polar opposites. Go is a simple language which lacks expressibility but with strong opinions on almost everything from formatting to architecture, which leads to a streamlined (and refreshing) developer experience.
Haskell is a complex language, with an expressive type system giving you more tools and guarantees but I would call the learning / dev experience everything but streamlined.
I wonder whether the difference in the organisational structure (single entity vs community) manifests in the characteristics of these languages.
I think a big part of the success of Haskell is down to its language extensions. New features are introduced as off by default and can be opted into. This allows all kinds of crazy features to be introduced without really impacting users if they don't want to. It does allow the community to be quite experimental without fear of destroying things.
Haskell also has the motto "Avoid 'success at all costs'."[1] What that means is not that they want to fail at the things they set out to do, but that they want to ensure the language is never in a position where it's so important that certain behaviors or code be kept exactly the same because there's too much code that depends on it in the wild that they can't experiment with some new interesting feature in the next release. It is fundamentally an experimental language; while it is used for certain production applications, there's a sense in which the Haskell community+language simply can not ever become a top-tier language, by the community's design. If anything like Haskell ever does get into top-tier status, it'll be something that claims Haskell as a parent, not Haskell itself.
The "motto" (if it can be called that) is not "avoid success", it's "avoid 'success at all costs'" which makes the sentiment clearer: increasing adoption should never be a priority over principled design.
they want to ensure the language is never in a position where it's so important that certain behaviors or code be kept exactly the same because there's too much code that depends on it in the wild that they can't experiment with some new interesting feature in the next release.
Indeed. It's like a feature democracy where each library gets a vote on which features it finds most useful. Those that then get deeply embedded are clearly those that are most useful. Those features that aren't particularly used don't really get anywhere.
I sort of agree with that, but features often have externalities. For example, let's say I choose to use lambda case because it makes some of my code a little bit more concise. From my narrow point of view, that seems like a win. But then it's one more piece of syntax that external tools have to deal with, one more barrier to anyone trying to develop an alternative to GHC, one little piece of additional complexity to throw off Haskell newbies who want to read my code, etc. etc.
True. Those are all good points. I do still feel the upsides more than make up for it, but yes I am glad I'm not responsible for developing any external tools for Haskell!
The top 10 contributors are active and fairly well known in the community (I recognize 5 of the names at least), of 170 members with commit bits for the project.
I would certainly like a compiler that was fast enough to be usable with realistically-sized codebases. I'm writing this while waiting a couple of minutes for ~10,000 lines of Haskell to compile.
Ironically the community designed language is so complicated that programs can only be mantained by small teams of original authors, while the tightly controlled language is consistent enough that large groups can collaborate on one program.
SPJ and a fe wothers work at microsoft, but they are few compared to the community. Likewise, several high profile haskellers were at Standard Chartered, but I don't believe this was significant.
I would say the main drivers of Haskell these days are academics, PhDs and consultancies.
SPJ is a Microsoft Research Cambridge employee, but as far as I know that doesn't give Microsoft Research (let alone Microsoft itself) any undue influence over the language's direction, they can't gatekeep or ram things into Haskell or GHC.
It would still be corporate sponsorship and not a true community effort though. Not having to work to pay bills makes it much easier to run a large project.
Pretty sure SPJ does have to work to pay bills. It's just that they work for a research institution and can thus do part of their work in / on the project. In no small part because CS research was one of the use cases for creating Haskell.
And Haskell's market success is still very limited. This isn't a very compelling argument for community-driven language design. I think a better one would be Rust (to the extent that we can agree that Rust is designed by community) which seems to be getting a fair amount of market penetration given its age.
Haskell was designed by a committee of researchers from various universities, but the first(?) working compiler came out of a University of Glasgow project.
I've only followed the modules controversy tangentially, but I did go to a presentation by Sam Boyer, where he started getting emotional and muttering about organizing some sort of resistance movement. Russ Cox made it a point for Go's regex library to have bounded worst-case performance at the cost of worse average time, and Go's fast compilation times are a source of pride for the core team. That they would frown on a dependency management system like dep that is based on a NP-complete algorithm doesn't seem to have struck Sam as a total deal-breaker. In this respect I am fully behind Cox. As the OP says, the Go team has taste, and I want them to keep the language simple, sane and manageable, not a monstrosity like C++, Java and now sadly Python as well.
This could be done, of course. But how likely is such an approach to succeed? It would effectively create a new language and in turn to a new ecosystem. Why not just use a different language (Rust comes to mind) then?
First of all, Rust is quite a different language to Go, so I wouldn't consider it a drop-in replacement. But this also means, that those, who don't like too many features of Go might really be happier with Rust.
But adding features to Go by popular demand would make it a new language and a new ecosystem. So if that is what people want, they should go ahead with it. Even if it just is an engineering prototype, which showcases features that Google should add to the Go main line.
Because after his experience being forced to drop into BCPL from Simula, Bjarne sweared not to do it ever again.
So when tasked to write his distributed network application in C at Bell Labs, his first step was to build something that would bring his Simula back, instead of bare bones C.
Sure, but that's a different issue. If it doesn't gain traction then the decisions made by Google are clearly considered the best approach, at which point what relevance does the openess have?
> If it doesn't gain traction then the decisions made by Google are clearly considered the best approach
I don't think this is necessarily true. There are a lot of fuzzy factors that go in to making a language successful and it's tricky at best to get these right, even if openGo was a better language.
As a practical example, go effectively has one person who's full time job is keeping the testing infra happy. It'd take time and money to establish this in a go fork and going without it would make development of the fork much harder.
> It'd take time and money to [develop a language]
Isn't that part of the fundamental conundrum though? If your need is so great, and so common, and so beneficial to get solved then the cost of development should be less than the benefits accrued, and smart business would bring that money and time to bear on the challenge and solve it. As Google has done with the main project...
Saying there isn't time enough or money enough to solve the technical challenges sounds like saying those challenges aren't important, or valuable, enough to be prioritized. Which is to say our wallets are in agreement with Googles wallet on the issue of generics in Go.
The considered 'best approach' from the community isn't the best technical approach to the technical issues, it's the best balance of pragmatic solutions for produced benefit (ie ROI), that create a sustainable project/product.
A major thesis of the article is that even if something was successful outside of the core team, they would ignore it in favour of their own ideas. Go modules is the reference case.
Go modules is exactly the reference case, but let me expand on that a smidge, because that was when this become crystal-clear to me.
There's one exchange in a blog post linked from the article[1] about dep/modules that I think is illustrative of the entire issue (double >> are quotes in the article from Google/go, single > are commentary from the linked blog author):
>>Although it was a successful experiment, Dep is not the right approach for the next decade of Go development. It has many very serious problems. A few:
>>Dep does not support using multiple major versions of a program in a single build. This alone is a complete showstopper. Go is meant for large-scale work, and in a large-scale program different parts will inevitably need different versions of some isolated dependency.
>Russ [a member of the Go team] has asserted this from day one, and has brought several examples out in evidence, but has simply not convinced the committee that it’s true. Calling it a serious problem, let alone a showstopper, is a significant overstatement. The strongest claim that can be made on this point is that it’s a matter of opinion.
That, to me, is that. Go is Google's language, and Google said that for them, not supporting multiple versions of a dependency was a showstopper. The community read that and saw it as a point for debate, and the author continues to try to debate it in the article.
And that's the issue! It was not a point for debate. Google was being forthright. Google was saying "from day one" it was a literal showstopper, and the community seems to have read it as a figurative showstopper. Who was right in this instance is irrelevant; if the community wants to litigate Google's decisions rather than integrate them into their tools/patches/etc., then the community will not get those things adopted into go.
> Google was saying "from day one" it was a literal showstopper,
For a long time people on the C++ standards committee insisted that we need trigraphs because it had to support systems that didn't even have ASCII. We still don't have pragma once as a standard replacement for include guards because other people seem to compile on some crazy network typologies where it cannot reliably identify files. Taking every "literal showstopper" serious without questioning its merits gets you stuck with C++98 and a lot of quickly accumulating legacy cruft.
> Dep does not support using multiple major versions of a program in a single build. This alone is a complete showstopper. Go is meant for large-scale work, and in a large-scale program different parts will inevitably need different versions of some isolated dependency.
This has zilch to do with not being community led, so perhaps the complainers should fish for better arguments. Rust makes the exact same call wrt. this particular issue, and it's very much a community-led language, with a public RFC process.
Rust does support multiple major versions of dependencies in a build.
The only thing we don’t allow is multiple copies of dependencies that link to native libraries, and the -sys pattern means that this is rarely an issue in practice.
Yes, but the folks who are now complaining about Go not being 'community-led' enough were pushing for a module system ("dep") that does not allow for this, and being told that not allowing multiple major versions in the same build was indeed a major problem (and even a showstopper) with their approach. I'm just pointing out that this is clearly a bad argument for calling Go "not community led"! Sorry if that was unclear.
Reminds of Java modularity debates, where OSGI folks kept railing against approach Core Java people in Oracle were taking. In Java's case it took really long time for Oracle to prevail over self-appointed community leaders and modularity experts pushing OSGI and come up with better solution.
I am happy that similar kind of situation did not prevail for that long in Go's case.
IIRC, one of the major differences between OSGi and the Java modules work (Jigsaw?) was that OSGi had major.minor.micro.stringy version numbers and Java had to have giant.major.minor.micro.stringy because Java's versions would always start with "1.".
And the final Java module system is much less flexible than OSGi (which is much more than a module system, which in turn is both a strength and a weakness in this case). Or do Java modules support multiple versions of the same package?
What has happened in the past is that features developed in a fork make their way back to the mainline - iirc in the Java world this was a thing with the virtual machine, particularly garbage collection. If a hypothetical OpenGo can solve the generics problem in a satisfactory way then it could make its way back into Go. Well, licensing and such notwithstanding.
Why not? I consider Rust to be superior in almost all respects apart from learnability and compile times. So if these two downsides are not more relevant than the downsides of Go (in a certain context), I don't see any reason why Go could not be replaced by Rust.
It's not about learnability, compile times or anything like that - Go has a high-performance concurrent GC and Rust does not. There are whole domains, problems and the like that are basically unapproachable in Rust unless you're up for reimplementing half of a LISP system beforehand. (And people do - that's what the ECS pattern often boils down to!)
Ah, the Rust fanboys who vote down comments of anyone who criticizes Rust. They are actively trolling forums and spoil Rust for the rest of us. (IMHO, similar behaviour played a big role in the relative lack of success of CommonLisp, which is another language that I really like.)
So I know both languages, have chosen Go instead of Rust for a larger programming project recently, and have 30+ years of programming experience, so I feel somewhat qualified to answer the question you ask---if you meant it seriously, which I doubt.
People use Go for its simplicity, good tooling, good backwards compatibility, fast and modern automatic garbage collection, extensive libraries, and fast compilation speed.
Yes, Rust is hard to learn, puts a constant high cognitive load on its users even once they have learned it, is relatively fast moving - meaning code you write now will likely not be idiomatic in a few years from now -, and has slow compilation speed. It has many other strengths, as you rightly point out, but most of them will not be a reason for someone who uses Go to switch to Rust. For example, most people who use Go do not need or want to quench the last performance out of their CPU and are less obsessed with zero-cost abstractions. Many C++ programmers, on the other hand, might appreciate these features of Rust.
Rust and Go are simply not languages that compete with each other. Go is a competitor to Python, VisualBasic/Xojo, and various server-side scripting languages like Ruby and PHP. Rust is a competitor to C and some uses of C++, and maybe to languages like Ada and Haskell in some safety-relevant domains that do not require a formal language specification.
The OP could have just as well suggested to use Ada instead of Go. It is possible to write Ada like Pascal, making it almost as easy to use as Go, but the suggestion still doesn't make much sense.
I'm not a Rust "fanboy". I'm not even a Rust developer, but I know the language somewhat. I've had my time with Go, but wasn't satisfied with it. Some things I didn't like in Go are better solved in Rust (generics and error handling for example).
But thanks for replying seriously, even if you doubted that I wasn't only trolling (what was not my intent).
The post makes it sound as if Go being Google's language (not the community's) is a bad thing. I don't see where this sentiment is coming from as the strong opinions enforced by Go core devs is probably one of the defining features of Go.
As with many open source projects under benevolent dictatorship, this can result in streamlined and consistent features with long-term success.
There is an assumption here that the community is unified, you are part of the community, and therefore if the community doesn't get what it wants, then you don't get what you want.
Actually, the community disagrees on stuff, so you're unlikely to get what you want, regardless. The only way you always get what you want is if you write your own language. (Assuming you're skilled enough to implement it.) But a language nobody else uses is of little benefit.
There needs to be some core team that makes coherent decisions. Unless the community is tiny, the core team is not the community. There are inherently tensions. However it's unclear that Go's core team is doing worse at listening to their community than other language's core teams?
100%. Look at Ruby's community. It's a pretty great community in my experience, but at the end of the day Matz, Kokubun, etc. act as gatekeepers.
Democratic creation of software would be a disaster. There's so many philosophies of development and differing opinions that you wouldn't be able to make progress. At the end of the day the reason a language exists is to implement the vision of those who created it. The community is a labor force to implement, test, and verify the decisions made by the heads.
I totally encourage people to write their own languages though. A language no one uses can be beneficial to computing at large as experiments in language implementation. We all know silly languages like Brainfuck that you would probably never use for work, but can be useful learning tools.
It can be forked it if the opinions are widely shared and you think people will support you to run it better ? The tone of the article seemed to be casting shade relative to the institution funding it's development as the foundation of why there are problems ? It's normal in a large scale open lanuage there will be disagreement around direction of the project.
Many models around how those disagreements are resolved exist across many projects. You can choose one of them or build your own project and make your own decisions :)
I’m convinced that the people who just suggest forking an open source project with 100s or 1000s of man years behind it lack practical experience in building or maintaining large software systems.
Dismissing valid criticism with “just fork it if you don’t agree” has got to be some of the most useless advice parroted around open source.
To their credit, the Go team withdrew most of Error Values draft 2 [1] after a great deal of negative feedback. However, they've kept one unpopular change to fmt.Errorf().
We can hope that draft 2 of Error Handling [2] won't look anything like draft 1, for reasons such as [3].
> The most contentious point of the original design was the special-case handling of a trailing ": %v", ": %s", and ": %w" in the format string, which did not follow the usual printf model in which the meaning of % verbs is context-independent and all non-% text has no meaning at all. We will remove those special cases from Go 1.13.
At least, they removed the really ugly special cases!
> Python has always been Guido van Rossum's language regardless of who he worked for at the time.
(from the article)
I think the high-level point that Python has not been tied to a company is true, but it's not been true that Python is "Guido's language" for decades. Python has a very effective community process. Guido has certainly served as a tie-breaker and of course as BDFL (no longer!), but I see the language being largely steered by the community and community leadership.
I believe that solid (benevolent) dictatorship is better(faster, can make unpopular decisions, etc) than democracy.
I don't like any language, but tbh I hate go the least, and I believe this is because few very very dedicated people with enormous amount of experience can simply make unpopular decisions.
But Go is one in a long line of proprietary languages that those of us who have been around the block know to stay away from. Recently: Java was Sun's, C# is Microsoft's, Swift is Apple's, Go is Google's. With any luck, all will be footnotes in ten years. Those of us who knew better than to get invested in them will be fine. Everyone else gets a chance to learn something.
Each of the languages you mentioned is wildly popular, well run, and not going anywhere. If you chose not to learn Java a decade ago because you "knew better" you aren't any better off than you were before, given that this language is the core foundation many companies and open source projects, maintaining relevance for 20 years. Not learning Swift now means you won't be able to do effective IOS development. Not learning Go means you're missing out on the language that a lot of important systems software is built in such as Kubernetes and Docker and won't be able to contribute to them.
I think also it's inaccurate to call any of these languages proprietary. They are all open source and if you wish you can fork them and make your own. C# in particular is now run by the dotnet foundation which is nonprofit with elected board seats, putting it in a better place for governance than many community driven languages.
Whether Kubernetes is important for you, and whether Go matters as a language for implementing it is very much the question. Kubernetes was started in Java, and Kubernetes developers themselves describe it as incoherent mess of dozens of daemon programs in various stages of completion in the process of refactoring Java idioms into Go while introducing redundancies due to Go's limited code organization facilities.
You can fork them and become (or remain) irrelevant.
Ios development is a walled garden. C# is barely used except to target Windos. Go? Too early to say. If it did fade away, who would miss it, really? Its express design purpose was to be not powerful enough to make big mistakes in. Has it transcended that? If so, what is its purpose now?
Java has shown staying power, despite its shortcomings, although its connection to the Apache Foundation ("where projects go to die") offers just a ray of hope.
Unity is behind "Pretty much half of all games " on all platforms [1] I suppose it's possible there have been 100K Unity games written if you include abandoned efforts, education etc. No idea of mean LOC but that multiplies out to a lot.
Also Xamarin which if I read this [2] correctly owns a third of the mobile app market. I don't know whether that's credible but apps are a huge market.
It is possibly that could get you the 10 bn. It is quite certain that there has been more than 10 bn lines written on.NET Framework, probably far more, as for 15+ years it has been the dominant language for corporate apps built on the Microsoft stack.
On top of that.NET Core is growing much faster over time,according to their blogs, and it's cross platform as you know.
Swift has garnered little traction outside iOS development, while other languages are gaining traction inside iOS development (javascript mostly, but Kotlin is also trying to make headway into iOS development).
I won't put survival rating for it to be very big. I'm sure Apple would keep it for decades to come, but I'm wouldn't bet on it being anywhere near as popular as it is now in the future.
As long those languages aren't native, they won't be nearly as popular as swift. Swift doesn't need to be adopted outside of iOS development at all, because when people develop iOS apps the main choice will be Swift.
Swift is actually a very pleasant language, and apple provides some pretty good documentation although it could be better.
It's going to play out like this: Swift or Objective-C (for legacy codebases/people who already experts in objective-c and don't want to switch), and some other, slower languages that will always be second class.
> Swift doesn't need to be adopted outside of iOS development at all, because when people develop iOS apps the main choice will be Swift.
That would mean its fate would be tied to iOS, which is used only on phones built by a single company. When that company stopped producing mobile phones, or decided to use a different operating system, the language would die.
A lot of the tools in the modern ops ecosystem are written in Go, and that trend is not slowing down. Not even going to start to list them. A lot of devs also love Go and it has a very nice community. If you would have said this 5 years ago? Sure - but now it's past the 10y mark in age - I think it's safe to say Go isn't going to go away any time soon.
C# (and other .NET friends) aren't dying for a while, there's a large base of things written for multiple platforms on it, even though I think Microsoft isn't writing bunches of their core OS components in it.
Java isn't dying for a while either - anyone who still needs to support applets aside, or corporate applications written when Java was the fashionable thing to ship apps in, Android's install base will mean it's relevant for a fair stretch yet, even if OpenJDK fell into the sun tomorrow.
Swift I don't have much insight into, as I haven't done much with OSX or iOS in a while, and I have indeed not seen much uptake outside of those.
Which languages are you suggesting were/are good targets?
Python is technically not one company driving it but a bunch of developers on it are employed by large companies to work on it.
JavaScript is seeing a bunch of use in a great many places, but originated with one company's implementation and development.
Rust is pretty obviously one company's child, even though it is seeing decent uptake from other users. No predictions on whether it would survive said company dropping their work on it, though.
Experience can differ between companies and languages. I have been involved over the past year with Julia which is mainly ran and developed by the company Julia Computing. One might say that it is the equivalent operating scheme as the companies you mentioned. However, my experience with the community has been vastly different. If you go over the development issues list it is extremely satisfying to see how many are raised by the community and adopted into the language. In contrast to the omnipotent response of Google with the Go community. Having such differences implies that a company ran open source language can indeed be influenced by the community.
Having said that, just because it exists in Julia does not mean it can exist in Go. I just wanted to mention that there are exceptions to the rule which begs the question if it is a rule in the first place.
I agree completely, but good luck convincing people around here; I’ve had no luck for a few years now¹²³. It may be an age/experience thing – you might need to have been burned by it a few times before you learn not to build your house upon the sand.
Core Java systems are being written and updated at: Oracle, Amazon, Google, Netflix, IBM and almost all the Fortune 500 companies.
Java is also used by Facebook, Microsoft, Salesforce, Apple, and many other companies that aren't necessarily known for their Java development.
Java usually is the base for every core banking system that's not old enough to have been written in COBOL (and even those are being migrated to Java in many cases) or hasn't been written from scratch in .NET.
The same Java is also used by governments throughout the world.
If Oracle vanishes completely tomorrow, there's huge incentives for a community initiative to completely take it over ASAP. There's probably tens of billions of dollars in existing code bases that have to be maintained and extended.
Ten years is a very short time for a programming language. Python is decades old and hasn't hit its peak popularity yet. I don't know of a single mainstream language that has died, except for perhaps ColdFusion or ActionScript.
Depends on how you define death. There's ALGOL, SNOGOL, LOGO, Pascal, Visicalc, APL... They were all mainstream at a point, and they all still exist, but the userbase has become tiny.
Look at what happened to Kubernetes. They rewrote original Java code in Go and it's a mess because of Go's limited abstraction capabilities. Sure there's an element of rewriting in a new language and attempts to force idioms, but there's also the fact that Go literally hacks in special cases for generics, unavailable to users. They recognize the need for generics, but haven't implemented them, which is a problem for a complex project where abstraction might be useful.
Richer type system, richer concurrency, richer tooling, greatly advanced and advancing runtime, interoperability with multiple languages on the JVM, a naming system that doesn't cause frustrating workarounds a la the Kubernetes ObjectMeta / TypeMeta / "v1alpha1", "v1beta2", "v1" nonsense ...
Go is the language that taught me to really appreciate writing things in Java.
10 years is a long time in any business, and Java has been around more than 20. It is pervasive in enterpriseland; if nobody wrote a single line of greenfield Java code it will be around for decades. The COBOL of the modern era!
So not just all the big corporates but plenty of tech companies like Facebook, Uber, Spotify, Twitter, Linkedin, Netflix, Apple, Google etc all rely heavily on it.
Maybe they do now, in this decadent era of Lite beer, hand calculators and "user-friendly" software but back in the Good Old Days, when the term "software" sounded funny and Real Computers were made out of drums and vacuum tubes, Real Programmers wrote in machine code. Not Fortran. Not RATFOR. Not, even, assembly language. Machine Code. Raw, unadorned, inscrutable hexadecimal numbers. Directly.
I’m guessing you don’t use C or C++ either then because they were proprietary Bell Labs languages. You should probably rule out assembly too because they will have proprietary instructions for proprietary CPUs. Which means if you really want to be freed from the shackles of using proprietary tools when programming you’re now building your own hardware too, processors and all. Good job ASCII isn’t proprietary otherwise your computing device would have no compatibility with modern computers at all.
I know I get a little absurd towards the end but no more so than your remarks about Java.
Guess away, but both C and C++ broke out of Bell control very, very quickly by language standards.
Now they are driven by ISO Standards bodies, with many, many participants -- ATT not among them anymore, to my knowledge. Even in 1995 ATT had nothing approaching veto power.
My remarks lumping Java in with C# and Swift are transparent wishful thinking.
That was 10 years after it’s initial release and the spec hadn’t much changed from the specification defined by those Bell Labs employees. It wasn’t until 2011 that C++ saw some significant changes through the community. And in the first 10 years I seem to recall it was plagued by proprietary compilers having their own subtle behaviours.
I’m not saying it was all terrible nor that Go is managed better. But whenever there is a conversation about Go on HN people get so caught up in their own snobbery about how terrible they perceive Go to be that they lose all touch with reality.
The fact remains a language backed by a company is far less likely to die into insignificance than a language that isn’t. This is because it takes a lot for a language to gain momentum. You have a bit of a chicken and egg problem where developers won’t use a language without a good ecosystem, frameworks and community. But people aren’t going to write that if there aren’t already developers using it. This is where corporate sponsorship really helps.
Thankfully there are a plethora of good languages out there you can choose from. If you don’t like the direction of one language then you can use another. Or, alternatively, since Java, C# and Go all have open source compilers, you could fork and build your own community.
For C and C++, you can get effectively equivalent working drafts for free. For C++, you can actually compile the PDF that gets sent to ISO to put the official markings on yourself. Here is C++17: https://github.com/cplusplus/draft/tree/c++17 .
If you have a contractual obligation that requires to exactly use a specific version of C/C++, then you'll need to pay money for the actual specification. In pretty much any other situation, the drafts are sufficient and perhaps even better (because they will have incorporated some errata).
I'm sure you were perfectly aware of it, but C and C++ working drafts are almost identical (I think? Never bought an ISO version), and available for free from open-std.org. For example, http://www.open-std.org/jtc1/sc22/wg21/docs/standards
I like Go's concurrency approach and its concurrent GC implementation, but I personally don't like the language that much. It would be awesome if the runtime system could be factored out, and targeted by other languages.
I really wish Go had learned a lesson from Java: programmers will eventually want generics, and adding generics to a language that was not designed for them leads to new and unexpected obstacles.
The lack of generics makes Go uniquely unsuitable for functional programming, an unfortunate outcome when functional programming is the New Cool Thing.
1. Compared to most compiled languages the toolchain is very easy.
2. Reduced ways of solving things in the language itself.
My main dislike of C and C++ is that there are numerous ways of solutions for the same problems, this forces me to weigh the solutions constantly against things like "Is this a safe way, does it create fast code, will this way fit when my project is further or must I constantly refactor my code".
The extreme flexibilty of C and C++ are more a burden for me than an asset.
I personally also dislake C++'s standard library, lack of standard functionality in it and the difficulty of maintaining dependencies and the whole resulting build process with crosscompilers. GNU Make was not enough, CMake was not enough, GN is almost rocket science, then there is Meson. And choice of clang or gcc. Chrome/Fuchsia is an example of how complicated it can be. But does it have to be like that? There are definitely good reasons to use C++ or C in some cases (system, low-level, games etc) but C++ is a difficult language and it's not necessary for lot more other cases. Rust is yet another story, solving some problems but making new ones.
Go made me enjoy the programming again. And of course, managing modules is a very difficult task so it had to be done somehow and somehow really, really good and I think they delivered the working solution. Various previous attempts were bad for various reasons (Kubernetes can tell you).
There are many answers for why this won't happen, but one that does not usually get said out loud is that Go is Google's language, not the community's.
More precisely, its design belongs to a small number of capable people who are on the same page, with regards to a pragmatic, minimal-ish design. This is better than a "benevolent dictator," in that there are some checks and balances. It's also better than design by mass committee from the public.
No shit Google has the final say, they're the maintainers. They pay developers millions of dollars to make the final call on what's best for the language. Even if someone were to start a community fork there would still have to be a central board of governance.
Suggesting that we should split the community for a feature that you like in other languages is a dumb and lazy argument.
It is actually a good thing. The more forks there are and more complicated ideas and toys, the more mess and chaos. It can eventually kill the whole ecosystem.
I love open-source and libre software but there needs to be some central authority who oversees the changes and is forward-thinking. I actually love Go in the way it is and it wouldn't be it if you kept asking community what features to integrate. I am looking forward to generics as it solves some cases (containers holding items of a templated type sharing some common methods). But even if Go stayed the way it is, I would be fine. The point of open and libre in this project for me is that Google can't shut it down and that's enough for me in the case of prog language.
Please, if you want to make super-crazy-new community-driven uber-language which will change the world, make yet another and truly yours language or join Rust or whatever more community-driven language you like. Thank you! :)
The problem I have with the article is that, as an outsider, it reads like this is not about Google. It reads like this is about features that the author liked which were not merged.
In an alternative timeline where Go was not sponsored by Google, deciding what to merge into it and what to leave out would fall into some sort of "Core Team". In that timeline, if that Core Team decided to not include the features that the article author is describing, then we would have an article titled: "Go is The Core Team's language, not the community's".
I can only offer one answer. Maintaining an open source project is a lot of work. If you care enough about those changes, make a fork, include those changes, and maintain it yourself. If that's too much work, then you are stuck with what others able to dedicate the effort decide.
This is a difficult one. On one side we have a massive corp that is pushing it's own language and essentially steering it the way it wants.On the other, there's ever growing community that has its own ideas and suggestions on how the language should evolve. I'm too early in my journey woth Go to be able to have in-depth discussions on its functionality, however what I don't want to see it turning into another JavaScript,which is such a mess and free ride for everyone who thinks they have another good idea how to fix it.Unless there's a proper independent steering committee,I can't see this working too well without Google's ownership.
Yeah, that’s the case with any coherently designed language or software package. I don’t really have a problem with it. Now that they’ve come up with a workable module system, the issues of it coming from Googleworld are minimal. I don’t personally care about generics, and I like the error system so I find Go to be a great language to work in.
But that’s all just my personal opinions. What I found interesting was the callout of OpenJDK as a community-run language, but is it? I feel like OpenJDK is the worst of both worlds. Too much corporate overhang, and all the worst aspects of design by committee. But YMMV.
Oooph, from my perspective, this is the very same thing with clojure, and the single thing that drove me away from the language. Substitute spec/schema for vgo/dep, and you're telling the same story.
I value individuals and companies sharing their developments with a community. One has to remember though, to own something is different from benefitting from other's work, and if you want to have a say, you'll need to get your hands dirty, fork, and work.
Nice to read all the comments on this story. The story is about the how Go is not truly open source and community driven and the flame war starts on how Java does not implement generics correctly etc.
Guys: each language has it's benefits and it's shortcomings.
If we want (need?) more control over the language, can't we just fork it? Start a new language that is based on the current implementation of Go?
C# unfortunately has the same issue, but the effect is much more visible. Few people (I'm sure you can find examples, but given how popular the language is on Windows, it's a tiny minority) use C# outside of Windows and it's a shame. There is no good open ecosystem so even as a former Windows dev I hardly use it anymore.
The limiting factor for me, as a long-time C# dev now working outside of Windows, is that many of the tools I used haven't been ported to .NET Core.
Take Umbraco, for example. If someone were to port it to .NET Core and rewrite parts of it to use Postgres as a db option there would be zero reason to ever use WordPress again.
You're right, although I'm not sure if people necessarily go to their ISP or to the cheapest host possible.
Most of the time, in my experience at least, someone that wants cheap hosting and is using something like .NET or Python will Google "cheap python hosting" and see what is cheapest/recommended.
PHP tends to be the outlier, because it's absolutely everywhere, but I think the web has matured to a point where people will look for specific hosting for their choice of tech. Hell, back in my freelancing days when I used to rebuild broken WP builds, most people that weren't given hosting by their client chose it from looking up "cheap wordpress hosts".
When time came to replace my XML/XSLT based website by something else, just to get up with the times, I resisted to touch PHP, but in the end having it at my ISP versus the trouble of using something else won.
Nowadays I am able to use PHP 7.x, so I just keep using it there, and suggesting it for the less tech savy friends that want some kind of dynamic website.
Because while I do build sites in Java and .NET, I do accept that they aren´t that easy to set up at most ISPs, and cloud based one click solutions tend to be more expensive.
Out of interest, what do you use instead? I have been trying to stick with C++ because it seems the one language that isn't too badly dragged into platform-specific fan clubs.
Depends on the purpose. I write mainly Python at home and at work, sometimes create small to medium sized Bash scripts, use PHP for my personal websites and most web projects, sometimes a few lines of Javascript (command line, I mean, not as part of html/css/js), sometimes some Java at work, LaTeX macros for documents... If speed is paramount (rarely) I'll also write C/C++ or maybe Go, but those are not my forte.
So it really depends on the purpose of the project, who else works on it, what it needs to integrate with, etc.
Thinking about what I use, C# didn't even come to mind. I only used it when required in school or when interning in companies that run Windows. I've written some C# on Linux at home but it's not much more than my experiments with Brainfuck were...
This is absolutely true. In general I feel like it's good when a language belongs to a core team, but they don't appear to be learning from past mistakes. The core team put out a proposal for modules and then immediately implemented it with very little community feedback except what could be done after the fact without breaking much. There was huge backlash and they claimed they would do better and seek community feedback earlier, but now they require that you use a Google web service when fetching modules (I've turned it off) and shoved that out between two versions without any community feedback as well.
Strong central leadership is great, but leadership needs to actually listen to the people they're leading (and not just as an after thought).
> In practice we'll only get a chance to find out who Go really belongs to if Go core team members start leaving Google and try to remain active in determining Go's direction. If that works, especially if the majority of them no longer work for Google, then Go probably is their language, not Google's, in the same way that Python has always been Guido van Rossum's language regardless of who he worked for at the time.
This is laughable. Anything built within Google belongs to Google. There's no way the company will let anyone from the core team leave and take the language with them. Keep in mind, Google's version of Go and the community might be vastly different, as the former has different needs versus the latter.
A small group of programming language professionals with fifty+ years of shared experience developing compilers decided to keep on going with the language family they invented and extended.
I'd say they have fifty+ years of implementing the same language over and over. During the fifty+ years they studiously avoided learning what anyone else had been doing, and it shows.
People don't care who owns Go. They just write awesome software in Go such as Docker, Kubernetes, Prometheus, Grafana and VictoriaMetrics [1]. Go authors created simple, clear and productive programming language. Community-driven design for programming language may be disaster - look at incomprehensible C++ Frankenstein.
this is exactly one of the reasons i learned Go in the first place. without the backing of a major corp it’s damn hard to have major success. not impossible, or unheard of, just hard.
I am about to learn a new programming language and I decided against Go just because of this fact. I do not trust Google and reading this article just makes clear how critical the state of the language is in terms of control by the community.
Python looks most promising and I already worked with it, but I am not sure yet. Can anyone recommend a viable alternative for Go? Any web-focused language that is performant, modern andalready well used?
You should use a language based on how well it performs for your problem and domain, and the community around it. Not based on one article or because google is maintaining it.
No, but it's very likely that they'll make decisions that'll alienate the community and thereby cause it's ecosystem to lose more and more relevance. It has done that with other open source projects they stewarded.
It depends on what you mean by "the community". If it's a community of contributors, and Google pulls it into a direction the contributors don't want, then they can fork it and continue contributing to it. If it's a community of users, then they have no choice but to follow whatever the contributors decide. I agree you need to have major contributors on board with a fork.
There's more nuance to it though. Users eventually become contributors (at least some percentage are), they become and stay contributors when they feel heard and feel like they have the ability to influence development. That's what nurturing an open source community means. If you start alienating your non-core contributors they'll stop contributing, if you nurture and support your non-core contributors they might become core contributors. No body wants to work voluntarily on a project they can't influence - that's not a contributor, that's an employee.
It seems you misunderstood my post, like some others in this thread. I did not decide against Go because of the linked article, but rather because I have the same view on the language that is outlined in the article. I have a big problem with Google. And the fact that Google practically own Go is a red flag for me.
I'll suggest elixir. It's fun to build in and the community is growing (at least two companies with very large scaling demands use it). Don't get hung up about performance benchmarks, they're biased towards numerical algorithms; in practice if you are web focused your tasks will be IO bound and things like uptime process restart semantics and robust concurrency are more important.
What you learn from dedicating time to a new language, should not be the ability to program in it in 10 years, but to solve a problem you have now and hopefully learn some new concepts. Pick what will keep you engaged - it's the "learning something" that matters if you're not explicitly trying to solve something.
Are you saying that there aren't garbage collected languages suitable for network development ("moving bits from place to place") with a substantial community except Go?
I mean, Java and all the JVM based languages comes to mind. Python, Ruby, Node.js (assuming performance isn't at the top of your list, which for many it isn't).
Go has a small community compared to all of the other popular languages.
Both have helped myself and many other people write better code in "enterprise" languages. I don't mind my team using different languages for small bits and pieces that are one off things, or can be replaced without must effort. It keeps developers happy to have some autonomy. I only demand that it's well documented how to build it.
I'd say it depends on what you want to do with it. Go is, as far as I understand, mostly a systems programming language. A replacement for C, basically. That means it competes mostly with Rust I guess. (I'm not familiar with Go or Rust, though.)
Python is mostly an application programming language. It competes with Java, Ruby, C# and those kind of languages. Python also has tons of excellent libraries for a wide variety of specific domain areas, like Machine Learning (perhaps most famously at the moment), but also many others.
If you want specifically web-focused, Javascript or Typescript are the obvious places to go. Nothing is more web-focused than those two.
That sounds logical, and I'm no expert on either Go or systems programming, but I've often heard of Go being referred to as a systems programming language, and on their own blog[0] they list it in third place at 37% as popular use of the language. Well after web development, admittedly, so apparently it is more a web development language than a systems programming language. I was clearly wrong on that part.
Mesa/Cedar, Modula-2+, Modula-3, Oberon, Oberon-2, Active Oberon, Component Pascal, D, Swift, Sing#, System C#, Common Lisp, Interlisp-D, StarLisp, Real Time Java beg to differ.
I think Microsoft or Apple for example, made their own programming languages in much more ethical way than Google's sceptical which is: "ohh we are going to make our own language and get the help from the open source community." Everyone knows that companies are "exploiting" open source today which is really sad.
> Everyone knows that companies are "exploiting" open source today which is really sad.
Often enough it seems like companies go out of their way to deal with open source just for the "cred" and hiring opportunities that it brings. The open source itself is a just a drag on their internal team, which has to deal with tickets and contributions they don't want or need.
They don't need what? Free testing and debugging from thousands of people running their software on completely different hardware and OS? You may see useless tickets but companies are paying millions for QA and testing.
My observation has been that the free testing they get is often for cases they never hit in their production use cases.
For example, FB doesn’t use DraftJS on mobile, so it took a whole year long effort and a person who used to work for FB to fix mobile support. They just didn’t need it, didn’t have resources allocated to dealing wit it, and were understandably unresponsive on the subject. Now maybe they can use it in the future on mobile if they want, but they would have just fixed it if that need had ever arisen for them organically.
There is a significant difference: Linus is a real person with an actual personality which you can get to know and reasonably choose to trust. “Google”, as an entity, not so much. Even we grant Google personhood, they’ve shown themselves, shall we say, less than dependable in regards to long-term support of their offerings.
Can the same sentiment also be shared for Swift/Apple? Or is Swift organized in a way that doesn't have these issues?
Maybe this is just an inherent problem for all company focused languages/frameworks (react, golang, kotlin, etc), and we need a good example of how to make it work for everyone
I wonder if the same thing that happened to Android OS, will happen to Golang. Starts off as opensource and free and slowly tie the users down to Google as they did with App Services, etc. "Look but don't touch" kind of opensource?
So proprietary is bad? I have no issues with things being owned by a single entity, especially when they're financially backed by that (massive) entity.
I even think that for language and base class library design a strong ownership is key to success. They make the language consistent and the library easy to learn.
Exactly. I want my language to be crazy good and have an amazing library to tap into. In fact, it was Go's great standard library (JSON parsing, hardened prod' ready HTTPS server, SSH, crypto, etc) that formed a large part of why I love the language.
And yeah, I'd rather go with .NET/C# over PHP any day (although I understand PHP is getting better and better, which is good to hear.)
Well PHP and .NET do not play in the same league. But it is about consistent language and library design. Great failure in PHP and great success in .NET/C#.
I developed PHP till 2007, then as a hobby project in 2013. Massive improvement. And I heard very good things about performance the last years. Swoole even can compete with the top tear languages
I like Go a lot, but the fact that Google doesn't use it extensively in their own environment makes me wonder why that is. It almost feels like a Trojan horse.
You want Go to be designed by the Internet equivalent of a cooperative? Isn't that how you get stuff like C++?
For some things Democracy is excellent (governance of nations, because it might be the least bad option). For other things a dictator is good (Amazon seems to be providing a lot of societal value under Chairman Bezos) . For yet other things perhaps a motivated group of technocrats might be better (Go).
I don't suggest I know the answer definitively. I could be wrong. But Go's results are good. Perhaps it would be fair for me to concede that I should examine this question again in 10 more years. What is good for 5-10 years might show problems in 20-50 years time.
After all, even communist North Korea was a relatively functional place for a time.
Some hard decision was made for the better of the language and its users at the end. Thank you for the courage taking the hard decision.
If want to argue, argue that other package manager do better job than modules instead of community did effort and it goes for nothing but experimental.
Strictly yes, but we can't come close to reading all the comments here. If you think we missed something egregious, the likeliest explanation is that we didn't see it. You're welcome to let us know. In general, though, it's like getting a speeding ticket even if some other guy was going faster and didn't get a ticket.
That's not really a problem for computer science students, I think. Their area of focus tends to be on the algebraic, theoretical side of the spectrum. It's much more of a problem for computer (software) engineers.
Be careful. Rob Pike asserted Googlers can't handle an "advanced language" or whatever his exact words were, but he provided no evidence this was true, or that others at Google agreed with him, or that this was even the reason Go was created to begin with.
The most obvious evidence he's wrong is that Google's codebase is all C++ and Java, both languages that have generics and other more advanced features than Go. So apparently Googlers can use such languages just fine. What Pike was claiming was that Google had at some point dropped their hiring bar dramatically and somehow nobody had noticed this or commented on it, despite how controversial it'd have been inside the company.
I rather think that this comment about Go is not reflective of any real strategy or decision inside Google. Go's creators were bored and wanted to make a new language, so they did. The justification for what it looks like was then retrofitted on top.
> The most obvious evidence he's wrong is that Google's codebase is all C++ and Java, both languages that have generics and other more advanced features than Go.
Alternatively: Seeing people continue to make mistakes in those languages that are widely used has informed the position about language features and design aspects that are problematic, and things that would be helpful to have. Proposing a new language to avoid them is an (admittedly fairly extreme) solution to that.
Occam's Razor: Did he really make up a strange claim that reflects negatively on his employer, or was something very much like what he said actually part of his brief?
I believe it was the latter, the twist being that it was forward looking, not reflecting then-current reality. It was likely someone else's prediction that the quality of talent available to Google will go down, going forward.
> "The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."
Back home CS means Informatics Engineering, a degree certified by the Engineering Order to practice, with enough theory and practice spread across 3 - 5 years.
Those that want CS theory without programing have a major in Mathematics with a minor degree in computing.
Only if you define this in such a way that it's tautologically true. If your definition of software engineering incorporates some degree of economic success, then no, these languages don't work well.
I'm sure someone will be tempted to levee this strawman argument: "But lots of languages drive projects that make more money than Go!" Note that I never claimed market success was the sole criteria.
There’s no evidence that those languages are better at engineering at scale than Go is. In particular, all of them are riddled with anti-features e.g., inheritance and all of them have ridiculously convoluted tooling (especially build tooling a la CMake). If I want to get something done, especially in a team setting, I’ll reach for Go over those languages every time.
I look at it as a collection of any guarantees that you can leverage to ensure a (relatively) high degree of correctness, performance, and maintainability. I can't fault Google for valuing these things over agency to play with languages that feel more fun/interesting.
Although true for programming CS, I have to note here that Google hires people who didn't study programming per se. One guy I know who worked at Google did processor design. They hired him for web programming or something about as unrelated as possible. Who knows how many extra-bright, barely-programming grads they pull in. They need them to learn quickly to be productive as closely to day 1 as possible. Hence, Go.
I think Python was main language in that space before Go. It's not as simple or performant as Go, though. So, this was a major improvement even if I think they could've designed an even better language from our perspective.
Um, Yeah, GOOD! This demand for Generics is pure sheep brain garbage. Once you learn to write software properly you find only like 1 or 2 small use cases where Generics might actually help
Down vote me all you want. Been writing production software in Go for nearly 4 years and have never asked or needed Generics. Saying that there isn't a community because Generics are wanted by "some people" is daft.
The whole point of Go is to be small and opinionated and committees can only produce outcomes that are large and inclusive of all opinions.
To determine how open a language really is look at how many widely used implementations of the compiler there are for the language. If there is only a single implementation of the compiler/interpreter than it is not really open but controlled by that core compiler team.
Most languages I can think of have one very dominant implementation and maybe another one or two that few people use. Python, Ruby, Java, C#, Go, Rust, Haskell... C/C++ are the only exception since Clang became serious competition to GCC and Visual Studio. Even Javascript only really has Chrome and Firefox.
That's "consumerization of IT" for you. Millenials and younger are used to looking at pretty websites of "language ecosystems" and blogs about trivial programming problems to assess languages, rather than independence, maturity, and long-term viability as they used to before, and as demonstrated by having language specs and multiple interworking implementations, pluralism of APIs, etc.
I don't think the number of implementations correlates strongly with long-term viability. As long as there is at least one FOSS implementation available the language won't die as long as it remains useful.
I don't think that is necessarily true. Rust has only one compiler (and I think adding a second one would be a huge effort with no clear win) yet it is developed by a community that is very welcoming to newcomers. At least, that's been my experience.
It is definitely true. There's virtually no room for improving the underlying compiler toolchain because there's no allowance or support for an alternate implementation (using GCC, for example).
Go at least has gcc-go, which is a compliant implementation that gets some benefits from being part of GCC (like not completely broken support for dynamic link libraries!).
Having a second compiler implementation forces the language grammar and behavior to be specified in a formal manner that allows anyone to understand how it works.
The toolchain is built on top of rustup which very purpose is managing multiple compilers. If one were to build a different rustc it could be swapped in even in the world we have today.
I haven't thought consciously about that, but it turns out that I've been doing that kind of validation unconsciously. I didn't get interested in actally using D, for example, until the GDC variant arrived (Dlang-support in the GCC collection).
Gcc compiles Rust fine. Its borrow checker won't bark at you, but that doesn't affect code generation.
OCaml and Haskell are one-horse towns, with substantial history, that are not going away.
But as Stroustrup says, there are languages people complain about, and languages no one uses. You generally end up better off investing your time in mastering a language people complain about. Bitterly. Once that meant FORTRAN.
The Go language is defined by a spec with compatibility guarantees, but beyond embedded systems and WASM, there don't seem to be a lot of use cases for alternate implementations.
I haven't looked into compiling Go to JVM bytecode, but a solid way to do that would resolve an ongoing tug-of-war at work, and avoid a lot of résumé-driven development of stuff maven central already had.
You mean for example various versions of java which are incompatible?! Or java programs said to be mtiplatform loading .so libraries of fixed Ubuntu versions? Well, that's exactly what I hate.
Seeing Huawei got recently blacklisted for all US made chips, parts and some Android services from Google, an interesting question to ask is whether the US government will one day force some foreign companies (e.g. Huawei, ZTE and DJI for example) to stop using programming languages invented & implemented by US companies.
The license can change any time they want. They can change the license to prevent certain companies from using future releases and/or security patches.
Let's don't forget the fact that Huawei had legally binding contracts with all those US suppliers.
> The license can change any time they want. They can change the license to prevent certain companies from using future releases and/or security patches.
Until then, it's a bsd inspired license. And you get code from Cox, Griesmer, Taylor, Pike, Hudson and Clements under those terms. It's a big team on several aspects.
So you can remove your tinfoil hat and check the code for yourself. It is valuable.
Where comes the expectation from, that someone else fixes your problems for free?
If you clone Go now, you get a robust codebase for free, and you could do any maintenance on your own (as a reasonably large organization)
What a strange conclusion. Not likely, but much more likely than that, is that they'll force foreign companies to use Go and other products controlled by US corporations.
This is an important point that people need to keep in mind. From a psychological perspective it is also a little bit evil in that it "feels" like a community and people participate like its a community, but the reality is that it isn't a community at all in the communal action sense.
Essentially Google gets "free" developer time as people work on problems and pitch in possible solutions. Google can influence what is worked on by whining about things, and they are free to take or discard the offerings.
This isn't particularly surprising to me as it feels very similar to the way I felt when working there. Which is that people in engineering worked on projects that they were passionate about, but whether or not those projects got support, or were shipped, or released into products. That stuff was all done "elsewhere" by some group of people who were generally known, but not really part of the day to day. I tended to think cynically of them as 'class A' shareholders, or 'class B' shareholders[1].
I don't think there is anything wrong with managing a program this way, I do however think they go out of their way to create a community illusion to foster more participation. That tells me that if they were up front about things, they feel people might not be as eager to participate. To the extent that they are deceptive in their communications, that I would consider wrong.
[1] Class B shareholders get to vote on shareholder issues, but there is always more voting power in the class A shares so that the class A folks can veto or reject any notion they dislike, no matter how popular with the class B folks.
Actually there are relatively few real (TM) open source projects driven by the community, at least if you look at important projects. Many open source projects are just commercial projects driven mainly by a single company. Look for example at Redis, MongoDB, MySQL, and Elasticsearch. They follow exactly the model described in the article. Technologies like these could have been developed by a community, too, but it is hard to form such a community and keep it alive.
For a community-driven project from the size of a database some serious sponsors would be needed. Good examples are Rust, Linux, and PostgreSQL. I wonder why so many companies are happily paying Oracle (and the likes) tons of money instead of sponsoring an open source project like PostgreSQL.
I disagree about Redis. Redis Labs did a great job at creating an ecosystem of advanced things around Redis. But the Redis core project itself, that is what most people use, is surely sponsored for a big part by Redis Labs, but is executed as a completely OSS community driven project:
1. All the work is done in public, with a license that gives zero protection to the original authors.
2. The project leder (myself) only does OSS work and has no other roles in the sponsoring company.
3. The roadmap is decided by the community (myself with feedbacks from the community), not a company or some product manager or alike.
4. There are multiple people from multiple companies contributing regularly code to Redis: Redis Labs, Alibaba, AWS, ...
5. The main web site is handled by the community.
In the case of Redis this was possible because of the minimality of the project, otherwise I agree that's a huge challenge. But still IMHO Redis deserves to be listed in such "purely community" OSS projects.
There is a slight difference in that all those tools are the basis for their companies' commercial offerings. Google does not sell Go tools or consultancy hours etc. Its interest in Go is to have a programming language that's safe, fast, operationally undemanding and fits the mental model of a recent CS graduate (as per that notorious Rob Pike quote).
This at least makes it easy to work out if incentives are aligned. Do I want to program in such a language? Yes? Then Google is probably not going to completely screw it up, even when they make decisions I disagree with. Do I like to wax rhapsodic about parser combinators? Not ever going to be a good fit.
Actually Mongo doesn't even look at pull requests. So, yes.
Postgres (do we really still need the "QL" reminder?) has demonstrated staying power, and repays invested time in spades. Linux, obvs. Rust, it's still too early to be sure about. Learning it will be at least educational, maybe formative, and at worst it won't be taken away.
Projects like Linux are "community" projects in the sense that there is not a single company backing it, sure. However, for Linux, only 7.7% of contributions were made by unpaid developers as of this 2016 report[0].
I have a hard time calling this a "community" project when the community is paid by corporations. Don't get me wrong, I'm not really complaining, as a project the size of a commercially viable operating system kernel is hardly a project that could be viable without corporate backing IMHO.
[0] https://thenewstack.io/contributes-linux-kernel/
Don't forget that most of the kernel's codebase are drivers and some of the most impactful additions were made by employees of corporations (on top of my mind - cgroups and namespaces by googlers, and not only as a base for containerization).
Been learning Rust for the past few weeks.. absolutely love most of it. It's the first low-level language that actually feels right to me. Of course, I'm one of those weirdos who actually loves JavaScript. I also appreciate Go and C#. Rust just feels like most of the right trade offs, and once Futures and Async/Await stabilize, it'll be IMHO feature complete.
As an aside: The community needs an abstraction for WebAssembly's File System that has something like fcntl/flock functionality. As it stand's the Node interface for emscripten targets isn't good enough. Would love to see a bit more collaboration on this regard for sync and async interfaces for FS I/O that supports file/record locking in a more abstract runtime. Though this has been a pretty big shortcoming with Node since early on imho.
If we look at the Top 10 programming languages, I think only Ruby is community driven that has very little cooperate backing. All the other languages are driven by one (Apple;Swift), or multiple, (Javascript, Java) cooperate sponsors.
I think the balance is hard, you will need lots of resources from Documentation, VM expertise, Library, etc.
The top 10 according to Red Monk[1] are JavaScript, Java, Python, PHP, C#, C++, CSS, Ruby, C, Objective-C, Swift, TypeScript, Scala, Shell, Go, R, PowerShell, Perl, Haskell, Kotlin.
Of these, Javascript (w3), Python (Python Foundation), CSS (w3), C++ (ISO), Ruby (community), C (ISO), Shell (Posix), R (community), Perl (community), and Haskell (community) are not bound to a single company.
[1] https://redmonk.com/sogrady/2019/03/20/language-rankings-1-1...
The JS standard is not worked on my w3, but by Ecma international https://en.wikipedia.org/wiki/ECMAScript
That is correct. Thanks.
Kotlin is also no longer in the hands of a single company. https://kotlinfoundation.org/
Most people there are JetBrains and Google employees.
That’s amazing! When did this happen?
I should have included in my first post. I think rather than splitting between Community or Cooperate, a more accurate representation would be how much Market Value are depending on those Languages. If you have a company dependent on OCaml, let say it is on 30th on the list, but the company is making billions in Net profits, you could bet the company will fund the development of Ocaml, even if it was driven by a Foundation, or Community.
I agree that a clever person such as yourself can construct a metric such that Ruby is the only language that comes out.
> The top 10 according to Red Monk[1] are JavaScript, Java, Python, PHP, C#, C++, CSS, Ruby, C, Objective-C, Swift, TypeScript, Scala, Shell, Go, R, PowerShell, Perl, Haskell, Kotlin.
Top 20?
Oei, call me dany coz I cant count.
Companies pay Oracle because 1) PostgreSQL doesn't take you to lunch and 2) Oracle "just works." I mean, we all know #2 is effectively a lie but having the option to go to Oracle directly for issues (even if the response is just "pay a contractor") immensely reduces risk for executive leaders.
You can hire a Postgresql contractor for admin, bugfixes and features, and you can pay Postgresql corporate for bugfixes and features
Last time I was in a position to try, not so much. I emailed about 15 such contractors at least close to the SW US (I'm in Phoenix), and didn't get a single response. This was 5-6 years ago.
Are they paying Oracle though? A quick Google found this:
https://seekingalpha.com/article/4229086-oracle-growth-dead
If Oracle revenue is no longer growing, maybe new development leans more towards Postgres and other open source offerings?
> Revenue US$39.83 billion (2018)
Even not growing, they are indeed paying Oracle.
Isn't that mainly for integrations though? I would think straight Oracle RDBMS licensing would be on life support without some big ERP connector etc.
I'm curious what the distinction is between Rust (driven by Mozilla) and Go (driven by Google).
By my count, fewer than half of Rust's core team [1] are Mozilla employees, whereas the number is 100% for Go. Rust also has a very clear and open governance model [2], where the Go team is more closer to "we'll do what we want".
[1] https://www.rust-lang.org/governance/teams/core [2] https://www.rust-lang.org/governance
The governance stuff is interesting; are there any programming languages that are structured in between "community driven" and corporate oligarchy? ie. non-profit with dedicated Finance/HR teams?
Also Rust is observably a better Go than Go.
What an astoundingly ignorant thing to say
It'll be that too, eventually. But not yet!
Rust has largely broken out of Mozilla already. If it can maintain its growth rate (not guaranteed, but so far so good) it will have a future. Check back in five years.
It's a very different language than Golang though, very expressive, generics, no GC, etc.
You say that like those are bad things.
Yes, the borrow checker is a PITA vs. Go, but there are compensations.
These are not bad things if you’re writing system code.
Is there ever a plan for GC?
There are garbage collection libraries in progress: rust-gc[1] and shifgrethor[2]. If you mean language support, then no, not really, though it was discussed in https://github.com/rust-lang/rfcs/issues/415
[1]: https://github.com/Manishearth/rust-gc [2]: https://github.com/withoutboats/shifgrethor
I feel like not having a GC is rust’s biggest advantage over higher level languages. It allows rust to work in environments that only languages like C, C++ can work in. (Embedded, shared libraries, etc)
I more meant "GC for those who want it", as advanced GC with Edens and mark-sweep tenured generations can handle high-allocation and large-heap scenarios that are extremely challenging for simpler MM approaches. I don't know if this really can be done as a library though.
I had a similar question, but rather, what's the difference between Go with Google deciding what goes in, and Python (a year ago) where the BDFL and a couple core devs decide what go in?
Sure you could argue that a company may have different incentives than a BDFL, but in this context, it's not clear that Go would've been more likely to accept the change you're proposing if they weren't being led by Google.
I guess the most important distinction is the “B” part (benevolent). Guido is called that because he is known (since before BDFL is a thing) to listen to other people, and adapt when they disagree strongly with his decisions. Google has never demonstrated the same attitude afaict, and in multiple occasions showed exactly the opposite.
Edit: And to answer the question, no, there’s no philosophical differences. And there’s nothing wrong with that. Python never called itself a community’s language (there are instances core devs said in no ambiguous terms that it is Guido’s project). The problem only arises if a project gives an impression that it is owned by the community when it actually isn’t.
Given that the go core team has committed to adding generics I don't think it's fair to say they don't listen to critique from the community. Is there a more contentious issue in go?!
Not really, that is the hand waving when the issue ever pops up.
Rob Pike already stated publicly that he is against the proposed idea for Go 2.0.
"Rob Pike - Go 2 Draft Specifications" - https://www.youtube.com/watch?v=RIvL2ONhFBI
I wonder if they'd ever consider adding hygienic macros instead of templates/generics. To me, that solves the problem of needing to write the same code for float32+float64 and so on. And it doesn't require so much careful thought about the type system, accidentally creating an awkward metalanguage like happened in C++.
Rust people are very glad that Rust brought high-kinded types to replace the most common uses of hygienic macros. Using macros for everything is a nightmare.
About using them as generics, how do you enforce constraints? Unconstrained generics won't lead you far (or better, will lead you far into JS's Wat territory).
IMO, Rust has a very nice macro system, and this is a good thing (although I'm not familiar with any changes they've made in the last year or two). It's a bad thing that sometimes one needs to resort to the macro system because other features in Rust don't play well together, but that's a long topic with no solution in sight.
> About using them as generics, how do you enforce constraints?
I'm really thinking about the case for numerical algorithms where the exact same code works for different types (say float32 and float64 in Go). It sucks to copy and paste hundreds of lines of code and it sucks to have a separate code generator write your file for you (essentially an external macro processor). Imagine something like the C preprocessor for Go, but without the well known flaws of the C preprocessor:
That parametric polymorphic enough for a lot of use cases, and this could work with data structures too.As for constraints, you can pass function names as arguments too. For instance:
Other proposed features in go, such as the "check" statement for error handling, are probably implementable as a nice macro if you had a good macro language. This means the core language wouldn't have to grow, and features like "check" could be imported from a library. Including features like this in a library means the core language isn't bound by backwards compatibility when a better idea comes along. Old code used the old library, new code uses the new one, and the language stays clean and compatible with both.This is how Borland C++ provided support for generics before they got into the ongoing ANSI/ISO standardisation process.
For their first implementation of BIDS, initially provided with Borland C++ 2.0 for MS-DOS.
Finding out the release date is left as exercise for the reader.
I see you're reinventing C++
I guess you missed the part above where I mentioned C++ templates as a cautionary tale about accidentally creating an awkward metalanguage. Besides, Go and Rust are both clearly responses to C++, so it makes sense they would try to provide (reinvent) similar capabilities while avoiding the flaws.
Rust does not have higher kinded types.
> Rob Pike already stated publicly that he is against the proposed idea for Go 2.0.
Rob Pike is not a member of the Go team anymore, and he has not been for several years.
That's sad. Was it because of disagreements?
> That's sad. Was it because of disagreements?
No, he moved to work full-time on Upspin: https://github.com/upspin/upspin
Wow. That makes me happy. The last thing we need in Golang are generics.
Wasn't Linus the original BDFL?
Seems to have been Guido.
https://en.wikipedia.org/wiki/Benevolent_dictator_for_life
> The phrase originated in 1995 with reference to Guido van Rossum, creator of the Python programming language.
I think the B stood for something else in his case...
There are some who would argue it's a "soft B" - his recent conf outburst isn't the only example of him failing to listen and adapt.
Guido always took the role of gatekeeper and arbiter (I'll consult with the community and decide what submissions go in) rather than a true dictator (I'll decide what goes in... submissions? What submissions?).
Prometheus is a (rare) recent example of a significant, thriving open-source project that is community-based. Not quite as broad in scope as something like Elasticsearch though.
Depending on what "community-driven" means, I'd like to add Python, Blender, Jupyter, Django ...
I'd say "thriving community-based open source projects" are rare in the sense that most open source projects don't thrive (especially if you count every open-licensed repository on github), but there are tons of examples.
But it started out as a Soundcloud project...
Apache and Linux have always been the two big examples of community-driven open source projects. And Apache still hosts many community-driven open source projects, but they don't seem to be as visible as they were 10-15 years ago.
Some Apache projects, sure. But Spark is very much a Databricks project. Hadoop used to be fairly community as there were hands in it from Cloudera, Hortonworks, Microsoft, and a few others. But with the merger of Cloudera and Hortonworks, the expectation is that it will become guided by that organization.
For community driven programming languages I would say PHP, large user base, mature & very active.
I'd use this as an argument that if PHP is the gold standard for a community-driven programming language, I'd rather every popular language be backed by a large corp haha.
Actually, even ignoring PHP, I'm vaguely convinced it's generally better for a language to be backed by a company. I personally feel more secure knowing that there are people whose full-time job is to take care of the language, and I trust community backlash to deal with any errant decisions. I can't imagine Google (or Microsoft, or Apple, or Facebook) making or blocking a change in a way that kills an entire programming language while they sit idly by ignoring the community response.
Purists will agree with you. Pragmatists perhaps not so much.
I'm sad to see this opinion every time somebody mentions PHP. PHP may have its flaws. But how can anyone deny the instrumental role PHP has played in building the web as we know it today?
The open web, open source, open standards, agile were supposed to be the free market answer to all the flaws of the big old corporations. And for a large part they have succeeded in creating the new and better world we enjoy today. But it looks like we're experiencing some regression. The new big corporations of the web are becoming more like the big corporations of the past. Corporate structures, HR departments, Shareholder meetings, Management layers, system thinking are again replacing the success of uncertain organic growth models that involve chaos, diversity, agility, attitude and pragmatism. To see media outlets advertising social media handles as opposed to their web addresses. Purists are on the rise. Perhaps because everyone is seeking those comfortable high paying jobs at the big corporations?
PHP has fought and won its battles. Outfoxing many big corporate opponents along the way. It should get the respect it deserves. And if you ever find yourself at the front-lines of a new battle. PHP might still very well be your most effective weapon of choice. A worthy consideration at least.
> Purists will agree with you. Pragmatists perhaps not so much.
Golang is one of the most pragmatic languages there is.
Go is pragmatic about language design, but purist about managing community whiners.
Is modern PHP really that bad? I kind of think it makes a lot of sense in a world where almost everything, even major government IT systems, are basically java/.net on the backend and some JavaScript MVVM framework on the client.
I know a guy who works with laravel, and we’ve teased him so much about PHP over the years, because PHP. The truth is, that his backend is actually more suited for the modern world than what we currently run. .Net core is getting to where it’s competitive, but it’s still a bitch to build something like a mixed asp mvc and Vue components app, which is truly effective/productive alternative to MVVM clients or jquery/Ajax for smaller projects.
The thing I "envy" about PHP guys is that they aren't on proglang forums; they are somewhere happier making money and clients happy, and not giving a f* about the things our niche here does...
I started doing PHP dev during the 5.2, 5.3 times and since 7, it feels much better to program in. It even has RoR-like frameworks like Laravel. So, use Symfony or Laravel if you need to use PHP.
It has some historical baggage that some people can't get over. Nothing, no programming language will be perfect. If I had to start all over, I'd probably use something else but it works great for what I'm doing. (web dev with Laravel)
Visual Basic was basically(!) killed by Microsoft albeit large protest, so no, putting your trust in one company does not always work.
PHP has been developed very well the past few years as community project, there is an open RFC process with a codified voting process in place. Sure there have been some drama within the community, but that has not affected either the language nor the implementation.
What makes PHP a great fit for a community driven project is that PHP is a pragmatic language in its core. If the general goal of a project is to design a perfect & consistent language then a community driven process is maybe not the best approach.
It still lives on fine, turn on the developer tab in Excel and there is VBA 7.1 (which is basically still VB6 with a different forms engine)
Let's hope we get some form of generics because in my recent use of it, I can see the usefulness of type information for parameters and return types but not the complete inability to say that this function will return an array of a certain type; I can only say that it'll return an array (which might have anything and everything in it), which is useless.
Visual Basic lives on as VB.NET and to this day VB 6 runtime keeps being updated to run on the latest versions of Windows.
VB is de facto dead, declared legacy in 2008 & latest major release was in 1998. VB.NET is a new incompatible language, everyone invested in VB needed to retrain skills & rewrite existing software.
It was a huge uproar in the community, but MS didn't care enough & this goes against TS idea that the community can stop a major company doing this sort of thing. It can happen & it will happen again.
However that does not mean that you shouldn't invest resources in a corporate language, it just a false argument to think it can't die.
Edit: changed disappear to die
Yet on a past life I managed to migrate code without any major issues, other than components that weren't available on .NET.
Just like I said then
"VB.NET is a new incompatible language, everyone invested in VB needed to retrain skills & rewrite existing software."
> without any major issues
Visual Basic and VB.NET are two very different beasts, only sharing a name.
The VB6 runtime is indeed updated to run, but really how much update does it need beyond sticking the DLLs around? The VB4 and VB5 runtimes run perfectly fine on Win10 and those had no updates for it. It all relies on Win10's overall backwards compatibility.
They are still pretty similar, Microsoft re-introduced quite a few features back.
They are similar only on a superficial level, at their core they have not only vast differences in terms of how they work but they also have a different philosophy. It is not a matter of what you can do in terms of features, but also how you do it and how that affects the IDE, which is a core element of VB6 as opposed to VB.NET where you could be using whatever text editor or IDE you want (it isn't a coincidence that VB6 has its own IDE whereas VB.NET is using the same IDE as C# and C++).
VB6 isn't just the language, if anything the language is a minor part of it. VB6 is the entire tool, including the language, IDE and library. You cannot separate these, they are written for each other. This is also what pretty much all VB6 alternatives get wrong as they try to reuse elements designed for something else. You cannot do that and get something like VB6 beyond at a superficial level (and this is exactly what Microsoft did with VB.NET).
I used VB 3 - 5, later on did a couple of ports from VB 6 to VB.NET, and I am lost on what you actually mean.
Doesn't surprise me, you are not alone in this as others who have used classic VB do not really see the difference between classic VB and VB.NET and to some extent not even (most of) Microsoft seems to get what makes classic VB special :-P. However i'm not sure how i could describe it better than the last paragraph in my reply above as to me the difference between the two is day and night.
It is kinda similar to not understanding how something like a jury rigged Vim with a GDB, file list manager and ctags parser differs from a real IDE even after having used the latter. Usually i'd say "you need to experience the real thing" but if you do not get it after you do, then i'm at a loss of words.
Having observed the decision-making process for 6 months and thought about PHP's direction, I'd agree.
Either a foundation is needed, which sets a vision and sticks to it, or a company, or a BDFL. But with none of them, you get each language faction pulling in its own direction, bits of each added/retained, and no coherence.
PHP BAD!
Not sure if you're arguing for or against the stake at hand :P
LLVM is also an interesting examples: it was first driven by Apple but now it's driven by many different large companies. I mean it's not completely driven by community but i think it's better than dominated by single authority
Debian is 1000 developers with mostly decentralised decision making with some voting for project-wide issues using Condorcet.
Great non-language examples are Debian and KDE
> I wonder why so many companies are happily paying Oracle [...] instead of sponsoring an open source project like PostgreSQL.
Corporate world is ruled by liability. When you are working on a multi-million dollar project and the database breaks, you want to be able to put as much responsibility on the 3-rd party as much as possible.
Open Source projects come with no guarantee. If someone hacks into a system because of a flaw in the Open Source project you are screwed. If it's a software delivered by Oracle, then Oracle is responsible and needs to pay your million dollar fine.
Of course I'm simplifying, because there's probably still a lot of legal process behind it depending on the country, but it's essentially that.
> * is responsible and needs to pay your million dollar fine.
This is not really true. AFAIK there is no commercial software that has a license clause making the manufacturer liable if you loose your data due to faults in their product.
We lost a couple of TB due to bad OS/firmware in a NAS. The manufacturer did everything they knew trying to fix it, but in the end failed and we lost the data.
Once we lost a crucial virtual machine and backups due to human error. It was not the manufacturer of the virtualization solution that helped us get it back, because they did not have such tools and did not provide such services. Hackers who were reverse engineering the software and publishing their reversed code on the net helped us get the VM image back.
After these two and some other situations I am even stronger proponent for open source, because with open source you have the option to try help yourself if the manufacturer won't or can't help you. I am not just proponent, in fact if I was to be involved in decision making, it would be open-source solution throughout. Of course with support. If possible I'd pay for support services to the original developers.
> then Oracle is responsible and needs to pay your million dollar fine
Has it ever happened though? Genuinely curious.
> I wonder why so many companies are happily paying Oracle (and the likes) tons of money instead of sponsoring an open source project like PostgreSQL.
Because that are plenty of nice enterprise features that PostgreSQL still doesn't cover.
My thought was a bit deeper than that. Actually I wonder why are not more companies sponsoring PostgreSQL to get the features they need instead of paying Oracle?
Could you name some I’m pretty curious?
For my use case it’s a no brainer because only PostgreSQL have a decent GIS extension, last time I checked Oracle was lagging well behind.
- Distributed transactions across cluster nodes
- The IDE experience with PL/SQL, including graphical debugging of stored procedures
- Compiling PL/SQL to native code for better performance
- Running bare metal without an underlying OS
- Oracle RAC
- The fine tuning options on their JDBC and .NET drivers
- A proper C++ driver API
As an "enterprise" Oracle user, I'd give all those up in a heartbeat if Oracle would just support proper indexing on Json values (well CLOBS with a "is json" constraint in Oracle - bleh). Meaning that I wouldn't have to execute ddl to create a function index on each new field path within the json that I want to support. IOW I want a Postgres path ops GIN index which work beautifully.
I don't have the stats to back this up, bit that all seems very niche. I have to imagine you get downvoted because people don't recognize these as real requirements. I needed not a single one of these. Ever.
> I needed not a single one of these. Ever.
Enterprise software tends to have lots of features that the majority of folks will never encounter in their career. The enterprise market space is composed of around a thousand potential customers world wide, all of whom are large enough to have sophisticated and complex internal computing environments. They have varying norms, requirements, workloads, regulatory environments, industry standards and so on and so forth.
Each has a lot of money and rejecting the requirements of one company often means you are effectively rejecting several, or perhaps even an entire sector. So you add something to cover them and before long, your software has the union set of features required solely by enormous companies.
From a non-enterprise view a particular feature may seem like absurd overkill. But someone, somewhere, needs it and there is a long and often impressive story of how it was achieved.
Compiling SQL -> native I'm not sure deserves to be described as niche, since lots of analytical queries can benefit from that. Worth pointing out that Postgres has a JIT compiler [0].
[0] https://www.postgresql.org/docs/current/jit.html
Like automatically refreshing materialized views, by which I mean when a base table is changed, the engine uses the definition of the materialized view to run triggers to update materialized view.
This is related to default implementation details, this behavior can be achieved using trigger/materialized view on PostgreSQL as well.
My entire point is that I shouldn't have to write the triggers myself. PostgreSQL figures out the triggers from my definition of the materialized view.
The simple and sad truth:
-Nobody gets fired for buying Oracle. -It's easy .. explaining an investment in open source .. good luck
"We'll save $500k a year in licensing costs and don't need to re-negotiate a contract to update our architecture" has worked for me.
> Nobody gets fired for buying Oracle.
Nah, even when Oracle was more dominant than they are today, that wasn't entirely true:
https://www.cnet.com/news/california-cancels-oracle-contract...
> Nobody gets fired for buying Oracle
I can think of environments where you'd get fired. Oracle these days screams "legacy", and especially if you're working on low-latency projects like trading systems it would be a huge, expensive mistake.
Aside from JS/Node and maybe C++, I can't think of any languages or platforms that are nearly as community driven. Java is probably the closest.
What about Python? AFAIK it is the best example of big community-driven project.
I hurd about a thing called GNU...
>I wonder why so many companies are happily paying Oracle (and the likes) tons of money instead of sponsoring an open source project like PostgreSQL.
Well does PostgreSQL offer anything like Oracle's CDC?
Edit: Nevermind, I just read up on using a WAL replication slave with triggers.
Many like Go because it is an opinionated language. I'm not sure that a 'community' run language will create something like that because there are too many opinions. Many claim to represent the community, but not the community that doesn't share their opinion. Without clear leaders I fear technical direction and taste will be about politics which seems more uncertain/risky.
I like that there is a tight cohesive group in control over Go and that they are largely the original designers. I might be more interested in alternative government structures and Google having too much control only if those original authors all stepped down.
My thoughts exactly! It's important to have a community and to work with it, but, especially for a programming language, there has to be a clear concept of which features should be implemented and which not - just accepting community contributions for the sake of making the community feel good would be the wrong way. Otherwise you end up with a feature monster like innumerable other programming languages, and that's exactly what Go doesn't want to be.
Go is somewhere in the middle of the spectrum between Lua (dictatorship) and C++ (endless politicking).
My fear is that they end up integrating generics just to please some vocal group of java/c++ programmers.
In my opinion the way they are handling adding Generics to Go is proof of this model working.
They are actually trying to pick the implementation that solves real world issues, not just trying to tick [x] Generics in the Go spec sheet.
Parametric polymorphism is a real world issue. The fact that it wasn't checked off indicates to me that they are of the opinion that it's not as important.
It's ok to have this opinion, I just disagree with it. I do agree that it's important to think it through, I do not agree that the language should be brought to 1.0 without it.
This is the general point though. Community pressure would have produced generics under a different leadership model that wants to check the box.
Whether this is the right decision for this specific issue is a different story. But the fact that they aren't checking boxes for the sake of it is evidence of this style WAI.
WAI?
I would guess Working As Intended.
> Parametric polymorphism is a real world issue.
...what in the world makes you thing that? I love parametric polymorphism, but nothing stops you from writing the most complex computer programs you can imagine without it.
> but nothing stops you from writing the most complex computer programs you can imagine without it.
Sure, you can do that with punch cards as well, but there is a reason we've collectively moved beyond them.
> ...what in the world makes you thing that?
Go literally special-cases generic builtins because they couldn't be arsed to properly implement it but knew nobody would accept completely untyped core collections.
Go does not have generic arrays any more than C does. You cannot e.g. write a generic Go function to reverse an array.
You seem to be conflating type-parameterized collections with generics. You can use generics to implement type-parameterized collections, but it doesn't really make sense to think of type-parameterized collections as a form of generics unless you can actually abstract over the type parameters (which you can't in Go).
> Go does not have generic arrays any more than C does.
Go does have generic collections, and generic functions operating on these collections.
> You cannot e.g. write a generic Go function to reverse an array.
You can if you're part of the core team and implement them as builtins. Go doesn't have userland generics, because users of Go are peons who can't be trusted with sharp implements.
> which you can't in Go
Because the core Go team assumes and asserts users of Go should not and can not be trusted with anything more complex than a wooden cube with rounded edges large enough that it can't fit in their mouths.
Would you please drop the nasty rhetoric and not do programming language flamewars on HN? They lead to harsher, dumber discussion, and ultimately cause a forum to destroy itself. Learning from those past lessons was one of the motivations for starting Hacker News years ago, and it's not like we want to forget them now.
Edit: we've had to ask you half a dozen times in the past not to post aggressively and uncivilly on HN. Would you please fix this? Reviewing the site guidelines should make it clear what we're looking for instead.
https://news.ycombinator.com/newsguidelines.html
According to this usage, C has generic functions because you can, e.g., index into an array of ints. The whole point of generics is that the language makes available some form of abstraction over types. Go does not have this feature. It has some built-in operators that can operate on multiple types. Most languages have this. The arithmetic operators in C, for example, can take operands of many different types. This is not "generics".
>Because the core Go team assumes and asserts users of Go should not and can not be trusted with anything more complex than a wooden cube with rounded edges large enough that it can't fit in their mouths.
This can't really be the right explanation given that generics are now being added to the language. In any case, this kind of mean-spirited speculation about people's motives is borderline trolling, IMO. The Go team have publicly stated their reasons for not (initially) putting generics in the language. Unless you have some inside info suggesting that they're lying, I'd refrain from saying this kind of thing.
> The whole point of generics is that the language makes available some form of abstraction over types. Go does not have this feature.
Go absolutely has this feature. Just not for you as a user of the language.
> It has some built-in operators that can operate on multiple types.
Go has multiple builtin functions which abstract over type parameters e.g. close() can operate on any chan, append() or copy() can operate on any slice. They are not operators (symbolic or keyword).
> Most languages have this. The arithmetic operators in C, for example, can take operands of many different types. This is not "generics".
It's overloading, +(int, int) and +(double, double) are separate functions under the same symbol. It's an orthogonal feature, so much so that there are plenty of languages which do have userland generics and don't have overloading.
> This can't really be the right explanation given that generics are now being added to the language.
The core team has been saying they're considering the feature / issue pretty much since the language was first released. You'll have to excuse me if I don't hold my breath.
>Go absolutely has this feature. Just not for you as a user of the language.
That seems like a roundabout way of saying that it doesn't have the feature.
>It's overloading
You could just as well regard Go's close, append, copy etc. as overloaded.
> The core team has been saying they're considering the feature / issue pretty much since the language was first released. You'll have to excuse me if I don't hold my breath.
They've come out with a specific generics proposal. I'm not sure why you would think that they're not serious about implementing it.
> You can if you're part of the core team and implement them as builtins. Go doesn't have userland generics, because users of Go are peons who can't be trusted with sharp implements.
It looks like you are in golang core team's minds. You appear to be able to judge intent. Impressive quality you have here.
> It looks like you are in golang core team's minds. You appear to be able to judge intent.
Not at all, Rob Pike stated it fairly explicitly if in somewhat milder terms:
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
> users of Go are peons who can't be trusted with sharp implements.
Pike himself is a user.
But that is exactly the point - you would have added the best proposal available to the language vs. waiting till all issues are worked out with generics. Go 1.0 has been released 7 years ago, in that time a ton of happy Go users were able to use Go as it is. All the discussions on generics clearly show, that the Go team is interested adding them, but that all proposals so far had distinct shortcomings.
Personally, I am very happy with what Go offers today, I would rather keep a small and simple language, which allows me to concentrate on doing work, rather than trying to keep up with features added to the language. I am not even sure, I want generics to be added to the language, until they come up with a really great concept which maintains the simplicity of the language.
ML (1973) and CLU (1975) introduced generics to the world, followed by a myriad of approaches to implement them across multiple languages, so plenty of time.
No one claimed, that languages with generics don't exist. But if you follow the discussion about generics in Go, Russ Cox did a thorough discussion of all the proposals on the table and why they would mean giving up some of the core traits of the Go language. As soon anyone suggests an implementation not colliding with the core Go goals, the Go team probably would pick it up quickly.
Or phrased it in another way: with all the years of experience on generics, they still got them wrong when implementing generics for Java, e.g. with the type erasure.
Not at all, he focused on how Java and C++ do it, and later on the Go 2.0 proposal admited that they were a bit closed minded to look for how other language implementations work.
"In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."
https://go.googlesource.com/proposal/+/master/design/go2draf...
This is definitive proof that the lack of generics was a mistake from the beginning.
I'm not sure what GP meant by real world issues, but they do have some specific constraints that they want their implementation of generics to achieve, one of the core team members tried to nail down the details over a period of years but there were issues with the previous proposals before the latest one. So I don't think the issue is parametric polymorphism or not, it's more about things like keeping fast compile times, avoiding boxing value types, etc.
Parametric polymorphism is a tool to solve real world issues with.
It is not unthinkable that there could exist other programmatic tools that would be able to solve the same real world issues.
The one thing that I do like about Go not having generics early on is that it became a language for lower-level work with many good libraries. I wouldn't quite call it a systems language like they like to say but I like its positioning which filled a void.
Now when Go 2 gets generics it will be used for all kinds of applications and frameworks, which is fine but isn't really filling a void. I expect the ergonomics (e.g. wicked fast compiles to single binary) to be better than current app dev langs. I stopped using Go and look foward to trying it again. Even then I don't think it will be one of my favorites and likely used for smaller work. Long-term I want Pony to succeed. Medium term I'll take Kotlin, Elixir, or Dart.
It would objectively have made Golang harder to read. Can we agree on that? People tend to abuse generics.
Uses of https://golang.org/pkg/sync/#Map are objectively harder to read than if they allowed a library to implement a type that conforms to builtin map[K]V. Instead you have to learn method names and cast return values, and builtin maps don't even support those methods.
Let’s agree to disagree. Maybe you’re used to reading generics after years of doing it.
Is there any example of a successful large open source project that's truly lead by the community, and not a handful of people that decide what goes in and what doesn't? What would such a model even look like?
I think the key component of such a model is a clearly communicated method where the community can participate in the project direction, and evidence that the method is being respected. It's been brought up in this thread but Rust's governance/RFC process comes to mind. https://www.rust-lang.org/governance
you can see it already on github - survival of the fittest fork
In theory yes, in practice though, there's a lot of projects where the lead maintainer has vanished or doesn't spend a lot of time on it anymore making the future of the whole project and its forks debatable. Most forks aren't interested in taking the lead. The only exception I can think of was the nodejs fork, IO.js, which ended up being more of a political move and a kick up the arse for the Node team. A similar kick was given to the NPM team when Yarn started making waves.
The Rails/Murb drama a few years ago is another good example I think. A bunch of people decided they didn’t like the directions Rails was going in, created a competing framework, and after proving the concept was sound ended up introducing a lot of the unique features back into what became Rails 3.
Absolutely agree.. look what happened to C++ over the last decade(s).
Not sure - I thought it got significantly better with C++11/14? Admittedly it also became rather more complicated, but the changes were generally for the best I thought?
From my five years of learning and using C++, I still have no clear picture how move semantics and rvalue references work. (I “kinda” get it, but am not confident about it). It seemed more and more convoluted every time I tried to study about it. The complexity created by implicit and explicit copy/move constructors is just insanity for me...
> From my five years of learning and using C++, I still have no clear picture how move semantics and rvalue references work. ... The complexity created by implicit and explicit copy/move constructors is just insanity for me...
This is the best argument for move from C++ to Rust instead. No "move constructors" whatsoever, move semantics are the default and are always performed via a trivial memcpy. There are explicit .clone() operations, like in some other languages, or the move construct can implicitly copy when the type is a POD-equivalent that makes this possible (identified by the Copy "trait"). Very simple, and nothing like the whole C++ move mess.
So what if I have an object that has a pointer to another object? memcpy is not what I want in that instance.
Assuming struct A has a reference to struct B, memcpy is a fine way to move struct A, and struct B cannot be moved while borrowed.
Here's an example I put together demonstrating this: https://play.rust-lang.org/?version=stable&mode=debug&editio...
If you mean intrusive data structures, Rust just doesn't support them. Everyone still manages to write software in Rust just fine. (There's some early support for immovable data with `Pin`, but that's only exposed via async/await at the moment.)
Move constructors are one of the worst parts of C++ and being able to write movable, intrusive data structures is absolutely not worth the cost. If you do need one, Rust shows that it's better to sacrifice movability than introduce move constructors.
This is not uncommon -- there are some things you can do in C++ that you simply cannot do in Rust (either safely, or at all). Usually there's another, better way to achieve the same overall goals.
> Move constructors are one of the worst parts of C++
Move constructors fix a narrow problem, which is, when you have something like a vector append, how do you copy over all the previous elements as a "shallow" copy rather than a deep one?
In C with realloc(), it's just assumed that memcpy works for that. With C++03 and earlier copying the elements could very well end up duplicating everything on the heap for no reason, then discarding the old copy.
How does rust do this? What I am reading from googling is that every assignment is a move?
Yes, every assignment of a non-Copy type is a move. (Copy types are basically primitive ones like ints, so not vectors etc.) The compiler prevents you from using the old variable at that point.
Since Rust always enforces that a move is a memcpy, a vector reallocation is just a realloc() like in C.
Safe rust may not support them, but you can use unsafe to write them. There’s a bunch of implementations of intrusive collections on crates.io.
Also, Pin is stable but async/await is not, so you’re backwards there :)
Rust doesn't move through the pointer, it moves only the pointer, which should be perfectly fine.
Read "Effective Modern C++: 42 Specific Ways to Improve Your Use of C++11 and C++14" by Scott Meyers.
That'll clear it up for you. It's a very good book. rvalue references are a bit muddy but move semantics should be clear.
Do you have Stroustrup's blue book?
If you give Go to the community, the community will add exceptions... yuck.
Go already has panic/recover which where explicitly proposed as an exception like mechanism for go.
The convention in the Go libraries is that even when a package uses panic internally, its external API still presents explicit error return values. -- https://blog.golang.org/defer-panic-and-recover
I belive this means that using this mechanism similarly to try/except in Python is not possible (I write 20 lines in go so this is quite a wild guess)
And generics, ew
It's a dangerous type of articles that deliberately turns community against Go team, based on misunderstanding or plain false accusations.
Go team said many times that generics are technical issue, not a political one. (see [1] by rsc (Russ Cox from Go team))
There are also stories like experience report of Cloudflare outage due to the lack of monotonic clock support in Go that led to introducing one. [2]
The way how Go team handles potentially tectonic changes in language is also exemplary – very well communicated ideas, means to provide feedback, and clear explanation of how process works. [3]
Plus, many people in Go community do not want generics in Go (at least, in a way they're implemented in other languages). Their opinion also matters.
[1] https://news.ycombinator.com/item?id=9622417
[2] https://blog.golang.org/toward-go2
[3] https://go.googlesource.com/proposal/+/master/design/go2draf...
In my opinion the monotonic clock example cuts against this argument - originally the core team was extremely dismissive of the idea, even though it had been shown to cause pain for a lot of people: https://github.com/golang/go/issues/12914#issuecomment-15075...
It's a great example, as it shows the complexity of the discussed topic and attempt to justify any changes to the language/stdlib. In the above-mentioned comment rsc replies to another Googler, not to "community".
So we have
a) Google having problems with lack of monotonic clock in Go
b) Go team reluctant to break API and break promise of compatibility without really serious reason
c) community feedback in form of well written experience report, explaining how serious the issue is for the community
d) immediate reaction and efforts to find a solution (without breaking API)
Even though team was dismissive of the idea, feedback from the community made them to change their minds.
Or, you can view this from another angle, that the golang authors yet again disregarded previous established work in the industry for the sake of avoiding hard work in the language and the compiler. It's no wonder the time package in golang is garbage compared to established offerings in mature languages like Java and C#.
It's no wonder that a new language is lacking the maturity and features of an old language. Of course you can attribute it to 'avoiding the hard work', but that's the same reason you don't live in a crystal castle. A more salient criticism would highlight what they have been spending their time on, rather than pointing out that they haven't spent much time on something.
The logical thing to do is to build on what other languages did, and use established practices. Comparing what Rust did to golang when it comes to generics for example is enough to show the mentality of the golang authors refusing to look at established work.
I disagree, as I remember a lot of talks and articles from Go team members where they discuss in detail established work in other languages – on GC, on language evolution, on generics and so on – and deliberately learn from them to avoid same mistakes.
It's not a matter of agreement or disagreement. It's facts. Another user on here put it well: https://news.ycombinator.com/item?id=19979613
The fact is Go team learns from other implementations and it's clear from talks and articles. Another fact is that you're accusing them in having the "mindset of refusing to look at established work". Those two facts don't get along together, that's why I disagree.
They stuck their heads in the sand refusing to have a package manager for like 8 years and asking people to put dependencies on a vendor folder.
That's not learning from established work.
First, Go had package management – it just has been optimized for monorepos.
Second, since early days of Go, they said that acknowledge their Google monorepo bias, and can't implement package manager that works for everybody without understanding what people outside of Google need. They consciously gave time for community to grow and mature and get enough feedback before implementing proper solution, while actively studying package management solutions from other languages.
I definitely can't call it "refusing to learn from established work". Of course, they could've just copy existing suboptimal solutions without properly giving it a thought – that's what most of the languages do, after all – but that's where Go choses another path.
because implementing package manager well is not a trivial work. I had no problems with vendor (godep), later I used Dep and now modules. There are still some problems with module handling for vscode (gopls) but that will be sorted out over time. If you don't like the way Go is going, what stops you from using Java with maven, gradle or whatever...
But the implementation that rsc came up with was much better than everything proposed by the community (no API change).
Crisis is always a great opportunity as the pressure motivates people to ship, while also to do their best, to really consider any and all options, everything that might work.
Sometimes, of course, this does not work out for the best.
The article barely even talks about generics, though (and takes a favorable position towards the Go team). What are you on about? Did you read the same article I did? https://utcc.utoronto.ca/~cks/space/blog/programming/GoIsGoo...
The author's primary complaint is the way Go modules were handled, namely that one member of the Go core team overrode the entire community.
> barely even talks about generics
Yet uses it as an opening line to build the whole argument on.
He doesn't build the whole argument on it. Generics just happens to be one of the most polarizing, most talked about subjects in the Go community. It's no surprise the author alludes to it. In his own words:
> PS: I like Go and have for a fair while now, and I'm basically okay with how the language has been evolving and how the Go core team has managed it. I certainly think it's a good idea to take things like generics slowly. But at the same time, how things developed around Go modules has left a bad taste in my mouth and I now can't imagine becoming a Go contributor myself, even for small trivial changes (to put it one way, I have no interest in knowing that I'm always going to be a second class citizen). I'll file bug reports, but that's it. The whole situation leaves me with ambiguous feelings, so I usually ignore it completely.
I'm going to risk being labeled an ~incompetent dev~ or whatever but learning golang was seriously a breath of fresh air compared to literally any language I have ever tried to grok before.
Everything felt like it was there on purpose. It always seemed like there was a "proper" way to achieve something. Being told to use this opinionated formatter was like removing a 40kg bag after a bush walk. You never have to worry about if you're writing Go "the right way", because it's extensively documented what that way is.
Generics is an awesome feature, writing C# is my day job so sometimes I miss it, but I have full faith in the designers when they say it will be in the language in a "Go appropriate way". The last thing I personally want to see is Go being handed over to the community to be designed by committee.
I completely agree. When I first started writing Go (coming from Python/C#) I complained an awful lot about what felt like pointless hamstringing of functionality. Go is a simple language, and you don't get many toys. It also feels very verbose at times, and forces you to think a lot about doing things which seem automatic in other languages.
However, as time went on, I noticed a few trends. Firstly, forcing me to think more revealed that what I had thought was automatic was more automagic - Go forces you to take responsibility for what your code does to a far greater extent than other more convenient languages, albeit not to the extent that C/Rust does. Secondly, I noticed that code written in Go tends to do what I expect it to pretty much all the time (with the occasional except of async stuff). Sitting down and writing a program in Go often results in something that actually works the first time it runs, and has far fewer runtime surprises.
As painful as it is to do numerical computation in Go sometimes, I have a very high level of confidence that I can look at a program, deduce with some accuracy its runtime memory usage & footprint, multithread it easily, and reason (successfully) about possible runtime failure modes and behaviour. This is something I find difficult if not borderline impossible to do in Python, especially utilising the standard '10 layers deep' stack of numerical computing libraries.
Interesting, that I saw a similar pattern like "at first I complained, but as time went on I found some benefits" quite a lot.
It can be that your learn a language better, became more comfortable with the way it must be used: say, stopped writing code in Elixir the way your used to write in Python.
But the other thing is that it's in our human nature that we tend to look for something positive in bad situations we exposed to for a long.
Say like, PHP was a fractal of shit, but when you use it for a long you will notice that it will make you more aware of what functions to use and how not to fall into some undocumented craphole, be more responsible, and do not take it for granted that some function would work flawlessly. The obvious benefit from the shitty situation.
EDIT: Grammar
The nice thing about walking minefields is learning how to watch your step.
I like this because it could be said seriously or ironically
"If you're walking on eggshells, don't hop."
-- unknown
That's like the idea of planting a bamboo plant, then jumping over it every day. It's a nice idea, but you won't ever be able to jump over a two story house. Instead, you'll eventually catch your foot and tumble. Likewise with the minefield. The result, is that you'll eventually step on a mine. At best, you can use a simulated minefield as an exercise, then use mine detection equipment and proceed with caution.
Or, you can avoid the minefield entirely.
My enjoyment using any given language always tends to grow as I get more productive with it, even if I have a more general dislike for the language itself.
Making computers do things is fun (usually)! The programming language is (almost) immaterial - depending on the task at hand of course.
Despite myself, I've even found myself enjoying JS in the few times I had no choice to avoid it. shudder
That part about JS is pretty much the story of my career - I started out completely hating JS, was forced to use it enough to get more familiar with it, and eventually eventually started enjoying it to the point that it's now the primary language I write.
(I still hate it occasionally, but there's a lot more joy these days.)
The initial turning point was probably around the first release of jQuery...
[Javascript isn't actually that bad of a language. It really is about 90% of Scheme plus prototype inheritance, which takes some getting used to but ... eh, it's as good as any other option. The problem was that it was one of the battlefields between Microsoft and Netscape and has some really hideous scars in the landscape.]
I did not write this. My fingers did not type it. Nothing to see here. Move along. I'm a hedge.
>Javascript isn't actually that bad of a language
Well, the basic parts are not that great. Plus, it has almost nothing of Scheme (nothing more than e.g. Python has of Lisp), that's just an old wives tale. It just has closures and that's it. Scheme is a Lisp, whereas Javascript isn't [1]. Not sure what Brendan had in mind when he was inspired by Scheme, but the end product is nothing like it.
Coercion rules are crazily bad. No integer type is stupid. Prototypal inheritance nobody much cared for (where nobody = very few). An empty array is not falsy. And several other stupid decisions besides.
The only reason it wasn't that bad, is that it wasn't big enough to be bad. It was just a handful of features plus a tiny (and bad) standard library (some math, some string stuff, etc). Everything else people had to build on top (and usually the did it badly).
[1] https://journal.stuffwithstuff.com/2013/07/18/javascript-isn...
I think JS is very fun to write. It's making sure it does what I want it to do that is irritating.
>Interesting, that I saw a similar pattern like "at first I complained, but as time went on I found some benefits" quite a lot.
Isn't that also a classic example of "boiling the frog" or Stockholm Syndrome?
This. Plus folks are always comparing to their previous experience, which is likely to still be a subjective and also hard to compare at all too. It's like apples (Go) vs oranges (C#) vs broccoli (Python).
+1 for 'fractal of shit' - that is a keeper.
I read the "fractal" part as a hat tip to this rather famous article: https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/
What type of numerical computing do you do? By that, I mean what is the problem domain?
Mostly machine learning for distributed sensor networks (ie: smart meters). Deal with lots of time-series data, state space modelling, some recurrent neural nets (etc). Our shop has a 'golang only' thing going, which means that I end up having to reimplement algorithms in go sometimes from scratch.
nice area, I would be interested to learn more into this type of problems. Recently, I became for interested in power electronics / smart grids / energy, and looking for the ways to get in touch with people working on such problems, learn more and join any company on these domains.
It depends on what you want to do as this industry is HUGE. Do you care about metering and working at the solar panel or wind farm level? Or maybe on the actual energy markets that commit and dispatch all generation in a region? Or maybe the vendors that write the software for those markets? Or you could work for the utilities or IPPs that own the generators...or the state public service commission that control state plans. There is also FERC and NERC. There are companies that sell energy storage systems...the list goes on and on.
I saw on the Adacore website a success story for another company that sells smart metering products, so it is nice to see all the work in this space. There are plenty of major vendors as well with smart meter products and the accompanying software.
What company do you work for by the way if you can say?
I agree, the greatest thing about Go is all the stuff it left out. Which is still annoying sometimes coming from more expressive languages (no exceptions! no generics!) But thanks to the simplicity and the common format standard it is so easy to read and understand. If I want to know about an edge case of a library function, I just dig through the code in my IDE until I have the answer. In most other languages I'll hesitate to do that, either because it's hard to read the code, or hard to access it.
I rarely have to think much about how to write the code itself, just about the actual problem that I'm solving. Once I know where I'm going, there's really only one way to write the code for it. Reviewing and using a co-workers code is also a breath of fresh air.
I think languages fall on a spectrum with regards to both typing and expressiveness, and it's not good to be on either end (e.g. PHP vs Scala or C vs C++) the designers of Go were very disciplined in walking that line and struck a great balance. I'd hate to see that undone by turning it over to a committee which results in a million compromises that turn a language into a Swiss army knife of features. It needs that strong guiding hand and the discipline to say no most of the time. Go has become my favorite language, I just wish I get to use it more in my work.
I rarely have to think much about how to write the code itself, just about the actual problem that I'm solving.
Bingo. This is also what the designers of Smalltalk, Ruby, and Python were trying to achieve. This is the opposite of C++, where I find that I'm thinking about the how all the time. (And at least 25% of the "agile" process time is spent on this activity in explicit reviews.)
> The last thing I personally want to see is Go being handed over to the community to be designed by committee.
I'm thankful for exactly that. Go is developed by Bell Labs people, the same people who bought us C, Unix and Plan 9 (Ken, Pike, RSC, et al). They took the time to think through all their decisions, the impacts of said decisions, along with keeping things as simple as possible. Basically, doing things right the first time and not bolting on features willy-nilly simply because the community wants them.
Reminds me of Joe Armstrong's quote: '''I remember thinking “how simple, no qualms of conscience, no soul-searching, no asking "Is this the right thing to do” …''' source: http://harmful.cat-v.org/software/OO_programming/why_oo_suck...
> Go is developed by Bell Labs people, the same people who bought us C, Unix and Plan 9 (Ken, Pike, RSC, et al).
Exactly and it comes in line with other research languages, namely Newsqueak and Limbo, both relying on channels for concurrency. I hope their other work will also find their way into every day usage though.
Bell Labs — and ETH Zürich, where Robert Griesemer worked as a PhD student under Niklaus Wirth on Oberon, which had a big influence on Go [1].
The earlier languages that Rob Pike et al were involved with, Limbo and Newsqueak, were also heavily influenced by Wirth.
[1] https://talks.golang.org/2015/gophercon-goevolution.slide#15.
My experience is the same. I spent a couple of weeks playing with Go and had such an easy time with it. I can go back into the code I wrote and understand exactly what's going on. I can't say that about most languages I tried. I can even make sense of other people's code in large, complicated Golang codebases despite having little development experience.
This probably scares people whose livelihood depends on managing complexity in other languages. If anyone can get up to speed quick, anyone can potentially make the program that eliminates the need for that complexity.
I don't know that Google had this in mind while developing Golang, but they stand to benefit from commoditizing development. This is fine for someone like me who has zero interest in it as a career but does use a lot of scripts and plugins for creative work. Right now it's $10+ every time I want to do something with music/video/art where free or included stuff doesn't work or doesn't exist. If every DAW, video editing suite, and art/graphic design program had a scripting language as easy to use as Golang, I would never need to pay for add-ons.
The markets would still exist, but they wouldn't be as lucrative. Pricing would go from value to commodity.
> The last thing I personally want to see is Go being handed over to the community to be designed by committee.
There is a point about diversity to be made here. Different design models will each have their strengths and weaknesses, and the design spaces each opens up are not going to be fully explored if one model prevails. So I'm glad there's a language like golang with a coherent centrally-planned vision behind it in existence. It's also good to see more community-driven models get to do their thing. We'll see over time how each develops and what problems they best solve. It does seem to me to be a 'let 100 flowers bloom' type situation.
A bit of a bland centrist view perhaps, but with systems as complex as programming languages and their associated libraries, ecosystems and pragmatics, it's really hard to know what works. Best to experiment.
There is a point about diversity to be made here.
Strings in C++?
> Being told to use this opinionated formatter was like removing a 40kg bag after a bush walk.
This is how I feel about prettier.
> The last thing I personally want to see is Go being handed over to the community to be designed by committee.
I feel this way about many projects. You can’t beat good stewardship/vision, and sometimes it’s corpotate.
> Being told to use this opinionated formatter was like removing a 40kg bag after a bush walk.
I think the gofmt approach is becoming an unofficial standard. In the JS world 'prettier' has taken off and I think most languages now have a community anointed formatter (and new languages are likely to have an official one).
Yes, but they might be customizable. For example Rust has rustfmt but you can customize it. Which is a horrible approach imo.
Agreed. Prettier, BTW, is specifically low customization.
Strongly agree. Go feels great to write. I hesitate when designing a new project in C++ because I need to figure out the right / clever way to implement something. Go feels a lot more straight forward where I can just write it in the one way it's designed to allow. It gets me writing code a lot faster than other languages.
My experience is different. Believe it or not the type checker is actually inferior to Python + mypy. For example it is possible for variable to be nil, even when it is not a reference.
Types, it has int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64 and also float32 and float64.
If you use anything besides int, int64 or float64 you will just have a lot of fun casting.
math.max() and math.min() only work on float64, you can easily use it with different types, and actually it is discouraged to use it with integers. So you need to roll your own if you want to work with integers. But ok, it is math so when you do math apparently you should work on floats because that's what scientists do, but then why it doesn't work with float32? You actually have to cast like
result := float32(math.max(float64(left), float64(right))
If you want to convert a string to an integer, you have nice strconv.ParseInt() where you can specify among other things a bitSize, great, but the resulting type is still int64, you will need to cast it. What about other types, using them is a nightmare, if they were not supposed to be used why not just have int and float type?
If you try to implement a data structure that stores objects, you either have to duplicate the code, use interface {} (looks like that's most common, but then you no longer get help from a type checker) or use generator (this seems best to me, but it is just automation of the first approach).
I don't understand why Go is getting so much good opinions, it is not an enjoyable language to program. The supposed killer feature which were go routines and channels are kind of meh and nothing that you couldn't replicate in other languages. Seems like people like its simplicity, does that mean the other languages are overwhelming?
It's okay, I like elixir's take on things more and I question how suitable Go is for maintainability of large projects. But it's okay. It will improve as more developer tools and language features come online.
It seems to me that you’re going to have a lot more trouble maintaining Elixir code than Go code over the long run.
What makes you say that?
I personally don't see Go and Elixir's primary domains as being equal, or one a superset of the other. So there's some argument to be made about the region where they overlap, but for something inherently based around fault-tolerance and distribution, seems to me that code written to run on the BEAM will be smaller and clearer and therefore more maintainable.
If I'm interested in building low latency and highly available web services it seems to me that both Go and Elixir are reasonable choices. How are they not the same primary domain?
But in any case, Elixir allows a lot more clever code. In my experience working on legacy software, clever code in a dynamic language is error prone and hard to refactor and maintain. Static typing can help, though, I've found this especially true in functional languages where you have a super smart developer do something clever that's hard to understand 5 years later.
What about distributed programming? The BEAM makes that sort of thing super easy.
Maybe Go does have good primitives or libraries for that, and I'm just ignorant of that. :)
> I'm going to risk being labeled an ~incompetent dev~
Why is that?
I thought your comment made a lot of sense.
Because it is standard, in these discussions, for someone to quote Rob Pike to say that golang was designed for the lowest common denominator of developers.
Personally, I like go a lot for writing services and console applications.
A common HN trope is that Go doesn't have enough features (Generics usually) and gets in the way of '10x'/'competent' programmers expressing their genius - unlike Haskell (or some other advanced language for advanced minds)
This is just how I feel about clojure. I enjoy using a language with design tightly controlled by the people closest to its vision.
What are the advantages of having a committee involved in the design of Go? In the case of C++, based on the threads that I read on HN, I see people being unhappy about the decisions taken by its committee.
I appreciate a lot of Golang, but I really wonder how they'll do generics differently than others
It's funny I feel the opposite.
I feel Go's 'opinionated' language design insults my intelligence.
But this is not about Go's merits or disadvantages - this is about who is calling the shots.
And it is not the "community" - that is just decoy.
> learning golang was seriously a breath of fresh air
This is completely irrelevant to the topic. Go, as currently implemented, could be the bee’s knees for the whole world for all I know; it’s not relevant to the discussion. To sidestep like you did and pretend that this is some form of criticism of Go itself is disingenuous hijacking of the discussion thread.
wrt a "proper" way: adding item to a slice, uhmm?
A slice isn't an array. A slice is a view into an array.
You don't look through a window in your house into the backyard, and plant a tree in the backyard by fiddling with the window. It's the same with arrays and slices in Go.
If you want to insert an item into a slice, insert it into the array (by copying to a new array and adding your new element to it while copying), then creating a new slice which includes your addition.
edit: (adding for clarity) In a lot of programming languages, whether they use slices or not, arrays are of a fixed size and must be copied to a new array if you want to add elements. Some languages have some syntax that makes it feel like you are modifying an array in-place, while doing the copy to a new array behind the scenes.
edit-edit: for an implementation example of the above, see Java's ArrayList class: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/tip/src/share/...
I'm by no means a expert, but doesn't Go advocate using append with a slice which will create a new array? https://golang.org/pkg/builtin/#append
> edit: (adding for clarity) In a lot of programming languages, whether they use slices or not, arrays are of a fixed size and must be copied to a new array if you want to add elements. Some languages have some syntax that makes it feel like you are modifying an array in-place, while doing the copy to a new array behind the scenes.
It's that or some magic with larger-than-needed arrays that automatically grow by a bunch extra every time they hit their boundary to make appends faster, while blowing up memory use and making append performance unpredictable.
Lots of (especially) scripting language hide this behind automagic and you see tons of append-in-a-loop where it's not really necessary, as a result.
[EDIT] had insert two places I intended append. Me need coffee.
Yep, you're right, that's the transparently resizeable array thing, and it's exactly how Java's ArrayList class gives the feel of a resizeable array while it actually manages fixed-size backing arrays for you. That's why I linked the source to that class. :)
This is absolutely not best practice. It's perfectly idiomatic to insert an item into a slice (without the copy shenanigans you describe). The slice will manage the copy if necessary.
That's fine as long as you don't mind if the underlying array is modified. As the parent points out, a slice is a view into an array and there could be other views into the same array.
https://play.golang.org/p/goL1JtapY7q
Sometimes you need to care!
https://play.golang.org/p/goL1JtapY7q
I meant that go's append idiom to reallocate array behind the scenes seems to me somewhat complicated and error-prone.
append?
Security consultant here. I have audited many codebases in many languages. Go is by far the easiest language to audit: it always looks the same, it's not too verbose, there are no generics or OOP. Coincidentally it's always the most secure as well. My take on this is that it is easier to see logic problems because it is easier to read, understand and reason about. On top of that the standard library does so much for you with crypto and security.
Generics and zero-cost abstraction can and are concepts that are often abused instead of used when it makes sense. I hope thay Go will never support generics because I sincerly believe it might mean the end of the language
I'm curious, how will they end Go itself? My understanding is that they always planned to have generics, they just didn't make 1.0 and now the Go community has sort of adopted the lack of them like a badge of honor.
Also, I have no expertise in your field so forgive me if this is a stupid question but wouldn't generics be easier to check over since they allow there to be only one implementation of something for all types versus in Go, a bunch of different implementations of the same thing that you have to check and might have subtle errors?
> "My understanding is that they always planned to have generics, they just didn't make 1.0 and now the Go community has sort of adopted the lack of them like a badge of honor."
As soon as Go adopts generics the community will turn on a dime and pretend they always loved generics (and perhaps even invented them.) That's how these sort of things typically play out. See also, copy/paste on iphones. When only android had it, copy/paste was a misfeature for idiots. When iOS implemented it, copy/paste became the best thing since sliced bread.
See also: Java Generics.
Also see javascript and static typing.
I think there is generics proposal under discussion.
There is! As soon as it is implemented in the language, I will consider Go.
> it's not too verbose
that's a really puzzling statement to me, maybe aside from C, C++, and Java, (and maybe rust) Go is certainly on the "more verbose" side.
Pretty sure those are almost all of the most popular statically typed languages (absent C#, which has about the same verbosity as Java). I'm sure I'm wrong if you're willing to lower the threshold for "popular" enough.
I’ll opine that C# is much less verbose than Java in practice - language features like properties, reified generics, functional-like programming with Linq, tuples and its jilt-in support for async APIs mean you can be surprisingly succinct.
Agreed 1000%. Java is backward compared to C# - bolted on half baked features that are the opposite of elegant, a standard library that is built on the principle of 'why use 10 lines when a 1000 will do' and the tools/IDEs are a generation behind C#. As is the runtime speed.
I agree. It is very easy to learn and inspect. Sometimes I just look at function definition and it is immediately clear. This is so far the easiest to read language I've seen. I don't think generics has to be a problem if their use is very limited. I would like to see generics to work like code generators, something like using m4 to generate the similar type, sharing common methods but having a different core type in their implementation. It would be nightmware to have full C++like generics or even preprocessor macros.
Could it be because Go is newer and handles buffer overflows correctly?
Go already supports generics - the hashmap is generic, and the array is generic.
It's just you can't implement your own hashmap in Go, and make it compile time type safe.
> It's just you can't implement your own hashmap in Go, and make it compile time type safe.
Then it doesn't support generics. That is the entire point of the feature.
C also supports generics - just use void * !!
no, Go doesnt' support generics.
void * is the C's equivalent of interface {} it has little to do with generics although it can be used to work around lack of it.
Perhaps there are other ways, but in C you have parametric polymorphism functionality through macros. That's why min() and max() are not functions but macros and they work with any type that can be compared, not just float64.
The article uses "Google" instead of individuals names to make the actions taken seem like sinister actions of a faceless corporation.
My interpretation is that Google employs a tight-knit group of people that work on Go and collectively are the BDFLs of the language. This isn't that much different from most large OSS projects, although it does seem likely that this core team weights the opinions of those that they interact with daily (ie, other Google employees) over people they barely know.
The last two paragraphs of the article address exactly that issue - that it's hard to tell whether the direction of Go development is decided solely by the Go core team or by Google as a corporation.
Which is nonsense, and is equivalent in this case to "i never bothered to ask so i'm just going to assert some stuff that agrees with my viewpoint".
They could have just asked.
In fact I can answer this for you, since I was the relevant director (IE Go directly reported to me)
It was driven by the core team, and more particularly, the leads and what they want to be trying to do.
I have provided precisely 0% of the vision there.
Further, the org/etc they belong to has changed (a very small number of times) over the past 10+ years depending on the Go team goals, not depending on Google's goals.
(IE when their goals have changed, or the org goals changed, Google has put them in a place that aligns with their goals, not tried to align them to the goals of the org they belong to)
So, why is Google paying them? Is Google getting enough value out, or PR, or...?
People are stepping around trying not to offend anyone, but it is no secret that Go was created for Google to solve their needs, it supposed to be a simple language that even a fresh graduate out of school could pick it up quickly and work with it. It is also very opinionated for example regarding the formatting or (at least initially before vgo was introduced due to outside pressure) to work well only in a mono repo scenarios.
Google makes it free to use, so it will be easier to recruit people that already know it. Many companies do that as well, the difference is that still generally no one uses their languages. This one is different, because well it's Google. The Go itself isn't really that great language, but there's a lot of hype behind it. I wonder when it will die out, but I guess it will be a while.
Google is getting tremendous value from Go, on multiple fronts.
For reference, Go was meant as an alternative to Java and C++, to develop distributed systems. Given the direction that Java went, acquisition by Oracle and $10B lawsuit against Google, it is a well worth having an alternative.
Go team is actually incredibly open and approachable if you meet them at conferences. Google (as a company) has little influence on the language design design. Its needs has shaped design (obviously), but it's not like there are requirements outside of the Go team. Go is heavily used in Google, so it's a natural dogfooding process, but that's it.
"If you meet them at conferences" is a huge if of inaccessibility.
Haha, sorry, they're open outside of conferences too :)
I don’t really get this distinction. “Google as a corporation” isn’t a thing, it’s a collection of people, some of whom happen to be the Go core team. It’s likely that due to being close to Go developers at Google there’s a bias towards implementing features that would help those people, but I very much doubt the subject of genetics in Go comes up at board meetings.
Implementing features Google needs sooner, rather than later, is one thing, but at some point, Google-the-profit-driven-corporation's needs will contradict the broader Golang community's needs, at which point the question is who "wins"?
What will, on a long enough timeline, come up in board meetings, especially as Google fails to meet Wall Street analyst's expectations, and as the ad-tech space evolves, is how much of Google-the-corporation's money to continue plowing into broader community things, like Golang at all.
Hopefully, by the time that happens, the community will be strong enough to persist, and I use Golang professionally, so I have personal investment for that to be true, but the possible eventuality that it'll end up being in the situation Java is currently in, makes me nervous.
Google the corporation is represented by upper management, the people by developers. The question is if upper management at Google takes any influence on the development process.
...was this ever in doubt? i've always thought from day 1 that this is "Go as in made by Google". Even the name branding screams that
A useful mental exercise here: if the core team left Google, do you think they’d lose power?
The basic question is "Are they on the core team because they work at Google?"
- If someone new joined Google, would they immediately get added to the core team, with no history of contributions?
- If someone had a long history of contributions, but wasn't hired by Google, could they join the core team?
Those two questions are pretty determinate on whether this is a community project or a Google project.
- Probably not?
- Maybe?
A better question is, "Can you successfully act like you worked at Bell Labs in the 70s and 80s?"
I don't know the real relationship between Google and Go, but Go is very much a product of its current core team, who are (AFAIK) all veterans of Bell Labs (Robert Griesemer?) and all have the same Bell Labs approach.
Bell Labs invented Not Invented Here syndrome. You can know this to be true because there is no way this group of people could have the syndrome so bad any other way. The other side of the coin is that they are very, very good. You can know that because they do a lot of interesting, novel work without obviously (or obviously without) having looked at any other research in that field.
Take for example, Chris's comment, "(The most clear and obvious illustration of this is what happened with Go modules, where one member of Google's Go core team discarded the entire system the outside Go community had been working on in favour of a relatively radically different model. See eg for one version of this history.)"
The second link there is to https://peter.bourgon.org/blog/2018/07/27/a-response-about-d... From that: "[RSC and the dep-pies] discussed dep at the GopherCon contributor summit. Matt Farina says that [Russ Cox] said [he] could do better if [he] went off on [his] own and built something. ... The clear impression was that Russ wasn’t going to engage with the committee, the research [the committee] had prepared for him, or the prototype product of that research, dep — at least, not yet. The clear impression was that Russ had his own hypothesis about what a solution would look like, and that he was interested in validating those hypotheses, by himself. ... Russ decided to implement his ideas on his own, and make a proposal without us, and without telling us that’s what he was doing until it was essentially done."
This is what I'm talking about. If your ideas suitably mesh with their philosophy, they may be adopted. If they do not, the Bell Labs team will ignore them completely. And if they think the problem is a problem (and they don't, in many, many cases), they are quite capable of doing an end-run around you and producing a solution which satisfies their perception of the problem.
Go may or may not be Google's language. Go is the Go-lang team's language and you will go where they want you to go, to adapt MS's old slogan. To an extent, it's similar to Perl; one's success as a Perl programmer depends entirely on your ability to hold your mouth right and successfully simulate Larry Wall. Perl is not a DWIM language, it's a do-what-Larry-would-mean-if-he-wrote-what-you-wrote language.
[^1] I myself do not endorse the technical ideas in that post. "[Something] does not support using multiple major versions of a program in a single build. This alone is a complete showstopper." is true. The fact that most tools don't do it---and have to live with the resulting pain---simply means that it's hard, not that it's not necessary.
[^2] A previous discussion of Go modules here: https://news.ycombinator.com/item?id=17534923
For what it's worth I'm a semi-grey beard (20 years in) and I love golang. For me it was like going back to being 8 years old on my Commodore Plus/4 and really enjoying writing code again.
It needs close parenting. Java has been ruined by the push to include everyone's pet feature.
>everyone's pet feature
Isn't that C#? Java is very slow at adding new features, Java has only things that were proved to work in other languages.
> Java has only things that were proved to work in other languages.
But they still somehow keep finding ways to make them not work so well when implemented in Java.
C# may move faster, but its design team is also much more methodical about ensuring that new features have good ergonomics. In Java, I tend to feel surrounded by hacks that were hastily slapped on in an effort to keep up with C# and, increasingly, Kotlin.
Idk, have you seen the interfaces with default implementations in latest C#? Also duck typing? Both are mistakes IMO. First missteps I feel like I've seen C# make.
If by "duck typing" you mean dynamic, then I don't know what you're complaining about. It has a very niche set of use cases where it is needed. If people are abusing it then it's on them. There is no good or even alluring reason to use dynamic outside of it's intended purpose, so I don't feel like it's one of those "shiny hut dangerous" features you see in some other languages.
Fair point. I switched from C# to Java several years back, so I'm at least somewhat working from nostalgia for a certain point in time.
I look at the feature list in the latest iteration of the language, and my thought is, "Y'know, you really should stop when you're done."
What duck typing? Are you talking about the `dynamic` keyword?
Why don't you like default implementations?
Hmm, I do a lot of C# programming, including very language-y low-level stuff, and I'm not sure I completely agree.
I appreciate that by moving faster they get more stuff into more hands faster, but they definitely have a lot of hackish solutions with poor ergnomics outside of the narrow scope they were originally intended for.
If you will: the language features have a clear purpose but a general implementation; and outside of the narrow purpose the designs usually feel pretty poor.
E.g.:
- LINQ/expression trees don't support most of the C# language, and new language features are usually without equivalent expression tree. This isn't a full lisp or F# style quotation, but a pretty narrow window that's not easy to use outside of linq-to-sql style usages.
- LINQ trees are again intrinsically inefficient, since the expression trees compile not to a statically shared expression, but to a bunch of constructors (i.e. looping over even a medium sized expression is bound to be slow); and they're not equatable, so it takes a lot of effort for a consumer to detect this case leading to overly complex (and hard to reproduce correctly) hacks inside stuff like EF.
- LINQ is restricted, but the restrictions are fixed, not customizable. That makes it a poor fit for DSLs, including stuff like Entity Framework, because there are usually lots of expressions your DSL can't support, but there's no way of communicating that to the user. Also, if you use expressions as DSL, you need to follow C# semantics, which isn't trivial; witness gotchas in ORMs surround dealing with null and equality.
- lambdas are either delegates or expressions; not both, and this isn't resolved via the normal type system, but by special compiler rules, making it hard to do both, and leading to type inference issues such as that var f = (int a) => a + 1; cannot compile.
- Roslyn: very poorly documented, and ironically very dynamically typed to the point that many casts or type-switches are necessary but finding out what types there are and what they do is generally a matter of trial and error since the docs aren't great. Ergonomics are poor in other ways too; e.g. dotnet is xplat, but the build-api is not - i.e. it's clearly not dogfooded. Also: totally not integrated with expression trees, which is at least mildly surprising.
- string interpolations are unfortunately quite restrictive (compare with e.g. javascript, where this was implememented much better), and intrinsically and unnecessarily inefficient (at least 2 extra heap allocations, and usually lots of boxing, and the parsing the compiler necessarily must do is not exposed in any kind of object tree, but instead reserialized to string.Format compatible syntax necessitating re-parsing at run-time). Also, like expression trees, this was really hacked into the language, so, e.g. you can't participate in other normal C# features like overload resolution the way you might expect, extension methods plain don't work, culture-sensitivity can be a gotcha: basically this works for immediately evaluated expression, but is tricky elsewhere.
- razor (not strictly C#) is hugely complex, and has a very impractical underlying model. Compared with e.g. JSX which is trivial is (ab)use creatively, and which uses mostly language-native constructs for control flow, razor makes it impossible to use even basic features like methods to extract bits of common code; lots of basic programming features are reimplemented differently. Instead of passing a lambda or whatever, you have to deal with vaguely equivalent yet needlessly different stuff like partials + tag helpers.
- optional parameters are kind of a mess (no way to enforce named args, no way to cleanly wrap optionals, restriction on compile-time constant, interaction with overloads can be suprising); tuples are too (names are dealt with differently than everything else in the language, no syntax for empty or 1-elem tuples, no way to interpret arg lists as tuples or vice-versa, no checks on nasty naming errors like swapping order); equality is a mess (how many kinds are there again?), lots of apis are disposable but should not be disposed but for others it's critical, no good way to compose disposables, huge ever expanding api without practical deprecation path is a pitfall for newbs, no partial type inference for generics, no unification of all the various func-and-action variations means billions of pointless overloads (and sometimes per-API ways around it); tuples and anonymous objects are sort of redundant, but not entirely; no good way of implementing equality/hashcode/comparability and yet easy way to detect misused non-equatable types.
I mean, I respect their choices here, and there's a tradeoff with lots of benefit too: they're really quite fast-moving, and I want those new features ;-). But it's not without costs; they definitely aren't "much more methodical" or anything like that.
Uprated because you gave lots of specific examples. Too many conversations get vague fast.
Thank - I hope I don't come across as too bitter - I really do think there's an upside to all those limitations. I'm just past the exuberance of thinking that because it's so actively developed, that all these flaws are eventually fixable. It's a fast lifecycle, and probably at some point it'll be too impractical to continue as is, and then we'll just jump ship to some slimmed down alternate with a good transition story - and that's fine. So far: so good.
C# has made some serious mistakes: reified generics (which has basically destroyed simple language interop on the CLR and makes it an unattractive target for language implementors), and recently, async/await. Both of these help in some ways, but have costs that are higher than the benefit and much better alternatives.
Java is not trying to "keep up." It is intentionally slow-moving and conservative (this design goal was set by James Gosling when Java was first created), and only adds features once they have been proven in other languages for a while.
As former Java dev, returned into .NET world, I don't consider it a mistake, the CLR was designed with multiple languages in mind, and there are plenty of options available, even if Scala devs failed at that.
On the other hand, what I consider a major mistake from Java side was ignoring value types and AOT compilation since it's inception.
Had Sun blessed such features since the beginning, and many use cases for C and C++ wouldn't be necessary.
Value types add complexity and they weren't necessary in 1995. They only became necessary due to hardware changes circa 2005. Similarly, AOT compilation has only become attractive for the kinds of applications people use Java for only recently, when startup time became important for serverless. The lack of neither has caused Java lasting damage; what has is the domination of the browser on the client, but that has affected all languages.
As to baking variance into the runtime, I think this is just a bad idea, which is so far used only in C++ and .NET, two languages/runtimes with notoriously bad interop (it's not just Scala; Python and Clojure have a similarly bad time on the CLR, as would any language not specifically built for .NET's variance model). It is simply impossible to share a lot of library code and data across languages with different variance models once a particular one is baked into the platform. This is too high a price for a minor convenience.
Specialization for value types (which are invariant), is another matter, and, indeed, it is planned for Java. Perhaps some opt-in reification for variant types has its place, but not across the board. I am not aware of other platforms that followed in .NET's misplaced footsteps in that regard. Those that are known for good interop -- Java, JS and LLVM, don't have reified generics.
What's worse is that it's a mistake that cannot be unmade or resolved at the frontend language level. Even Java's big mistakes (like finalizers, how serialization is implemented, nullability and native monitors) are much more easily fixed.
Value types aren't "necessary", but they would have been valueable at day 1. The GC heap is simply inefficient; not necessarily because of the GC (which indeed is harder with massive multicore), but simply because of the per-object memory overhead.
There's a reason java had built-in value types from day 1, because it made sense even back then.
Frankly, I think both java and C# kind of got this wrong. There was an overreaction against the C/C++ of the day, and whereas the GC turned out brilliant, the idea that it's not even necessary to express the notion of references/pointers/values etc. was too much; and the idea of a single type system root (object) is similarly dubious, and then particularly the idea that that root type isn't the empty type. Object has semantics, and that was a mistake, because it contributes to the bloat. I'm totally happy with ignoring those features 99.9% of the time, but having them completely unavailable makes those 0.1% cases extremely expensive. (I mean, I think those things are slightly changing, but it's slow going).
> because it made sense even back then.
That was necessary for performance back then. User-defined value types weren't, and Java has done well without them.
> Object has semantics, and that was a mistake, because it contributes to the bloat.
I think most of the RAM bloat is due to the GC trading off extra RAM for speed rather than object headers, and I'm not sure trading off complexity for headers was right 25 years ago (JS is doing fine on the client without value types). What changed was the performance characteristics.
As to object semantics, it may be a fixable mistake. The goal is to get value types without today's object semantics while preserving a single class hierarchy at the same time. The Valhalla team thinks that's achievable.
Heap (over)use by the GC is effectively a scaling factor. How large the underlying objects are remains remains relevant: if your objects are twice the size necessary, the GC will "bloat" that further - and this tradeoff isn't entirely GC specific, other allocators such as those used to implement malloc/free have related tradeoffs to make; free() won't release memory to the OS either (and memory, released or not, may end up evicted from RAM anyhow).
Of course it is relevant. I'm just saying it isn't the decisive factor that makes this an absolute necessity, as evidenced by the fact that much the backbone of the largest software services in the world is Java. There are lots and lots of tradeoffs in runtime design, and it's important to look at the whole rather than at one decision in isolation and point out that it's important. As a whole, the criticality of value types for Java is relatively recent.
I advise you to read Mesa/Cedar report on the impact of garbage collection algorithms available at Xerox PARC bitsavers archive, EthOS or SpinOS experience with Modula-3.
All of them refer that having value types alongside GC had a relevant impact improving performance.
All systems designed before Java was a thing.
Or since you refer to JS, the paper about SELF's design.
Even Dylan was designed with AOT/JIT and value types support, which is relevant here given that its domain was being a systems language for the Newton. That politics killed it is another matter.
Oh, I don't deny that value types would have helped performance back in '95, just that they were absolutely essential for Java. Smalltalk/Self and Scheme/CL didn't have them, and those were probably Java's greatest influences; I don't think VB had them, either. Also, in its first four years, before HotSpot was ready, Java was interpreted, so it had bigger performance problems.
I don't know why there was no emphasis on AOT back then. I guess they started with interpreter/JIT, and then there just wasn't much demand for AOT until now.
Microsoft Basics have support for value types since MS-DOS.
QuickBasic supported value types and AOT to native code, and while Visual Basic used P-Code, version 6 introduced a proper AOT native compiler.
Modula-3 was also a big influence, at least accordingly to some papers.
There was surely demand for AOT, given that most commercial JVMs had it in some form or the other since 2000.
Even Sun actually supported it in Java Embedded variant for OEMs, probably grudgingly.
Common Lisp certainly has support for value types.
What user-defined value types did CL have in '95? Also, are you sure about VB having had them then?
As for AOT, there may not have been sufficient demand from Sun/Oracle. I only joined relatively recently, but we generally do expensive things only if we believe they have a huge benefit or in huge demand, and we believe it can be long-lasting. The assumption is that any new feature will require maintenance for 20 years, taking away resources from other things. So if something is expensive, even if it's cool or some people could find it very useful -- we don't do it. The assumption is that the ecosystem is large enough that others can, and will.
Arrays, structs, fixnums, explicit stack allocation.
I can check the respective manuals if you wish.
Yep, I did VB programming for a short while.
And please note that even though my focus is now elsewhere, Java is one of my favourite eco-systems.
As a peasant I just wished that Java 1.0 was more like Go, given the existing alternatives back then.
So it kind of stayed as a pet peeve of mine.
Same applies to .NET, just in a different way.
Well, we can disagree about when AOT and value types became critical for Java (and I would argue that they clearly weren't back then because Java has done spectacularly without them), but Java is getting both soon.
I would say that it had other factors that contributed to its sucess, so it succeeded in spite of lacking those features.
However due to the hardware architecture changes and new kids on the block, it is starting to be an issue.
I keep wishing to see them arrive, have watched all the JVM Language Summit, Devoxx and JavaONE talks about them.
Meanwhile I can already enjoy them elsewhere. :(
> Meanwhile I can already enjoy them elsewhere. :(
That's perfectly fine. We think that our priorities are right for the workloads Java is used for (e.g. people care more about a low-latency GC like ZGC, and deep low-overhead in-production profiling, like JFR, than about AOT).
C# got it right from day one, regarding value types.
AOT compilation not so much, given the NGEN constraints.
However they got it both wrong, considering CLU, Modula-3, Delphi and Eiffel are considered influencial languages on their design.
Mutable structs are not exactly value types, but Microsoft has always preferred control over simplicity (after all, they pushed C++ really hard). I won't say whether that philosophy is right or wrong, but it is very different from Java's.
In 1995 I was enjoying Oberon, Component Pascal, Eiffel and Delphi.
Value types were pretty much obvious as necessary.
More so when one dives deep into how languages like CLU and Mesa/Cedar were designed.
Having a AOT support doesn't preclude having a JIT as well, like Common Lisp or Eiffel already had in 1995.
There's a difference between useful, and even very useful, and absolutely necessary. Clearly value types weren't absolutely necessary, as Java did well without them (and JS still does).
Gosling said that his goal was to have nothing you can somehow live without (I don't know how well early Java lived up that ideal, but that was the ideal). Hardware changes made user-defined value types absolutely necessary for workloads Java wants to target.
Being there since the beginning, I wrote my first Java game in 1996, early Java only did well thanks to Sun's marketing weight and it being free beer vs the alternative of having to pay for a compiler like Delphi.
I was around, too, and I don't think that was at all the full story. Marketing has never been solely responsible for the long-term success of any product. There were other languages that were very heavily marketed: VB and C++ (and FoxPro, too, I think) by Microsoft, Delphi, and about a million other RAD tools. Being free was one of the reasons, but so was targeting the web, and Gosling's design of wrapping a JVM that gave people what they wanted (dynamic linking, fast compilation, garbage collection) wrapped in a language that felt familiar and non-threatening. I don't remember what were Delphi's issues, but a big project I wasn't involved with at the same organization I was working at circa 2002 (I was all C++ back then) did Java on the server and Delphi on the client. Maybe Delphi didn't have a good server-side story?
It sure did, as long as you were a Windows shop.
No need for Windows. There was an official Linux implementation back in the day, codenamed Kylix.
Kylix was full of issues and was a mismanaged product variant, largely ignored.
If I recall correctly it even depended on WINE.
> and native monitors
Are there any plans to fix those?
Some ideas; nothing concrete. Need to figure out cost/benefit.
async/await is fantastic and pretty much the direct inspiration for the exact same feature om ES6, where its a godsend.
C# i s one of the best dev experiences in any language/IDE
Async/await is fantastic compared to not having anything at all. It's a big downside compared to other things you can do (cf Go, Erlang), and hard to get rid of. It's the classic case of getting easy short-term benefits at the expense of long-term costs. It's main benefit from an implementor's perspective is that it's better than nothing and very cheap to implement quickly. Just as .NET has lived to regret reified generics[1], it will live to regret async/await.
[1]: Maybe not C# programmers, but there are easier ways to do a single-language runtime.
The other main alternative to `async/await` with the Promise<T>/Task<T>/future<T>-paradigm is Rx's Observable - but let's not pretend that because Observable<T> is capable of handling every situation that Promises can doesn't mean we should use it everywhere - Angular tried that when they changed their HTTP client library to use Observable<T> instead of Promise<T> because they wanted to expose retries and other nifty logic - but in doing-so made the learning curve a vertical brick-wall for everyone involved (and now we can use Promise<T> with support for retries and better error-handling anyway) in addition to adding a very hard dependency on a fast-moving project (e.g. Angular 6 comes with a load of RxJS compatibility shims because RxJS radically changed their API design (again)).
Go's goroutines seem okay - but I don't like how much control they take away from the programmer. For example, last year I worked on calling-into a black-box C DLL from a Go program and we learned the C DLL had code that was actually simply terminating the thread inside of it (by design!) because the author of the C DLL assumed ownership of the thread. That caused a problem for us because Go's goroutines are scheduled by the Go runtime and it will never let you give-up ownership of a Go thread - and I couldn't see how I could use my own thread (e.g. getting a thread from a native OS call to keep it outside of Go's control) with goroutines. The project was almost DOA after we learned this, fortunately we convinced the author to always return instead of killing the thread. I'm not sure if anything's changed in Go since then that would have made things easier for us. But since then we haven't used Go for anything new. The only reason we used Go was because it gave us binaries that "just worked" for Windows, macOS and Linux without having to worry about Java, .NET and other dependencies - but I wasn't happy about the ~20-30MB-sized executable output.
What we're doing in Java is letting you choose, for each sequential computation, whether you want a heavyweight (kernel) thread or a lightweight usermode thread (like a goroutine), and if you choose the latter, you can use your own scheduler (schedulers are written in Java, and aren't a part of the runtime). No promises, no observables, no async/await, and no thread control issues.
C#'s language is much better designed IMO. Can anyone compare LINQ and Java's streams and not pick LINQ? Feels much sloppier in Java and Java came second.
Yes, I personally prefer streams, LINQ seems to me like mixing SQL in C# and that feels wrong.
That's probably what I like most about it. But that aside, the naming of tasks seems much more consistent in C# than Java. Java already had streams and maps and mangling those names makes searching for documentation a pain.
I do like all of LINQ's extension methods, but not the syntax myself.
> It needs close parenting. Java has been ruined by the push to include everyone's pet feature.
Oracle is moving to a faster cycle of development. There are some of us who strongly feel that some of their decisions are based less on what's best for the language and more on catering to the popular-and-loud crowd. I'll never forgive the addition of `var` to the language.
That this very thread exists suggests a certain “C++ ification” that happens to languages.
I really respect the slowness of the go maintainers in adding new stuff. I also suggest that we all ponder our tooling some; Writing java with emacs or vi is a materially different experience than using eclipse or idea and var style type-inference seems almost silly with those tools which do it for you.
>Writing java with emacs or vi is a materially different experience than using eclipse or idea and var style type-inference seems almost silly with those tools which do it for you.
It's not so much the extra typing that's the problem, it's the extra reading. All the stuttering is visual noise.
This. If you want to revolutionize the profession, come up with something that helps with reading as much as modern IDEs help with writing. My answer is that boilerplate should be generated somewhere else and largely ignored.
IMO, boilerplate source code shouldn’t be generated at all — the tool chain should directly emit the required object code. And code generation shouldn’t require a different language — or special comment syntax.
Depends on the toolchain. Everybody knows how to generate ugly source files, but it takes more effort to add AST nodes during compilation (or symbol table entries with types plus object code) and might lead to errors nobody understands how to fix (because you can't read the declaration of the thing you're trying to interact with).
>I'll never forgive the addition of `var` to the language.
I'm inexperienced with Java and didn't know this existed until I saw your post. It seems like a nice shorthand to me. Can you explain why you don't like it?
Misconceptions mostly. Java developers are some of the most conservative developers around.
And there you have the answer to why Java hasn't evolved that much, or when it did, why it needed to care deeply about backwards compatibility at the source level. It's because Java developers want it that way.
The irony is that people are now abusing "aspects" and "dependency injection" via frameworks like Spring that bring everything but the kitchen sink, but then the language becomes effectively dynamic, as via those annotations all static type safety goes out the window.
Therefore I find it interesting when Java developers complain about Var, because the ecosystem has in my opinion bigger problems. Compared with annotations Var isn't a problem because Var is statically checked, so here we have a clear case of missing the forest from the trees.
> Java developers are some of the most conservative developers around.
You're right, there are loads of conservative Java developers. It's one of the the things that makes me love using the language.
> The irony is that people are now abusing "aspects" and "dependency injection" via frameworks like Spring that bring everything but the kitchen sink, but then the language becomes effectively dynamic, as via those annotations all static type safety goes out the window.
> Misconceptions mostly.
But drop the strawman argument and borderline ad hominem. It'll do you better.
Spring is terrible in that sense, and you do find professionals arguing for Spring and strongly typed language. That said it's not an argument I've ever heard before being part of a Spring centric shop.
What ad hominem? That Java ecosystem is wholly dependent on crazy amount of magic is not a personal attack, really, but a mere sad admission.
I'm of the mind that it is un-Java like. Whether or not there is a "Java" as a philosophy is not the hill I'm trying to die on.
Consider these contrived lines of code:
```
String first = someMethodCall();
var second = someMethodCall();
```
The first provides more useful information at a glance. I don't see any value in the "nice shorthand." Typing out "SomeStupidClassName" has never once been a material bottleneck in my 15+ years of programming, but now we have this new option that caters to the lazy, and in doing so makes life harder. Now I have to either ban it, embrace it, or come up with some ruleset around when you can and can't use it. Why? Someone can't be bothered to type a few extra characters.
It reminds of my grandfather, a former professional ball player, but one who played back in the days where there weren't these multi-million dollar celebrity ballplayers pissing and moaning in the press about just how hard their life is. He used to call those types "high-priced cry-babies," and I really feel a tinge of that in dealing with folks who just wholly embrace `var` and give folks like me shit for having criticisms of it. Perhaps that's just my old blue-collar showing but your convenience in writing a handful of characters simple will never enter into my considerations.
I love using Java, I love the addition of things like streams, the Optional type, etc. My sibling comment is a little right, and very wrong. Lots of Java developers have a certain conservatism about them, I'm mostly certainly one. But there are large reasons to hate it.
> I don't see any value in the "nice shorthand." Typing out "SomeStupidClassName" has never once been a material bottleneck in my 15+ years of programming
There are a few rather glaring spots that I've noticed.
First, when you're refactoring, you've now got to edit every spot where a variable of that type is created. At the very least, when you're just renaming a class, your IDE can help you, but you still create a lot of diff noise. At worst, when you're splitting up a class or otherwise shifting responsibilities, you may end up with a whole lot of yak to shave. This is not just an annoyance; it's a latent code quality problem, because it creates a disincentive to clean things up.
Second, I've seen it become an impediment to writing clean code in the first place. I have encountered situations where it's clear that the author wrote
because creating intermediate variables would have meant having to type out (and burn precious screen real estate on) some ridiculous set of 60-character generic type names.I've even seen it result in situations where data gets copied or otherwise processed excessively, because the explicit type annotation resulted in an upcast that shed some useful feature that a subsequent developer shimmed back in because they trusted the explicit type annotation and not a function's actual return type.
So yeah, I decry your assertion that this feature is about being lazy. This feature is, at least for me, about code quality.
It would appear that all of your code quality arguments are "people are too lazy to actually write good code." So it's not clear why you would decry that assertion. I don't have a horse in the race one way or another, but you're not refuting mieseratte's objection.
A tool that encourages you to shoot yourself in the foot is a bad tool.
If a language encourages bad patterns by making good patterns overly verbose, the language should address that.
If explicitly naming the type is better, then do that. I find it useful for cases such as:
Versus:To quote another comment from this thread:
> [D]rop the strawman argument and borderline ad hominem. It'll do you better.
Sure, in a contrived example where the return type is not obvious, you perhaps shouldn't use var. What about real world examples where the type is more often than not obvious?
I mean go also has var for very similar reasons as java has it.
Generics was introduced in Java in 2004 with J2SE 5.0[0]. [0]: https://en.wikipedia.org/wiki/Java_version_history#J2SE_5.0
Generics should have been in Java (and Go) from the beginning.
Those surely are not the proof that Java adds "everybody's favorite feature". I think the parent means the newest Oracle projects (Valhalla, modules, value types, streams, and so on).
(Technically, I think Java modules have been floating around in weird, likely broken suggestions since before Oracle bought Sun. As far as I could ever see, the primary design constraint was always, "NOT OSGi!")
The concept of generics/parametric polymorphism has existed decades prior to that in languages like SML and proven to work rather well.
Every time I get to edit pre-java 5 code is a reminder how useful generics actually are.
Generics were added so late because they had to figure out how to do it properly, correctly, on the first time.
And they failed. They did the best they could, but the fact that they were added onto the language later (plus the desire for backwards compatibility) means there are lots of gaps and warts in what they wound up implementing.
And the hacks to work around them started rolling in quite quickly. For example:
http://gafter.blogspot.com/2006/12/super-type-tokens.html
At its root, the real problem here isn't "reification good" vs "reification bad", per se. Haskell has an excellent implementation of generics, and erases types far more aggressively than Java does. C# also has a very good implementation of generics, this time based on reification.
The problem is more that Java's particular mix of design decisions resulted in a language that operates at cross purposes with itself. Once upon a time, back in the beginning, Java was a reflective language. Being reflective requires type information to be available at run time, though. When Java decided to use type erasure in its implementation of generics, they created a really bad set of interactions: They kneecapped reflection, so now you can no longer call Java truly reflective; it's only partially reflective. You can no longer effectively and accurately reflect on what have come to be some of the most-used classes in the language. And, at the same time, they forever sealed a rather important corner of the type hierarchy off from generics. They also delayed a bunch of type checking until run time - after types have been erased - so that certain things can just never be made to cleanly type check. Meaning you also can't say Java is any more than partially generic.
IIRC, the choice was between Java 5's version of erasure or drastically modifying the JVM, with the likely result that the new Java would be incompatible with the old Java. (Like C# has done.) This was considered unacceptable at the time. (Unlike C#.)
Erasure is important in allowing interop from other JVM languages. Reified generics would be nice from the perspective of just writing Java, but the interop story on the JVM is one of its best selling points.
Generics in Java are giant hack from the early 2000’s to maintain backwards compatibility with 1990’s-vintage JVMs. C#’s generics we’re done right.
I'm of two minds of that, these days. I came from C# so of course reified generics were of course better, of course--but these days I would rather have them in Java more and in C# less. I often find myself wanting to write the moral equivalent of `IFoo<?>` in C# and end up having to have two separate interfaces, etc. just to have a way to handle a list of a thing that I end up working on in an abstract manner. (Though I'd caveat that that is more of a gamedev-related concern than in Java/Kotlin, which I write for work.)
I do appreciate, though, that when Microsoft decided to do generics for C#, they did so decisively. These days, when C# gets a new feature, it seems like it's the complete opposite of decisively delivered.
You mean like this, right?
I don't mind that. It can even be an aid to organization - all the generic stuff goes in the generic class, all the stuff that doesn't rely on that can go in the base class. But it would be nice to use something like <?>. Too bad generics don't inherit implicit casts, like A<int> to A<object>.I do mean that, and I do mind it a lot when I'm so used to just being able to erase the generic.
There are performance implications to type erasure, to be sure, but when our computers are mostly all future machines from beyond the moon, I'm more interested in minimizing the impedance between my brain and a solved problem.
This is the only non-bad consequence of type erasure I'm aware of. On the other hand, a lot of code I've written in C# would be impossible or severely hacky without type retention, like "new T()", "T is Thing", finding all classes that are derived from T, etc.
TBH, I'm just as happy passing a factory method in for that sort of thing. Because it allows you to do both and pick the one that makes the most sense for you in a given situation.
"Java has been ruined by the push to include everyone's pet feature."
Care to expand on this? Java is very careful to release new features.
It's not about being careful (they are--but always with the baggage of backwards compatibility), it's about not having a soul.
Java has made a U-turn in adding streams and related functional features on top of a language that used to be strongly for OOP (actually defining the meaning of OOP for a generation of developers.)
These different paradigms together make for code that does not read the same no matter who writes it.
I love how Go code usually ends up being extremely similar, no matter who write is. (Actually, Kubernetes is a counter example for this: Go should have gone even further in forcing style.)
If you think that "all code reads the same" is detrimental to developers, you are conflating the idea of developer (problem solver) to that of coder (keyboard typist.)
> Java has made a U-turn in adding streams and related functional features on top of a language that used to be strongly for OOP (actually defining the meaning of OOP for a generation of developers.)
So, by adding a feature which works extremely well with OO and enhances the language they have no soul? That doesn't make any sense. Javas soul is being a blue collar language. It leaves the experiments to other (JVM) languages and takes the parts which have been shown in the field to be useful for many cases.
Go on the other hand is a half-finished Java, produced for the sake of saying "We are Google, OF COURSE we have our own language".
I think what dullgiulio is getting at is when working with large, code bases and teams Java's multiple paradigms leads to more difficult code to maintain. I've been experiencing this on my current project and its completely new development.
One package is written in the "new" functional style, another is written in "old" object oriented style, other parts use classes for nothing more than name spaces to house static methods. The reality is it's already a mess.
Java is a blue collar language in the eyes of people interested in language design and to be fair, some of them recognize its merits. The majority of programmers care more about their domain in which they are working and less about the language used in that domain.
"Java, from Bell Labs"
That was Limbo actually, so it goes well with Go.
How do you feel about streams/lambdas with Java's checked exceptions?
This is absolutely the biggest wart in modern Java. We now wrap all methods that can throw checked exceptions to rethrow unchecked. This works well for web services that can just retry or rollback and report an error to the client on transient exceptions, but is insufficient for many applications.
> Java has made a U-turn in adding streams and related functional features on top of a language that used to be strongly for OOP (actually defining the meaning of OOP for a generation of developers.)
But isn't that just an inevitable outcome of aging? The only way to never age is dying young.
And practical use of Java had stopped being textbook OOP (which means using classes to model the world and putting your logic where the data is) long before the streams API etc came along. Actually the shift away from textbook OOP integration between logic and data already started when the beans pattern drowned us in getters and setters, which already happened when Java generics were still the Pizza language sitting as a draft on Odersky's desk. I even suspect that lack of generics was a confounding factor in that shift, because when you look at a pre-generics codebase (which I happened to do just yesterday) you will find that the brave men and women back then spent an enormous amount of boilerplate just for hiding untyped collections behind typed facades which is a form of using OOP features (meant for modeling the world) for program structure, which is basically what post-textbook-OOP Java is all about.
I have no idea what having a "soul" means. But I suspect that my career has been well-served by being dead inside.
> I have no idea what having a "soul" means
A design philosophy. Some sort of measurable practice that influences design. This is not limited to Java. Design by committee has been destroying the maintainability of many languages.
"Some sort of measurable practice", not anything remotely like a "soul", in fact invoking "you have no soul" usually means you lost the argument and are now desperate enough to say anything.
> streams and related functional features
Streams are inspired by some idioms of functional programming. But they are not functional, and they cannot be made to be functional, because it is impossible to evaluate one without causing a rather glaring side effect.
> I love how Go code usually ends up being extremely similar, no matter who write is.
I want expert code to be concise and uncluttered. When everything reads like a novice wrote it, that's a problem.
Hey ! Plus/4 ! Finally someone who knows it too :) Since it was missing the C64 sprite abilities it made it even more interesting to code something cool on it. Though at least it could play some games like Ricky Rockman. :)
semi-grey beard here, too. Disagree about Java - it is stagnating because it hasn't kept pace with language innovation.
Aka catching up with Common Lisp and Standard ML.
> For me it was like going back to being 8 years old on my Commodore Plus/4 and really enjoying writing code again.
Your comment is the problem with the Go community. I have seen a number of comments from Golangers that they want a "fun" language that helps them reminisce about the past. They also want to write a lot of senseless boilerplate because for them more typing is somehow about them reliving their past. And tracking down nil pointers... and writing containers for every concrete type.
The simple fact is software development has gotten more complex because business requirements have changed and Go does a poor job of addressing that with its limited feature set. The rest of the programming world has accepted that we need better tools whether it be toolchain stuff or language features. Hate on Java all you want but at least it, like most other non-Go languages, has realized the need for better tools in the toolbox.
> we need better tools
Sure, and the "right tool for the right job" is still my mantra. Go is a very good at solving for an incredible amount of tasks in many problem spaces. Users will always want to bend tools to work in new places, and that's okay -- sometimes it isn't a fit.
Have some business requirements that make using Go a chore or a pain? Use a different language, or restructure the requirements.
> Have some business requirements that make using Go a chore or a pain? Use a different language, or restructure the requirements.
That's such a cop out answer when our industry is basically doing fad-oriented engineering. It's great when you can greenfield build a project but when you're taking over a project the selection of language has usually been decided. Or you know the management team has decided for hirability reasons to use X. Or just legacy requirements don't match reality eventually.
This is why we need more expressive languages than Go. Requirements change over the course of a projects life and what made sense in year 1 rarely makes sense several years later.
So did the mess that C++ become.
Google are bad at building developer communities (or at the very least they don't care about it much). They build things for themselves and their way is the Ivory Tower way of running a community. The overlords decide and their word is set in stone, the plebs should just accept the fact that their concerns and use cases are just too trivial and they should listen to the smart people at google and do it their way, which is the only way.
This isn't the first time google open sources internal tools, trying to build a community but really ignoring them completely. GWT, Closure compiler and of course Angular comes to mind.
Angular built a great momentum and community, and the angular team at google basically ignored most ongoing concerns to work on their next big project that'll fix everything (first it was Object.observe, then dart then angular 2).
Contrast google's handling of Angular to Facebook's handling of React (and react-native) - The routinely incorporate community influencers into the core team, they include other major corporations in their decisions and community, actively engage in developer relations to get feedback, explain controversial choices and build a community.
Sun's model with Java is even more different - incorporating major stakehodlers in the language into the actual decision process via the JCP.
Of course Google isn't the only ones who are bad at building developer communities around their open source. Apple and Amazon barely even try.
If google is Ivory tower, Microsoft is the Herbalife way of building a developer community - actively supporting influencers, providing official seals of approval and using a top down hierarchy of advocates. They do listen to community input a lot more, but Microsoft is still the overlords of all their projects.
Saying that they have failed to build a community around Golang is totally wrong
I'm so tired of people purporting to speak for "the community" - especially when they diametrically oppose my own views. It feels a lot like they are co-opting me for their own agenda while simultaneously excluding me.
The things mentioned as evidence that community doesn't matter have a lot of buy-in from the community. Modules in particular are an effort that - at least from what I can tell - is heavily driven by non-Googlers (in particular Rog Pepe, Paul Jolly and Daniel Marti are people who put a lot of work into making modules actually work for practical workloads).
These kinds of pieces only make sense if you have an extremely limited view of who is or is not part of "the community" - in particular, if you throw everyone agreeing with the Go team out of that bucket.
Haskell is an excellent example of a community-driven language. It's more mature and advanced than most commercial offerings too, offering a superior type system, fast and efficient executables, lightweight fiber concurrency, software transactional memory, higher-kinded parametric polymorphism and many more features.
It's interesting that you bring this up, because I'd consider Go and Haskell as almost polar opposites. Go is a simple language which lacks expressibility but with strong opinions on almost everything from formatting to architecture, which leads to a streamlined (and refreshing) developer experience.
Haskell is a complex language, with an expressive type system giving you more tools and guarantees but I would call the learning / dev experience everything but streamlined.
I wonder whether the difference in the organisational structure (single entity vs community) manifests in the characteristics of these languages.
I think a big part of the success of Haskell is down to its language extensions. New features are introduced as off by default and can be opted into. This allows all kinds of crazy features to be introduced without really impacting users if they don't want to. It does allow the community to be quite experimental without fear of destroying things.
Haskell also has the motto "Avoid 'success at all costs'."[1] What that means is not that they want to fail at the things they set out to do, but that they want to ensure the language is never in a position where it's so important that certain behaviors or code be kept exactly the same because there's too much code that depends on it in the wild that they can't experiment with some new interesting feature in the next release. It is fundamentally an experimental language; while it is used for certain production applications, there's a sense in which the Haskell community+language simply can not ever become a top-tier language, by the community's design. If anything like Haskell ever does get into top-tier status, it'll be something that claims Haskell as a parent, not Haskell itself.
[1]: Edited, thank you maxiepoo.
The "motto" (if it can be called that) is not "avoid success", it's "avoid 'success at all costs'" which makes the sentiment clearer: increasing adoption should never be a priority over principled design.
Interestingly, the motto can be parsed two ways, with different connotations.
"Avoid (success at all costs)" or "(avoid success) at all costs."
they want to ensure the language is never in a position where it's so important that certain behaviors or code be kept exactly the same because there's too much code that depends on it in the wild that they can't experiment with some new interesting feature in the next release.
Squeak Smalltalk had this. "Burn the diskpacks!"
You can opt out of using those extensions in your own code, but many of them are now deeply embedded in the library ecosystem.
Indeed. It's like a feature democracy where each library gets a vote on which features it finds most useful. Those that then get deeply embedded are clearly those that are most useful. Those features that aren't particularly used don't really get anywhere.
I sort of agree with that, but features often have externalities. For example, let's say I choose to use lambda case because it makes some of my code a little bit more concise. From my narrow point of view, that seems like a win. But then it's one more piece of syntax that external tools have to deal with, one more barrier to anyone trying to develop an alternative to GHC, one little piece of additional complexity to throw off Haskell newbies who want to read my code, etc. etc.
True. Those are all good points. I do still feel the upsides more than make up for it, but yes I am glad I'm not responsible for developing any external tools for Haskell!
Do you need an alternative to GHC?
What's GHC's bus factor? (Actual question.)
The top 10 contributors are active and fairly well known in the community (I recognize 5 of the names at least), of 170 members with commit bits for the project.
https://gitlab.haskell.org/ghc/ghc/graphs/master
I would certainly like a compiler that was fast enough to be usable with realistically-sized codebases. I'm writing this while waiting a couple of minutes for ~10,000 lines of Haskell to compile.
Interestingly, compilation time was one of the motivations for Go.
Can you create a Haskell compiler that is faster than GHC? Probably. Can you create one that is fast? I'm much more skeptical of that.
Ironically the community designed language is so complicated that programs can only be mantained by small teams of original authors, while the tightly controlled language is consistent enough that large groups can collaborate on one program.
Haven’t most of the top Haskell devs been on the Microsoft payroll for years and years?
SPJ and a fe wothers work at microsoft, but they are few compared to the community. Likewise, several high profile haskellers were at Standard Chartered, but I don't believe this was significant.
I would say the main drivers of Haskell these days are academics, PhDs and consultancies.
SPJ is a Microsoft Research Cambridge employee, but as far as I know that doesn't give Microsoft Research (let alone Microsoft itself) any undue influence over the language's direction, they can't gatekeep or ram things into Haskell or GHC.
It would still be corporate sponsorship and not a true community effort though. Not having to work to pay bills makes it much easier to run a large project.
Pretty sure SPJ does have to work to pay bills. It's just that they work for a research institution and can thus do part of their work in / on the project. In no small part because CS research was one of the use cases for creating Haskell.
Majority of open source contributors have a day job.
And Haskell's market success is still very limited. This isn't a very compelling argument for community-driven language design. I think a better one would be Rust (to the extent that we can agree that Rust is designed by community) which seems to be getting a fair amount of market penetration given its age.
Haskell deliberately "avoids success at all costs" though.
It don't think it matters whether it avoids success deliberately or accidentally.
And the community is by far the friendliest I have encountered!
Wasn't Haskell created by a university? That would be more tax dollars than community-driven.
Haskell was designed by a committee of researchers from various universities, but the first(?) working compiler came out of a University of Glasgow project.
Aren't tax dollars the ultimate manifestation of "community driven"?
I think Haskell was largely derived from Miranda, a commercial product.
Perhaps 'inspired by', but not 'derived from'.
I've only followed the modules controversy tangentially, but I did go to a presentation by Sam Boyer, where he started getting emotional and muttering about organizing some sort of resistance movement. Russ Cox made it a point for Go's regex library to have bounded worst-case performance at the cost of worse average time, and Go's fast compilation times are a source of pride for the core team. That they would frown on a dependency management system like dep that is based on a NP-complete algorithm doesn't seem to have struck Sam as a total deal-breaker. In this respect I am fully behind Cox. As the OP says, the Go team has taste, and I want them to keep the language simple, sane and manageable, not a monstrosity like C++, Java and now sadly Python as well.
"... can't we have something like OpenGo ..." > " ... this won't happen ... "
I'm confused as to what's stopping someone from forking it, calling it OpenGo, and building a community around that.
This could be done, of course. But how likely is such an approach to succeed? It would effectively create a new language and in turn to a new ecosystem. Why not just use a different language (Rust comes to mind) then?
First of all, Rust is quite a different language to Go, so I wouldn't consider it a drop-in replacement. But this also means, that those, who don't like too many features of Go might really be happier with Rust.
But adding features to Go by popular demand would make it a new language and a new ecosystem. So if that is what people want, they should go ahead with it. Even if it just is an engineering prototype, which showcases features that Google should add to the Go main line.
> It would effectively create a new language and in turn to a new ecosystem.
And that new language would be Go++ (i.e. Go with generics) and what would be wrong with that?
Consider how C++ started.
It was nothing more than a preprocessor extension to the C language called C with classes.
There is nothing stopping that Go++ turning into the Go equivalent of C++.
So back to the previous OPs question, what's stopping someone from forking it?
Because after his experience being forced to drop into BCPL from Simula, Bjarne sweared not to do it ever again.
So when tasked to write his distributed network application in C at Bell Labs, his first step was to build something that would bring his Simula back, instead of bare bones C.
Sure, but that's a different issue. If it doesn't gain traction then the decisions made by Google are clearly considered the best approach, at which point what relevance does the openess have?
> If it doesn't gain traction then the decisions made by Google are clearly considered the best approach
I don't think this is necessarily true. There are a lot of fuzzy factors that go in to making a language successful and it's tricky at best to get these right, even if openGo was a better language.
As a practical example, go effectively has one person who's full time job is keeping the testing infra happy. It'd take time and money to establish this in a go fork and going without it would make development of the fork much harder.
> It'd take time and money to [develop a language]
Isn't that part of the fundamental conundrum though? If your need is so great, and so common, and so beneficial to get solved then the cost of development should be less than the benefits accrued, and smart business would bring that money and time to bear on the challenge and solve it. As Google has done with the main project...
Saying there isn't time enough or money enough to solve the technical challenges sounds like saying those challenges aren't important, or valuable, enough to be prioritized. Which is to say our wallets are in agreement with Googles wallet on the issue of generics in Go.
The considered 'best approach' from the community isn't the best technical approach to the technical issues, it's the best balance of pragmatic solutions for produced benefit (ie ROI), that create a sustainable project/product.
A major thesis of the article is that even if something was successful outside of the core team, they would ignore it in favour of their own ideas. Go modules is the reference case.
Go modules is exactly the reference case, but let me expand on that a smidge, because that was when this become crystal-clear to me.
There's one exchange in a blog post linked from the article[1] about dep/modules that I think is illustrative of the entire issue (double >> are quotes in the article from Google/go, single > are commentary from the linked blog author):
>>Although it was a successful experiment, Dep is not the right approach for the next decade of Go development. It has many very serious problems. A few:
>>Dep does not support using multiple major versions of a program in a single build. This alone is a complete showstopper. Go is meant for large-scale work, and in a large-scale program different parts will inevitably need different versions of some isolated dependency.
>Russ [a member of the Go team] has asserted this from day one, and has brought several examples out in evidence, but has simply not convinced the committee that it’s true. Calling it a serious problem, let alone a showstopper, is a significant overstatement. The strongest claim that can be made on this point is that it’s a matter of opinion.
That, to me, is that. Go is Google's language, and Google said that for them, not supporting multiple versions of a dependency was a showstopper. The community read that and saw it as a point for debate, and the author continues to try to debate it in the article.
And that's the issue! It was not a point for debate. Google was being forthright. Google was saying "from day one" it was a literal showstopper, and the community seems to have read it as a figurative showstopper. Who was right in this instance is irrelevant; if the community wants to litigate Google's decisions rather than integrate them into their tools/patches/etc., then the community will not get those things adopted into go.
[1] https://peter.bourgon.org/blog/2018/07/27/a-response-about-d...
> Google was saying "from day one" it was a literal showstopper,
For a long time people on the C++ standards committee insisted that we need trigraphs because it had to support systems that didn't even have ASCII. We still don't have pragma once as a standard replacement for include guards because other people seem to compile on some crazy network typologies where it cannot reliably identify files. Taking every "literal showstopper" serious without questioning its merits gets you stuck with C++98 and a lot of quickly accumulating legacy cruft.
> Dep does not support using multiple major versions of a program in a single build. This alone is a complete showstopper. Go is meant for large-scale work, and in a large-scale program different parts will inevitably need different versions of some isolated dependency.
This has zilch to do with not being community led, so perhaps the complainers should fish for better arguments. Rust makes the exact same call wrt. this particular issue, and it's very much a community-led language, with a public RFC process.
Rust does support multiple major versions of dependencies in a build.
The only thing we don’t allow is multiple copies of dependencies that link to native libraries, and the -sys pattern means that this is rarely an issue in practice.
Yes, but the folks who are now complaining about Go not being 'community-led' enough were pushing for a module system ("dep") that does not allow for this, and being told that not allowing multiple major versions in the same build was indeed a major problem (and even a showstopper) with their approach. I'm just pointing out that this is clearly a bad argument for calling Go "not community led"! Sorry if that was unclear.
Wait just a second! :-)
1. Yay!
2. Yes, native libraries would be difficult. Or impossible.
3. Could you elaborate on the "-sys pattern"?
https://doc.rust-lang.org/stable/cargo/reference/build-scrip...
Reminds of Java modularity debates, where OSGI folks kept railing against approach Core Java people in Oracle were taking. In Java's case it took really long time for Oracle to prevail over self-appointed community leaders and modularity experts pushing OSGI and come up with better solution.
I am happy that similar kind of situation did not prevail for that long in Go's case.
IIRC, one of the major differences between OSGi and the Java modules work (Jigsaw?) was that OSGi had major.minor.micro.stringy version numbers and Java had to have giant.major.minor.micro.stringy because Java's versions would always start with "1.".
And the final Java module system is much less flexible than OSGi (which is much more than a module system, which in turn is both a strength and a weakness in this case). Or do Java modules support multiple versions of the same package?
What has happened in the past is that features developed in a fork make their way back to the mainline - iirc in the Java world this was a thing with the virtual machine, particularly garbage collection. If a hypothetical OpenGo can solve the generics problem in a satisfactory way then it could make its way back into Go. Well, licensing and such notwithstanding.
Rust is not a suitable replacement for Go.
Why not? I consider Rust to be superior in almost all respects apart from learnability and compile times. So if these two downsides are not more relevant than the downsides of Go (in a certain context), I don't see any reason why Go could not be replaced by Rust.
It's not about learnability, compile times or anything like that - Go has a high-performance concurrent GC and Rust does not. There are whole domains, problems and the like that are basically unapproachable in Rust unless you're up for reimplementing half of a LISP system beforehand. (And people do - that's what the ECS pattern often boils down to!)
Rust is a systems language (as in C, C++, or Ada), Go is an application language (as in Java, etc.).
Ah, the Rust fanboys who vote down comments of anyone who criticizes Rust. They are actively trolling forums and spoil Rust for the rest of us. (IMHO, similar behaviour played a big role in the relative lack of success of CommonLisp, which is another language that I really like.)
So I know both languages, have chosen Go instead of Rust for a larger programming project recently, and have 30+ years of programming experience, so I feel somewhat qualified to answer the question you ask---if you meant it seriously, which I doubt.
People use Go for its simplicity, good tooling, good backwards compatibility, fast and modern automatic garbage collection, extensive libraries, and fast compilation speed.
Yes, Rust is hard to learn, puts a constant high cognitive load on its users even once they have learned it, is relatively fast moving - meaning code you write now will likely not be idiomatic in a few years from now -, and has slow compilation speed. It has many other strengths, as you rightly point out, but most of them will not be a reason for someone who uses Go to switch to Rust. For example, most people who use Go do not need or want to quench the last performance out of their CPU and are less obsessed with zero-cost abstractions. Many C++ programmers, on the other hand, might appreciate these features of Rust.
Rust and Go are simply not languages that compete with each other. Go is a competitor to Python, VisualBasic/Xojo, and various server-side scripting languages like Ruby and PHP. Rust is a competitor to C and some uses of C++, and maybe to languages like Ada and Haskell in some safety-relevant domains that do not require a formal language specification.
The OP could have just as well suggested to use Ada instead of Go. It is possible to write Ada like Pascal, making it almost as easy to use as Go, but the suggestion still doesn't make much sense.
I'm not a Rust "fanboy". I'm not even a Rust developer, but I know the language somewhat. I've had my time with Go, but wasn't satisfied with it. Some things I didn't like in Go are better solved in Rust (generics and error handling for example).
But thanks for replying seriously, even if you doubted that I wasn't only trolling (what was not my intent).
Running project like this is a full-time job. It can't be done in a spare time.
Momentum mostly.
Momentum.
The post makes it sound as if Go being Google's language (not the community's) is a bad thing. I don't see where this sentiment is coming from as the strong opinions enforced by Go core devs is probably one of the defining features of Go.
As with many open source projects under benevolent dictatorship, this can result in streamlined and consistent features with long-term success.
Are you by any chance also a royalist?
To quote the IETF: “We reject kings, presidents and voting. We believe in rough consensus and running code.”
There is an assumption here that the community is unified, you are part of the community, and therefore if the community doesn't get what it wants, then you don't get what you want.
Actually, the community disagrees on stuff, so you're unlikely to get what you want, regardless. The only way you always get what you want is if you write your own language. (Assuming you're skilled enough to implement it.) But a language nobody else uses is of little benefit.
There needs to be some core team that makes coherent decisions. Unless the community is tiny, the core team is not the community. There are inherently tensions. However it's unclear that Go's core team is doing worse at listening to their community than other language's core teams?
100%. Look at Ruby's community. It's a pretty great community in my experience, but at the end of the day Matz, Kokubun, etc. act as gatekeepers.
Democratic creation of software would be a disaster. There's so many philosophies of development and differing opinions that you wouldn't be able to make progress. At the end of the day the reason a language exists is to implement the vision of those who created it. The community is a labor force to implement, test, and verify the decisions made by the heads.
I totally encourage people to write their own languages though. A language no one uses can be beneficial to computing at large as experiments in language implementation. We all know silly languages like Brainfuck that you would probably never use for work, but can be useful learning tools.
It can be forked it if the opinions are widely shared and you think people will support you to run it better ? The tone of the article seemed to be casting shade relative to the institution funding it's development as the foundation of why there are problems ? It's normal in a large scale open lanuage there will be disagreement around direction of the project.
Many models around how those disagreements are resolved exist across many projects. You can choose one of them or build your own project and make your own decisions :)
I’m convinced that the people who just suggest forking an open source project with 100s or 1000s of man years behind it lack practical experience in building or maintaining large software systems.
Dismissing valid criticism with “just fork it if you don’t agree” has got to be some of the most useless advice parroted around open source.
To their credit, the Go team withdrew most of Error Values draft 2 [1] after a great deal of negative feedback. However, they've kept one unpopular change to fmt.Errorf().
We can hope that draft 2 of Error Handling [2] won't look anything like draft 1, for reasons such as [3].
[1] https://github.com/golang/go/issues/29934
[2] https://github.com/golang/go/wiki/Go2ErrorHandlingFeedback
[3] https://medium.com/@mnmnotmail/golang-how-dare-you-handle-my...
From [1]:
> The most contentious point of the original design was the special-case handling of a trailing ": %v", ": %s", and ": %w" in the format string, which did not follow the usual printf model in which the meaning of % verbs is context-independent and all non-% text has no meaning at all. We will remove those special cases from Go 1.13.
At least, they removed the really ugly special cases!
[1] https://github.com/golang/go/issues/29934#issuecomment-48968...
> Python has always been Guido van Rossum's language regardless of who he worked for at the time.
(from the article) I think the high-level point that Python has not been tied to a company is true, but it's not been true that Python is "Guido's language" for decades. Python has a very effective community process. Guido has certainly served as a tie-breaker and of course as BDFL (no longer!), but I see the language being largely steered by the community and community leadership.
I believe that solid (benevolent) dictatorship is better(faster, can make unpopular decisions, etc) than democracy.
I don't like any language, but tbh I hate go the least, and I believe this is because few very very dedicated people with enormous amount of experience can simply make unpopular decisions.
This.
But Go is one in a long line of proprietary languages that those of us who have been around the block know to stay away from. Recently: Java was Sun's, C# is Microsoft's, Swift is Apple's, Go is Google's. With any luck, all will be footnotes in ten years. Those of us who knew better than to get invested in them will be fine. Everyone else gets a chance to learn something.
Each of the languages you mentioned is wildly popular, well run, and not going anywhere. If you chose not to learn Java a decade ago because you "knew better" you aren't any better off than you were before, given that this language is the core foundation many companies and open source projects, maintaining relevance for 20 years. Not learning Swift now means you won't be able to do effective IOS development. Not learning Go means you're missing out on the language that a lot of important systems software is built in such as Kubernetes and Docker and won't be able to contribute to them.
I think also it's inaccurate to call any of these languages proprietary. They are all open source and if you wish you can fork them and make your own. C# in particular is now run by the dotnet foundation which is nonprofit with elected board seats, putting it in a better place for governance than many community driven languages.
Whether Kubernetes is important for you, and whether Go matters as a language for implementing it is very much the question. Kubernetes was started in Java, and Kubernetes developers themselves describe it as incoherent mess of dozens of daemon programs in various stages of completion in the process of refactoring Java idioms into Go while introducing redundancies due to Go's limited code organization facilities.
You can fork them and become (or remain) irrelevant.
Ios development is a walled garden. C# is barely used except to target Windos. Go? Too early to say. If it did fade away, who would miss it, really? Its express design purpose was to be not powerful enough to make big mistakes in. Has it transcended that? If so, what is its purpose now?
Java has shown staying power, despite its shortcomings, although its connection to the Apache Foundation ("where projects go to die") offers just a ray of hope.
I write software for embedded Linux devices in C# for a living, not exactly targeting Windows now is it?
That use qualifies neatly under "barely".
>C# is barely used except to target Windos
The Unity game engine use C# for scripting; I wouldn't call that "barely used".
Anything less than a billion lines counts as "barely used".
Anything less than ten billion lines counts as "little used".
How big is Unity?
Unity is behind "Pretty much half of all games " on all platforms [1] I suppose it's possible there have been 100K Unity games written if you include abandoned efforts, education etc. No idea of mean LOC but that multiplies out to a lot.
Also Xamarin which if I read this [2] correctly owns a third of the mobile app market. I don't know whether that's credible but apps are a huge market.
It is possibly that could get you the 10 bn. It is quite certain that there has been more than 10 bn lines written on.NET Framework, probably far more, as for 15+ years it has been the dominant language for corporate apps built on the Microsoft stack.
On top of that.NET Core is growing much faster over time,according to their blogs, and it's cross platform as you know.
[1] https://www.google.com/amp/s/techcrunch.com/2018/09/05/unity...
[2] https://www.datanyze.com/market-share/madp/xamarin-market-sh...
> With any luck, all will be footnotes in ten years.
Java and C# are not going anywhere, especially now that .NET works on OSX and Linux. Swift, unlikely. Go, still uncertain.
It's fairly short-sighted to criticize a language and hope it dies out in usage just because a corporation is chiefly responsible for it.
[0]: https://dotnetfoundation.org/about
Swift has garnered little traction outside iOS development, while other languages are gaining traction inside iOS development (javascript mostly, but Kotlin is also trying to make headway into iOS development).
I won't put survival rating for it to be very big. I'm sure Apple would keep it for decades to come, but I'm wouldn't bet on it being anywhere near as popular as it is now in the future.
IBM seems to be using swift on the server [1], probably most notably with their Kitura web framework [2].
TensorFlow also has a Swift API in development [3]. The people at FastAI seem quite excited about it [4].
[1] https://developer.ibm.com/swift/
[2] https://github.com/IBM-Swift/Kitura
[3] https://www.tensorflow.org/swift/api_docs
[4] https://www.fast.ai/2019/03/06/fastai-swift/
As long those languages aren't native, they won't be nearly as popular as swift. Swift doesn't need to be adopted outside of iOS development at all, because when people develop iOS apps the main choice will be Swift.
Swift is actually a very pleasant language, and apple provides some pretty good documentation although it could be better.
It's going to play out like this: Swift or Objective-C (for legacy codebases/people who already experts in objective-c and don't want to switch), and some other, slower languages that will always be second class.
> Swift doesn't need to be adopted outside of iOS development at all, because when people develop iOS apps the main choice will be Swift.
That would mean its fate would be tied to iOS, which is used only on phones built by a single company. When that company stopped producing mobile phones, or decided to use a different operating system, the language would die.
Correct, which would lead us to ask a pertinent question: Is this likely to happen in the next 10 years?
I would be inclined to say no, it's rather unlikely, short of apple somehow suddenly going bankrupt.
> Go, still uncertain
A lot of the tools in the modern ops ecosystem are written in Go, and that trend is not slowing down. Not even going to start to list them. A lot of devs also love Go and it has a very nice community. If you would have said this 5 years ago? Sure - but now it's past the 10y mark in age - I think it's safe to say Go isn't going to go away any time soon.
C# (and other .NET friends) aren't dying for a while, there's a large base of things written for multiple platforms on it, even though I think Microsoft isn't writing bunches of their core OS components in it.
Java isn't dying for a while either - anyone who still needs to support applets aside, or corporate applications written when Java was the fashionable thing to ship apps in, Android's install base will mean it's relevant for a fair stretch yet, even if OpenJDK fell into the sun tomorrow.
Swift I don't have much insight into, as I haven't done much with OSX or iOS in a while, and I have indeed not seen much uptake outside of those.
Which languages are you suggesting were/are good targets?
Python is technically not one company driving it but a bunch of developers on it are employed by large companies to work on it.
JavaScript is seeing a bunch of use in a great many places, but originated with one company's implementation and development.
Rust is pretty obviously one company's child, even though it is seeing decent uptake from other users. No predictions on whether it would survive said company dropping their work on it, though.
There is a mind-boggling amount of Java running around the world and more is still being written. Java won't die for several decades at least.
Experience can differ between companies and languages. I have been involved over the past year with Julia which is mainly ran and developed by the company Julia Computing. One might say that it is the equivalent operating scheme as the companies you mentioned. However, my experience with the community has been vastly different. If you go over the development issues list it is extremely satisfying to see how many are raised by the community and adopted into the language. In contrast to the omnipotent response of Google with the Go community. Having such differences implies that a company ran open source language can indeed be influenced by the community.
Having said that, just because it exists in Julia does not mean it can exist in Go. I just wanted to mention that there are exceptions to the rule which begs the question if it is a rule in the first place.
I agree completely, but good luck convincing people around here; I’ve had no luck for a few years now¹²³. It may be an age/experience thing – you might need to have been burned by it a few times before you learn not to build your house upon the sand.
1. https://news.ycombinator.com/item?id=8733705
2. https://news.ycombinator.com/item?id=17162582
3. https://news.ycombinator.com/item?id=18370067
Java is a footnote?
Not yet. But we can hope. Ten years is a long time in this business.
Core Java systems are being written and updated at: Oracle, Amazon, Google, Netflix, IBM and almost all the Fortune 500 companies.
Java is also used by Facebook, Microsoft, Salesforce, Apple, and many other companies that aren't necessarily known for their Java development.
Java usually is the base for every core banking system that's not old enough to have been written in COBOL (and even those are being migrated to Java in many cases) or hasn't been written from scratch in .NET.
The same Java is also used by governments throughout the world.
If Oracle vanishes completely tomorrow, there's huge incentives for a community initiative to completely take it over ASAP. There's probably tens of billions of dollars in existing code bases that have to be maintained and extended.
More than that, Java's already 23 years old. It's not a new language. Plus: https://en.wikipedia.org/wiki/Lindy_effect
Ten years is a very short time for a programming language. Python is decades old and hasn't hit its peak popularity yet. I don't know of a single mainstream language that has died, except for perhaps ColdFusion or ActionScript.
Depends on how you define death. There's ALGOL, SNOGOL, LOGO, Pascal, Visicalc, APL... They were all mainstream at a point, and they all still exist, but the userbase has become tiny.
OK, but it is "SNOBOL".
That too. :-)
> Python is decades old and hasn't hit its peak popularity yet.
What? Python's peak (2.7) has come and gone.
I really hope not. Modern Java is productive, fun and better performing than Go.
Modern Java is a pleasant experience compared to Go.
Can you elaborate? In what ways, is modern Java a more pleasant experience compared to Go?
Look at what happened to Kubernetes. They rewrote original Java code in Go and it's a mess because of Go's limited abstraction capabilities. Sure there's an element of rewriting in a new language and attempts to force idioms, but there's also the fact that Go literally hacks in special cases for generics, unavailable to users. They recognize the need for generics, but haven't implemented them, which is a problem for a complex project where abstraction might be useful.
If Graal had been mature at the time, I expect Kubernetes would still be written in Java.
If you mean due to AOT compilation to native code, there have been plenty of options since around 2000, their only "crime" is being commercial.
I'm aware, but I don't think any OSS project would have ever used them. Hence my mentioning Graal by name.
Ah ok.
Richer type system, richer concurrency, richer tooling, greatly advanced and advancing runtime, interoperability with multiple languages on the JVM, a naming system that doesn't cause frustrating workarounds a la the Kubernetes ObjectMeta / TypeMeta / "v1alpha1", "v1beta2", "v1" nonsense ...
Go is the language that taught me to really appreciate writing things in Java.
10 years is a long time in any business, and Java has been around more than 20. It is pervasive in enterpriseland; if nobody wrote a single line of greenfield Java code it will be around for decades. The COBOL of the modern era!
Java underpins Scala, Kotlin and Clojure.
So not just all the big corporates but plenty of tech companies like Facebook, Uber, Spotify, Twitter, Linkedin, Netflix, Apple, Google etc all rely heavily on it.
So what would you recommend?
Assembler^H^H^H^H^H^H^H^H^HMachine code:
http://www.pbm.com/~lindahl/mel.html
https://research.swtch.com/mel
What's wrong with C/C++?
Real Programmers write in Fortran.
Maybe they do now, in this decadent era of Lite beer, hand calculators and "user-friendly" software but back in the Good Old Days, when the term "software" sounded funny and Real Computers were made out of drums and vacuum tubes, Real Programmers wrote in machine code. Not Fortran. Not RATFOR. Not, even, assembly language. Machine Code. Raw, unadorned, inscrutable hexadecimal numbers. Directly.
(It's a joke...)
For what purpose?
I’m guessing you don’t use C or C++ either then because they were proprietary Bell Labs languages. You should probably rule out assembly too because they will have proprietary instructions for proprietary CPUs. Which means if you really want to be freed from the shackles of using proprietary tools when programming you’re now building your own hardware too, processors and all. Good job ASCII isn’t proprietary otherwise your computing device would have no compatibility with modern computers at all.
I know I get a little absurd towards the end but no more so than your remarks about Java.
Guess away, but both C and C++ broke out of Bell control very, very quickly by language standards.
Now they are driven by ISO Standards bodies, with many, many participants -- ATT not among them anymore, to my knowledge. Even in 1995 ATT had nothing approaching veto power.
My remarks lumping Java in with C# and Swift are transparent wishful thinking.
That was 10 years after it’s initial release and the spec hadn’t much changed from the specification defined by those Bell Labs employees. It wasn’t until 2011 that C++ saw some significant changes through the community. And in the first 10 years I seem to recall it was plagued by proprietary compilers having their own subtle behaviours.
I’m not saying it was all terrible nor that Go is managed better. But whenever there is a conversation about Go on HN people get so caught up in their own snobbery about how terrible they perceive Go to be that they lose all touch with reality.
The fact remains a language backed by a company is far less likely to die into insignificance than a language that isn’t. This is because it takes a lot for a language to gain momentum. You have a bit of a chicken and egg problem where developers won’t use a language without a good ecosystem, frameworks and community. But people aren’t going to write that if there aren’t already developers using it. This is where corporate sponsorship really helps.
Thankfully there are a plethora of good languages out there you can choose from. If you don’t like the direction of one language then you can use another. Or, alternatively, since Java, C# and Go all have open source compilers, you could fork and build your own community.
Those ISO standards cost 100 euros on average per language version.
For C and C++, you can get effectively equivalent working drafts for free. For C++, you can actually compile the PDF that gets sent to ISO to put the official markings on yourself. Here is C++17: https://github.com/cplusplus/draft/tree/c++17 .
If you have a contractual obligation that requires to exactly use a specific version of C/C++, then you'll need to pay money for the actual specification. In pretty much any other situation, the drafts are sufficient and perhaps even better (because they will have incorporated some errata).
Isn't it that all ISO standards cost money?
I'm sure you were perfectly aware of it, but C and C++ working drafts are almost identical (I think? Never bought an ISO version), and available for free from open-std.org. For example, http://www.open-std.org/jtc1/sc22/wg21/docs/standards
I like Go's concurrency approach and its concurrent GC implementation, but I personally don't like the language that much. It would be awesome if the runtime system could be factored out, and targeted by other languages.
No matter how good it is, I won't invest personal time in Google's products because Google like to kill Google products.
technically Go is not a product but your point still stands.
I really wish Go had learned a lesson from Java: programmers will eventually want generics, and adding generics to a language that was not designed for them leads to new and unexpected obstacles.
The lack of generics makes Go uniquely unsuitable for functional programming, an unfortunate outcome when functional programming is the New Cool Thing.
> New Cool Thing See that's your issue. Go tries super hard to _not_ be that and evidently has succeeded.
The original view of the golang authors was that it was made to replace C++. It obviously failed at that, and ended up being used as a faster Python.
There are 2 things I like about Go,
1. Compared to most compiled languages the toolchain is very easy. 2. Reduced ways of solving things in the language itself.
My main dislike of C and C++ is that there are numerous ways of solutions for the same problems, this forces me to weigh the solutions constantly against things like "Is this a safe way, does it create fast code, will this way fit when my project is further or must I constantly refactor my code". The extreme flexibilty of C and C++ are more a burden for me than an asset.
I personally also dislake C++'s standard library, lack of standard functionality in it and the difficulty of maintaining dependencies and the whole resulting build process with crosscompilers. GNU Make was not enough, CMake was not enough, GN is almost rocket science, then there is Meson. And choice of clang or gcc. Chrome/Fuchsia is an example of how complicated it can be. But does it have to be like that? There are definitely good reasons to use C++ or C in some cases (system, low-level, games etc) but C++ is a difficult language and it's not necessary for lot more other cases. Rust is yet another story, solving some problems but making new ones.
Go made me enjoy the programming again. And of course, managing modules is a very difficult task so it had to be done somehow and somehow really, really good and I think they delivered the working solution. Various previous attempts were bad for various reasons (Kubernetes can tell you).
Go is designed with Unix philosophy in mind by people who created C and then wrote Unix in it. So lucky us that they are the gatekeepers.
Having said that, Go's license allows everything and it would be interesting to see a fork and compare it with the mainstream a year from now.
There are many answers for why this won't happen, but one that does not usually get said out loud is that Go is Google's language, not the community's.
More precisely, its design belongs to a small number of capable people who are on the same page, with regards to a pragmatic, minimal-ish design. This is better than a "benevolent dictator," in that there are some checks and balances. It's also better than design by mass committee from the public.
No shit Google has the final say, they're the maintainers. They pay developers millions of dollars to make the final call on what's best for the language. Even if someone were to start a community fork there would still have to be a central board of governance.
Suggesting that we should split the community for a feature that you like in other languages is a dumb and lazy argument.
It is actually a good thing. The more forks there are and more complicated ideas and toys, the more mess and chaos. It can eventually kill the whole ecosystem.
I love open-source and libre software but there needs to be some central authority who oversees the changes and is forward-thinking. I actually love Go in the way it is and it wouldn't be it if you kept asking community what features to integrate. I am looking forward to generics as it solves some cases (containers holding items of a templated type sharing some common methods). But even if Go stayed the way it is, I would be fine. The point of open and libre in this project for me is that Google can't shut it down and that's enough for me in the case of prog language.
Please, if you want to make super-crazy-new community-driven uber-language which will change the world, make yet another and truly yours language or join Rust or whatever more community-driven language you like. Thank you! :)
The problem I have with the article is that, as an outsider, it reads like this is not about Google. It reads like this is about features that the author liked which were not merged.
In an alternative timeline where Go was not sponsored by Google, deciding what to merge into it and what to leave out would fall into some sort of "Core Team". In that timeline, if that Core Team decided to not include the features that the article author is describing, then we would have an article titled: "Go is The Core Team's language, not the community's".
I can only offer one answer. Maintaining an open source project is a lot of work. If you care enough about those changes, make a fork, include those changes, and maintain it yourself. If that's too much work, then you are stuck with what others able to dedicate the effort decide.
Do managers of open source projects typically get paid? Seems like a lot of work for a volunteer...
This is a difficult one. On one side we have a massive corp that is pushing it's own language and essentially steering it the way it wants.On the other, there's ever growing community that has its own ideas and suggestions on how the language should evolve. I'm too early in my journey woth Go to be able to have in-depth discussions on its functionality, however what I don't want to see it turning into another JavaScript,which is such a mess and free ride for everyone who thinks they have another good idea how to fix it.Unless there's a proper independent steering committee,I can't see this working too well without Google's ownership.
Yeah, that’s the case with any coherently designed language or software package. I don’t really have a problem with it. Now that they’ve come up with a workable module system, the issues of it coming from Googleworld are minimal. I don’t personally care about generics, and I like the error system so I find Go to be a great language to work in.
But that’s all just my personal opinions. What I found interesting was the callout of OpenJDK as a community-run language, but is it? I feel like OpenJDK is the worst of both worlds. Too much corporate overhang, and all the worst aspects of design by committee. But YMMV.
Oooph, from my perspective, this is the very same thing with clojure, and the single thing that drove me away from the language. Substitute spec/schema for vgo/dep, and you're telling the same story.
I value individuals and companies sharing their developments with a community. One has to remember though, to own something is different from benefitting from other's work, and if you want to have a say, you'll need to get your hands dirty, fork, and work.
Nice to read all the comments on this story. The story is about the how Go is not truly open source and community driven and the flame war starts on how Java does not implement generics correctly etc. Guys: each language has it's benefits and it's shortcomings. If we want (need?) more control over the language, can't we just fork it? Start a new language that is based on the current implementation of Go?
exactly. Or what prevents them from using beloved and 100% correct Java in the first place?
C# unfortunately has the same issue, but the effect is much more visible. Few people (I'm sure you can find examples, but given how popular the language is on Windows, it's a tiny minority) use C# outside of Windows and it's a shame. There is no good open ecosystem so even as a former Windows dev I hardly use it anymore.
The limiting factor for me, as a long-time C# dev now working outside of Windows, is that many of the tools I used haven't been ported to .NET Core.
Take Umbraco, for example. If someone were to port it to .NET Core and rewrite parts of it to use Postgres as a db option there would be zero reason to ever use WordPress again.
Other than like 90% of ISPs only offer cheap hosting for PHP.
You're right, although I'm not sure if people necessarily go to their ISP or to the cheapest host possible.
Most of the time, in my experience at least, someone that wants cheap hosting and is using something like .NET or Python will Google "cheap python hosting" and see what is cheapest/recommended.
PHP tends to be the outlier, because it's absolutely everywhere, but I think the web has matured to a point where people will look for specific hosting for their choice of tech. Hell, back in my freelancing days when I used to rebuild broken WP builds, most people that weren't given hosting by their client chose it from looking up "cheap wordpress hosts".
When time came to replace my XML/XSLT based website by something else, just to get up with the times, I resisted to touch PHP, but in the end having it at my ISP versus the trouble of using something else won.
Nowadays I am able to use PHP 7.x, so I just keep using it there, and suggesting it for the less tech savy friends that want some kind of dynamic website.
Because while I do build sites in Java and .NET, I do accept that they aren´t that easy to set up at most ISPs, and cloud based one click solutions tend to be more expensive.
Out of interest, what do you use instead? I have been trying to stick with C++ because it seems the one language that isn't too badly dragged into platform-specific fan clubs.
Depends on the purpose. I write mainly Python at home and at work, sometimes create small to medium sized Bash scripts, use PHP for my personal websites and most web projects, sometimes a few lines of Javascript (command line, I mean, not as part of html/css/js), sometimes some Java at work, LaTeX macros for documents... If speed is paramount (rarely) I'll also write C/C++ or maybe Go, but those are not my forte.
So it really depends on the purpose of the project, who else works on it, what it needs to integrate with, etc.
Thinking about what I use, C# didn't even come to mind. I only used it when required in school or when interning in companies that run Windows. I've written some C# on Linux at home but it's not much more than my experiments with Brainfuck were...
This is absolutely true. In general I feel like it's good when a language belongs to a core team, but they don't appear to be learning from past mistakes. The core team put out a proposal for modules and then immediately implemented it with very little community feedback except what could be done after the fact without breaking much. There was huge backlash and they claimed they would do better and seek community feedback earlier, but now they require that you use a Google web service when fetching modules (I've turned it off) and shoved that out between two versions without any community feedback as well.
Strong central leadership is great, but leadership needs to actually listen to the people they're leading (and not just as an after thought).
Agree that generics are much needed but also wary that if you let the community take charge then you end up with a frankenstein language like PHP.
Also a huge fan of go mod over vgo so I had no issues of them scrapping the community dependency manager and rewriting it to be part of the language
> In practice we'll only get a chance to find out who Go really belongs to if Go core team members start leaving Google and try to remain active in determining Go's direction. If that works, especially if the majority of them no longer work for Google, then Go probably is their language, not Google's, in the same way that Python has always been Guido van Rossum's language regardless of who he worked for at the time.
This is laughable. Anything built within Google belongs to Google. There's no way the company will let anyone from the core team leave and take the language with them. Keep in mind, Google's version of Go and the community might be vastly different, as the former has different needs versus the latter.
I really like C and common lisp in that they do not seem to be owned/driven by a singular entity
And that they have two excellent, interworking implementations. And a language spec, by ISO even.
I'm not sure what this article is trying to say. Microsoft C# is a Microsoft project. Go is a Google project.
There is someone who need to govern the project. This is a company in these cases.
The only other option I see is some board like (in case of .NET) the .NET foundation to allow more control.
Golang solves a specific domain problem google has.
They extended plan9s c compiler with some syntax Sugar for co routines and fixed some whacky c stuff then threw in some gc.
There's new and innovated about golang.
They just had to open source golang because they used the plan 9 c compiler which was open source.
A small group of programming language professionals with fifty+ years of shared experience developing compilers decided to keep on going with the language family they invented and extended.
I'd say they have fifty+ years of implementing the same language over and over. During the fifty+ years they studiously avoided learning what anyone else had been doing, and it shows.
Specially when one compares Limbo and Oberon-2 features to Go.
>> But Google is the gatekeeper for these community contributions; it alone decides what is and isn't accepted into Go.
So ... much like Linux where Linus is the gatekeeper, and he decides what goes in and what doesn't, and also abuses people while at it?
But he's trying to be nicer, so cut him some slack :)
People don't care who owns Go. They just write awesome software in Go such as Docker, Kubernetes, Prometheus, Grafana and VictoriaMetrics [1]. Go authors created simple, clear and productive programming language. Community-driven design for programming language may be disaster - look at incomprehensible C++ Frankenstein.
[1] https://medium.com/@valyala/open-sourcing-victoriametrics-f3...
this is exactly one of the reasons i learned Go in the first place. without the backing of a major corp it’s damn hard to have major success. not impossible, or unheard of, just hard.
I am about to learn a new programming language and I decided against Go just because of this fact. I do not trust Google and reading this article just makes clear how critical the state of the language is in terms of control by the community.
Python looks most promising and I already worked with it, but I am not sure yet. Can anyone recommend a viable alternative for Go? Any web-focused language that is performant, modern andalready well used?
You should use a language based on how well it performs for your problem and domain, and the community around it. Not based on one article or because google is maintaining it.
I think his fear of google's stewardship of the language is the fear of what it'll do to the community around it.
Its incredibly unlikely that Google would take decisions for the language that would make it less effective at what it does.
No, but it's very likely that they'll make decisions that'll alienate the community and thereby cause it's ecosystem to lose more and more relevance. It has done that with other open source projects they stewarded.
If they alienate the community, the community can fork it, right?
In practice it's hard to fork out from the main contributers and keep the branch alive without resources on par with the main branch.
Then you rather do some new language.
Only if there are major contributing developers who do not work for Google. Is that the case with Go?
If there are no major non-Google contributors to Go, then the fork may not be successful due to lack of familiarity with the code base.
It depends on what you mean by "the community". If it's a community of contributors, and Google pulls it into a direction the contributors don't want, then they can fork it and continue contributing to it. If it's a community of users, then they have no choice but to follow whatever the contributors decide. I agree you need to have major contributors on board with a fork.
There's more nuance to it though. Users eventually become contributors (at least some percentage are), they become and stay contributors when they feel heard and feel like they have the ability to influence development. That's what nurturing an open source community means. If you start alienating your non-core contributors they'll stop contributing, if you nurture and support your non-core contributors they might become core contributors. No body wants to work voluntarily on a project they can't influence - that's not a contributor, that's an employee.
Empirically speaking, how did they do with dart?
It seems you misunderstood my post, like some others in this thread. I did not decide against Go because of the linked article, but rather because I have the same view on the language that is outlined in the article. I have a big problem with Google. And the fact that Google practically own Go is a red flag for me.
I'll suggest elixir. It's fun to build in and the community is growing (at least two companies with very large scaling demands use it). Don't get hung up about performance benchmarks, they're biased towards numerical algorithms; in practice if you are web focused your tasks will be IO bound and things like uptime process restart semantics and robust concurrency are more important.
If you want performance and garbage collection, Java (and its family), C#, and Go are the major options.
I'd pick Go among those every day of the week, but it's not perfect.
By "major options", I guess you mean popular/mainstream options?
What you learn from dedicating time to a new language, should not be the ability to program in it in 10 years, but to solve a problem you have now and hopefully learn some new concepts. Pick what will keep you engaged - it's the "learning something" that matters if you're not explicitly trying to solve something.
There really isn't any with a substantial community.
Python is good for algorithmic stuff and things that need specific libraries (tensorflow, NumPy etc)
But if you want a garbage collected language for moving bits from place to place over the network, it's kinda hard to beat Go.
Are you saying that there aren't garbage collected languages suitable for network development ("moving bits from place to place") with a substantial community except Go?
I mean, Java and all the JVM based languages comes to mind. Python, Ruby, Node.js (assuming performance isn't at the top of your list, which for many it isn't).
Go has a small community compared to all of the other popular languages.
C#, F#, Kotlin, TypeScript/Node (someday Deno)
I wouldn't worry much about "performant" though.
F# and Haskell are two very nice languages if you want to try something truly different.
... and if you are ok with investing to adapt newcommers coming to your team and fine with lack of people actively using it
Both have helped myself and many other people write better code in "enterprise" languages. I don't mind my team using different languages for small bits and pieces that are one off things, or can be replaced without must effort. It keeps developers happy to have some autonomy. I only demand that it's well documented how to build it.
you shouldn’t let an article decide what language you learn.
How many do you consider enough?
none. articles on the web are made for clicks. judge a language on its own merits, not via what someone on the web said.
Kotlin
This is a very bad reason to not learn a language imho...
I'd say it depends on what you want to do with it. Go is, as far as I understand, mostly a systems programming language. A replacement for C, basically. That means it competes mostly with Rust I guess. (I'm not familiar with Go or Rust, though.)
Python is mostly an application programming language. It competes with Java, Ruby, C# and those kind of languages. Python also has tons of excellent libraries for a wide variety of specific domain areas, like Machine Learning (perhaps most famously at the moment), but also many others.
If you want specifically web-focused, Javascript or Typescript are the obvious places to go. Nothing is more web-focused than those two.
I wouldn't call Go a systems programming language. Having a GC alone sort of removes it from that category.
That sounds logical, and I'm no expert on either Go or systems programming, but I've often heard of Go being referred to as a systems programming language, and on their own blog[0] they list it in third place at 37% as popular use of the language. Well after web development, admittedly, so apparently it is more a web development language than a systems programming language. I was clearly wrong on that part.
[0] https://blog.golang.org/
Mesa/Cedar, Modula-2+, Modula-3, Oberon, Oberon-2, Active Oberon, Component Pascal, D, Swift, Sing#, System C#, Common Lisp, Interlisp-D, StarLisp, Real Time Java beg to differ.
Does anyone stops the author from forking it and starting an "OpenGo" and implement anything he cannot wait for mainstream releases?
I think Microsoft or Apple for example, made their own programming languages in much more ethical way than Google's sceptical which is: "ohh we are going to make our own language and get the help from the open source community." Everyone knows that companies are "exploiting" open source today which is really sad.
> Everyone knows that companies are "exploiting" open source today which is really sad.
Often enough it seems like companies go out of their way to deal with open source just for the "cred" and hiring opportunities that it brings. The open source itself is a just a drag on their internal team, which has to deal with tickets and contributions they don't want or need.
They don't need what? Free testing and debugging from thousands of people running their software on completely different hardware and OS? You may see useless tickets but companies are paying millions for QA and testing.
My observation has been that the free testing they get is often for cases they never hit in their production use cases.
For example, FB doesn’t use DraftJS on mobile, so it took a whole year long effort and a person who used to work for FB to fix mobile support. They just didn’t need it, didn’t have resources allocated to dealing wit it, and were understandably unresponsive on the subject. Now maybe they can use it in the future on mobile if they want, but they would have just fixed it if that need had ever arisen for them organically.
As 2014, i submit same topic, ask google open source golang under an opensource foundation , they did not want do it https://groups.google.com/forum/#!topic/golang-nuts/OhhG5lu2...
You could say the same thing about Linux. If Linus doesn't like what you made, it's not getting into the kernel.
There is a significant difference: Linus is a real person with an actual personality which you can get to know and reasonably choose to trust. “Google”, as an entity, not so much. Even we grant Google personhood, they’ve shown themselves, shall we say, less than dependable in regards to long-term support of their offerings.
Good. This is how you don't get C++.
Can the same sentiment also be shared for Swift/Apple? Or is Swift organized in a way that doesn't have these issues?
Maybe this is just an inherent problem for all company focused languages/frameworks (react, golang, kotlin, etc), and we need a good example of how to make it work for everyone
Yeah, the same story with Typescript and other languages. People learned nothing from Oracle stories.
I wonder if the same thing that happened to Android OS, will happen to Golang. Starts off as opensource and free and slowly tie the users down to Google as they did with App Services, etc. "Look but don't touch" kind of opensource?
I love Golang to some extend, the moment it's released, it's like new fresh air from Java. But because i don't like Google, i never adopt Golang.
Thanks God, we have Javascript, and maybe Rust.
> Thanks God, we have Javascript
Best language to write multi-threaded apps.
javascript, ah.... allright..
So proprietary is bad? I have no issues with things being owned by a single entity, especially when they're financially backed by that (massive) entity.
What is being proposed as the alternative here?
I even think that for language and base class library design a strong ownership is key to success. They make the language consistent and the library easy to learn.
In both aspects, just compare .NET/C# with PHP.
Exactly. I want my language to be crazy good and have an amazing library to tap into. In fact, it was Go's great standard library (JSON parsing, hardened prod' ready HTTPS server, SSH, crypto, etc) that formed a large part of why I love the language.
And yeah, I'd rather go with .NET/C# over PHP any day (although I understand PHP is getting better and better, which is good to hear.)
Well PHP and .NET do not play in the same league. But it is about consistent language and library design. Great failure in PHP and great success in .NET/C#.
I developed PHP till 2007, then as a hobby project in 2013. Massive improvement. And I heard very good things about performance the last years. Swoole even can compete with the top tear languages
I like Go a lot, but the fact that Google doesn't use it extensively in their own environment makes me wonder why that is. It almost feels like a Trojan horse.
Rewritting software is expensive.
I'd be surprised if any new language becomes prevalent in less than multiple decades within Google's unfathomable scale.
You want Go to be designed by the Internet equivalent of a cooperative? Isn't that how you get stuff like C++?
For some things Democracy is excellent (governance of nations, because it might be the least bad option). For other things a dictator is good (Amazon seems to be providing a lot of societal value under Chairman Bezos) . For yet other things perhaps a motivated group of technocrats might be better (Go).
I don't suggest I know the answer definitively. I could be wrong. But Go's results are good. Perhaps it would be fair for me to concede that I should examine this question again in 10 more years. What is good for 5-10 years might show problems in 20-50 years time.
After all, even communist North Korea was a relatively functional place for a time.
after using golang for a year, I thought it was limited but ok... then I switched to Rust once it was fairly matured - haven't looked back
I am kind of OK with that. C# is Microsoft's language, Java is Oracle's, Swift is Apple's, Rust is Mozilla's.
even if is baked by google, currently, they have a nice spot on the market, and I dont see any competition :)
what matter at the end is the result.
Some hard decision was made for the better of the language and its users at the end. Thank you for the courage taking the hard decision.
If want to argue, argue that other package manager do better job than modules instead of community did effort and it goes for nothing but experimental.
Similarly Rust is Mozilla's language. If you want a real open Language, use the open source F# or C#.
What makes you say that?
This is the same thing with Rust, where original core team are Mozilla's employees.
Many (most?) people on Rust's Core/Governance Teams don't work for Mozilla (anymore).
See https://www.rust-lang.org/governance/teams/core
(3 out of 9 people are Mozilla employees)
And for most of the SMB it's cool they can ride Google's horse for free.
Fcuk the community, according to history and experiences.
Rust is another choice ~
Rust is your choice :)
Free as in, beer.
wrong on so many levels.
This post seems to conflate implementation and specification.
All the required bits (permissive license, specifications, etc) for writing a community tool chain (compiler, linter, fmt) are in the open.
Won’t the oppressed community rise up for itself and end this programming language tyranny!
Projects are directed by those willing to put in the work. If you don't put in the work, it's not yours alone!
Falling for the go meme, EVER
What a surprise (not).
Sounds like Laravel
ᒪᘯᒪ ᒥᒭᘯ ᕴᕮᘉᕮᖇᖍᑕᔕ
I pity the recent CS graduates that weren't able to attend a decent CS degree and need Go to fit their mental model.
Haven't we had to ask you before not to do programming language flamewars on HN? Please don't do programming language flamewars on HN.
We detached this subthread from https://news.ycombinator.com/item?id=19978882 and marked it off-topic.
I most likely missed it.
Although to be fair you need to apply the same rule to others as well on this same thread.
Strictly yes, but we can't come close to reading all the comments here. If you think we missed something egregious, the likeliest explanation is that we didn't see it. You're welcome to let us know. In general, though, it's like getting a speeding ticket even if some other guy was going faster and didn't get a ticket.
That's not really a problem for computer science students, I think. Their area of focus tends to be on the algebraic, theoretical side of the spectrum. It's much more of a problem for computer (software) engineers.
Be careful. Rob Pike asserted Googlers can't handle an "advanced language" or whatever his exact words were, but he provided no evidence this was true, or that others at Google agreed with him, or that this was even the reason Go was created to begin with.
The most obvious evidence he's wrong is that Google's codebase is all C++ and Java, both languages that have generics and other more advanced features than Go. So apparently Googlers can use such languages just fine. What Pike was claiming was that Google had at some point dropped their hiring bar dramatically and somehow nobody had noticed this or commented on it, despite how controversial it'd have been inside the company.
I rather think that this comment about Go is not reflective of any real strategy or decision inside Google. Go's creators were bored and wanted to make a new language, so they did. The justification for what it looks like was then retrofitted on top.
> The most obvious evidence he's wrong is that Google's codebase is all C++ and Java, both languages that have generics and other more advanced features than Go.
Alternatively: Seeing people continue to make mistakes in those languages that are widely used has informed the position about language features and design aspects that are problematic, and things that would be helpful to have. Proposing a new language to avoid them is an (admittedly fairly extreme) solution to that.
Occam's Razor: Did he really make up a strange claim that reflects negatively on his employer, or was something very much like what he said actually part of his brief?
I believe it was the latter, the twist being that it was forward looking, not reflecting then-current reality. It was likely someone else's prediction that the quality of talent available to Google will go down, going forward.
The quote in question:
> "The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."
Full context: http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Fro...
Which boils down to the same issue.
Back home CS means Informatics Engineering, a degree certified by the Engineering Order to practice, with enough theory and practice spread across 3 - 5 years.
Those that want CS theory without programing have a major in Mathematics with a minor degree in computing.
What country is this? It’s certainly not that way in the USA at most universities.
Portugal, and many European countries have similar offerings, at least the southern ones.
We have Theoretical Informatics as a field of bachelor study in the Czech Republic, software engineering is called Applied Informatics
Go is a language for software engineering, which entails pragmatic human factors. It's not a computer science language, which emphasizes theory.
> It's not a computer science language, which emphasizes theory.
"Modern" type theory has been around since the 70s but aparently that's still too academic for Go.
But perhaps Canadian Aboriginal Syllabics are still the more pragmatic approach!
https://www.reddit.com/r/rust/comments/5penft/parallelizing_...
Eiffel and Ada are languages for proper software engineering.
Go not so much, specially given its toy type system, where one cannot even do Type Driven Design.
> proper software engineering
Only if you define this in such a way that it's tautologically true. If your definition of software engineering incorporates some degree of economic success, then no, these languages don't work well.
I'm sure someone will be tempted to levee this strawman argument: "But lots of languages drive projects that make more money than Go!" Note that I never claimed market success was the sole criteria.
Well, then pick C++, Java and C# instead.
All of them provide more features for software engineering at large scale that Go is still catching up with, hence Go 2.0 proposals.
There’s no evidence that those languages are better at engineering at scale than Go is. In particular, all of them are riddled with anti-features e.g., inheritance and all of them have ridiculously convoluted tooling (especially build tooling a la CMake). If I want to get something done, especially in a team setting, I’ll reach for Go over those languages every time.
I look at it as a collection of any guarantees that you can leverage to ensure a (relatively) high degree of correctness, performance, and maintainability. I can't fault Google for valuing these things over agency to play with languages that feel more fun/interesting.
Although true for programming CS, I have to note here that Google hires people who didn't study programming per se. One guy I know who worked at Google did processor design. They hired him for web programming or something about as unrelated as possible. Who knows how many extra-bright, barely-programming grads they pull in. They need them to learn quickly to be productive as closely to day 1 as possible. Hence, Go.
I think Python was main language in that space before Go. It's not as simple or performant as Go, though. So, this was a major improvement even if I think they could've designed an even better language from our perspective.
Um, Yeah, GOOD! This demand for Generics is pure sheep brain garbage. Once you learn to write software properly you find only like 1 or 2 small use cases where Generics might actually help
Down vote me all you want. Been writing production software in Go for nearly 4 years and have never asked or needed Generics. Saying that there isn't a community because Generics are wanted by "some people" is daft.
The whole point of Go is to be small and opinionated and committees can only produce outcomes that are large and inclusive of all opinions.
To determine how open a language really is look at how many widely used implementations of the compiler there are for the language. If there is only a single implementation of the compiler/interpreter than it is not really open but controlled by that core compiler team.
Most languages I can think of have one very dominant implementation and maybe another one or two that few people use. Python, Ruby, Java, C#, Go, Rust, Haskell... C/C++ are the only exception since Clang became serious competition to GCC and Visual Studio. Even Javascript only really has Chrome and Firefox.
That's "consumerization of IT" for you. Millenials and younger are used to looking at pretty websites of "language ecosystems" and blogs about trivial programming problems to assess languages, rather than independence, maturity, and long-term viability as they used to before, and as demonstrated by having language specs and multiple interworking implementations, pluralism of APIs, etc.
I don't think the number of implementations correlates strongly with long-term viability. As long as there is at least one FOSS implementation available the language won't die as long as it remains useful.
What about Intel's compiler for C++? Or Borland's? Or EDG's? Or DigitalMars? They were around for a long time.
They fall under "almost nobody uses them".
So, there's your answer right there.
JS has Safari (JSC) as well.
I don't think that is necessarily true. Rust has only one compiler (and I think adding a second one would be a huge effort with no clear win) yet it is developed by a community that is very welcoming to newcomers. At least, that's been my experience.
It is definitely true. There's virtually no room for improving the underlying compiler toolchain because there's no allowance or support for an alternate implementation (using GCC, for example).
Go at least has gcc-go, which is a compliant implementation that gets some benefits from being part of GCC (like not completely broken support for dynamic link libraries!).
Having a second compiler implementation forces the language grammar and behavior to be specified in a formal manner that allows anyone to understand how it works.
How is there no allowance? A lot of people in the community would love a rust compiler that can compile down to C for instance.
Maybe the tool chain is hardcoded to use the standard compiler? But that's more likely because of YAGNI, not because the rust community hates freedom.
The toolchain is built on top of rustup which very purpose is managing multiple compilers. If one were to build a different rustc it could be swapped in even in the world we have today.
That's cool. I haven't used Rust yet so I didn't know.
I haven't thought consciously about that, but it turns out that I've been doing that kind of validation unconsciously. I didn't get interested in actally using D, for example, until the GDC variant arrived (Dlang-support in the GCC collection).
Gcc compiles Rust fine. Its borrow checker won't bark at you, but that doesn't affect code generation.
OCaml and Haskell are one-horse towns, with substantial history, that are not going away.
But as Stroustrup says, there are languages people complain about, and languages no one uses. You generally end up better off investing your time in mastering a language people complain about. Bitterly. Once that meant FORTRAN.
There are at least four Go compilers: the original one from google, GCC go, gollvm and GopherJS.
The TinyGo project are implementing a Go compiler on top of LLVM, to use Go on very small devices:
https://tinygo.org/
The Go language is defined by a spec with compatibility guarantees, but beyond embedded systems and WASM, there don't seem to be a lot of use cases for alternate implementations.
I haven't looked into compiling Go to JVM bytecode, but a solid way to do that would resolve an ongoing tug-of-war at work, and avoid a lot of résumé-driven development of stuff maven central already had.
Afaik GCC's Go has been pretty complete and up-to-date for a couple of years.
You mean for example various versions of java which are incompatible?! Or java programs said to be mtiplatform loading .so libraries of fixed Ubuntu versions? Well, that's exactly what I hate.
Seeing Huawei got recently blacklisted for all US made chips, parts and some Android services from Google, an interesting question to ask is whether the US government will one day force some foreign companies (e.g. Huawei, ZTE and DJI for example) to stop using programming languages invented & implemented by US companies.
That's not how open source works.
The license can change any time they want. They can change the license to prevent certain companies from using future releases and/or security patches.
Let's don't forget the fact that Huawei had legally binding contracts with all those US suppliers.
> The license can change any time they want. They can change the license to prevent certain companies from using future releases and/or security patches.
Until then, it's a bsd inspired license. And you get code from Cox, Griesmer, Taylor, Pike, Hudson and Clements under those terms. It's a big team on several aspects.
So you can remove your tinfoil hat and check the code for yourself. It is valuable.
So? Just fork the last freely-licensed Version then.
then start to use clean room process to reimplement all future bug fixes? nice!
Where comes the expectation from, that someone else fixes your problems for free? If you clone Go now, you get a robust codebase for free, and you could do any maintenance on your own (as a reasonably large organization)
I don't see how this even would be possible. It's not like they disclose the source code of whatever software they write in order to sell in US.
What a strange conclusion. Not likely, but much more likely than that, is that they'll force foreign companies to use Go and other products controlled by US corporations.
This is an important point that people need to keep in mind. From a psychological perspective it is also a little bit evil in that it "feels" like a community and people participate like its a community, but the reality is that it isn't a community at all in the communal action sense.
Essentially Google gets "free" developer time as people work on problems and pitch in possible solutions. Google can influence what is worked on by whining about things, and they are free to take or discard the offerings.
This isn't particularly surprising to me as it feels very similar to the way I felt when working there. Which is that people in engineering worked on projects that they were passionate about, but whether or not those projects got support, or were shipped, or released into products. That stuff was all done "elsewhere" by some group of people who were generally known, but not really part of the day to day. I tended to think cynically of them as 'class A' shareholders, or 'class B' shareholders[1].
I don't think there is anything wrong with managing a program this way, I do however think they go out of their way to create a community illusion to foster more participation. That tells me that if they were up front about things, they feel people might not be as eager to participate. To the extent that they are deceptive in their communications, that I would consider wrong.
[1] Class B shareholders get to vote on shareholder issues, but there is always more voting power in the class A shares so that the class A folks can veto or reject any notion they dislike, no matter how popular with the class B folks.