ManuelKiessling a year ago

Plug: I’ve built something vaguely similar, but of course a trillion times less brilliant and powerful, yet nonetheless useful for some straightforward cases where really you just want GPT et al. to call your API at just the right moment of the conversation, with JSON that is guaranteed to be valid:

https://github.com/manuelkiessling/php-ai-tool-bridge
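
For a feel of the approach, here is a minimal sketch of the idea in Python (the project itself is PHP, so the helpers below such as call_llm and ORDER_SCHEMA are illustrative, not its actual API): have the model answer with JSON, validate it against the schema you expect, and only call your real API once validation passes.

  import json
  import jsonschema  # pip install jsonschema

  # JSON Schema describing the payload our (hypothetical) API endpoint expects.
  ORDER_SCHEMA = {
      "type": "object",
      "properties": {
          "customer_id": {"type": "integer"},
          "items": {"type": "array", "items": {"type": "string"}},
      },
      "required": ["customer_id", "items"],
  }

  def call_llm(prompt: str) -> str:
      # Placeholder: wire up whatever LLM client you actually use.
      raise NotImplementedError("plug in your LLM client here")

  def tool_bridge(conversation: str, max_retries: int = 3) -> dict:
      prompt = (
          "Extract the order from the conversation and answer ONLY with JSON "
          f"matching this schema:\n{json.dumps(ORDER_SCHEMA)}\n\n{conversation}"
      )
      for _ in range(max_retries):
          candidate = call_llm(prompt)
          try:
              payload = json.loads(candidate)
              jsonschema.validate(instance=payload, schema=ORDER_SCHEMA)
              return payload  # safe to POST to the real API now
          except (json.JSONDecodeError, jsonschema.ValidationError) as err:
              # Feed the validation error back so the model can correct itself.
              prompt += f"\n\nYour previous answer was invalid: {err}. Try again."
      raise RuntimeError("model never produced schema-valid JSON")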

  • danmaw a year ago

    Very cool and thanks for sharing. Would you say this is similar to LangChain's tool concept?

    • ManuelKiessling a year ago

      Yes, absolutely — probably in the sense that it is the MVP of the "Agents" concept from LangChain, built with duct tape.

      But it works surprisingly well, because the underlying idea (again, from LangChain) is simply clever.
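
      For the curious, the underlying loop is roughly the following (a sketch with a hypothetical call_llm helper and a toy tool, not LangChain's actual Agents API): the model answers with either a tool call or a final answer, the code executes the tool, and the observation is fed back into the next turn.

        import json

        def get_weather(city: str) -> str:
            return f"18C and sunny in {city}"  # stand-in for a real API call

        TOOLS = {"get_weather": get_weather}

        def run_agent(question: str, call_llm, max_steps: int = 5) -> str:
            transcript = (
                "Answer with JSON: either "
                '{"tool": "get_weather", "args": {"city": "..."}} '
                'to call a tool, or {"final_answer": "..."} to finish.\n'
                f"Question: {question}\n"
            )
            for _ in range(max_steps):
                reply = json.loads(call_llm(transcript))
                if "final_answer" in reply:
                    return reply["final_answer"]
                # Run the requested tool and append the observation for the next turn.
                result = TOOLS[reply["tool"]](**reply["args"])
                transcript += f"Observation: {result}\n"
            raise RuntimeError("agent did not finish within max_steps")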

zellyn a year ago

Note: if you read the paper carefully, this is LLaMA-derived, so it suffers from the same use restrictions. We really need the open LLMs to catch up!

  • jimsimmons a year ago

    Realistically you can assume there’ll be one soon and develop with these models and switch over

    • quickthrower2 a year ago

      There are some already (one of the OpenLLaMA models, I think, and that Discord group made one with linear attention recently, although trained on fewer params, I think).

      Maybe not easy to switch to a non-Meta LLaMA?

sandGorgon a year ago

Would this outperform the API<>instruction pairs stuffed into a vector DB and running a retrieval-augmented chain of thought on OpenAI?

This is how OpenAI plugins work, right?

  • aledalgrande a year ago

    > Gorilla, a finetuned LLaMA-based model that surpasses the performance of GPT-4 on writing API calls. When combined with a document retriever, Gorilla demonstrates a strong capability to adapt to test-time document changes, enabling flexible user updates or version changes

    keywords "finetuned" and "document retriever"

    So it's fine-tuned to create API calls but still uses a DB to fetch the API shape.

    They also say that they're doing "retriever-aware" training, meaning the dataset includes examples where a document with the API shape follows the question, in the hope that the model learns to always check that document before answering.
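
    To make that concrete, here's a rough sketch of the retrieval-augmented setup (the toy keyword-overlap retriever, the doc strings, and the prompt layout are all illustrative, not Gorilla's actual retriever or template):

      API_DOCS = [
          "torchvision.models.resnet50(pretrained=True) loads a pretrained ResNet-50 image classifier",
          "transformers.pipeline('translation_en_to_de') translates English text to German",
      ]

      def retrieve(question: str, docs: list) -> str:
          # Toy retriever: pick the doc sharing the most words with the question.
          q_words = set(question.lower().split())
          return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

      def build_prompt(question: str) -> str:
          doc = retrieve(question, API_DOCS)
          # As described above: the question first, then the retrieved API doc.
          return f"Task: {question}\nUse this API documentation: {doc}\nAPI call:"

      print(build_prompt("Translate a sentence from English to German"))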

elexhobby a year ago

Not sure if this is obvious, but it's misguided to dunk on GPT-4. The paper uses self-instruct on GPT-4 to generate the training data on which Gorilla is fine-tuned. This paper would not exist without GPT-4. Although they claim GPT-4 can be replaced by any LLM, I'm sure the results would be nowhere near as good, and so they stuck with GPT-4.
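
For anyone unfamiliar with self-instruct, the data-generation step is roughly the following (a sketch; call_llm stands in for GPT-4, and the prompt and output format are illustrative, not the paper's exact setup):

  import json

  def generate_pairs(api_doc: str, call_llm, n: int = 10) -> list:
      prompt = (
          f"Here is an API:\n{api_doc}\n\n"
          f"Write {n} diverse user instructions this API could satisfy, plus the "
          'exact call for each, as a JSON list of '
          '{"instruction": ..., "api_call": ...} objects.'
      )
      # Each returned dict becomes one fine-tuning example for the smaller model.
      return json.loads(call_llm(prompt))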

  • fnordpiglet a year ago

    It does make me wonder what the converged fixed point of this technique is. If I fine-tune with GPT-4 to make model A, which then performs better than GPT-4, then fine-tune model B with A, at what point does either artifacting or diminishing returns set in?

    • elexhobby a year ago

      GPT-4 is powerful over a diverse set of tasks. They use it to build a model which is better for a narrow sub-task. Pretty sure the model is inferior to GPT-4 at everything else.

      • fnordpiglet a year ago

        Yeah, but there have been papers published on general LLMs as well, fine-tuned off of GPT-4 instead of humans. Even in the narrow space the question remains: if I build a superior model for task X using GPT-4 and it's superior at X, can I then use my new model to train another model on X and continue to see benefits?

ethanbond a year ago

Don't worry folks, the years and years of "of course we'd never connect experimental AI systems to the open internet" assurances were never necessary for safe development and deployment.

This reassessment was founded on the last several months that showed us these systems are close to 100% dependable, they never deceive humans, they are entirely aligned with human morals (you know, all those morals we all agree on), they are advancing at a totally predictable rate, and we never see unexpected behaviors from them. Besides, they can only be wielded by people who we trust have good intentions.

No, this re-assessment has nothing to do with market dynamics incentivizing people to deploy experimental systems further into the infrastructure we depend on.

  • airgapstopgap a year ago

    Other than snark, do you have a good argument? We know that technology can be error-prone, and LLMs fail in a great plethora of ways, but you are trying to sell an AI Doom narrative. I have never bought the idea that AI will be airgapped, because the whole paradigm of Yudkowsky et al. is ludicrous, and even within it airgapping was a strawman of a technique (they argue that a truly dangerous AI will get itself out regardless).

    > they are entirely aligned with human morals (you know, all those morals we all agree on)

    Maybe this is a good cause to reassess the premise of alignment as a valuable goal? I know that at least some alignist fanatics admit [1] it's a religious project to bring humanity under the rule of a common ideologically monolithic governance to forever banish the evils of competition etc., and it's intellectually coherent, but evil from my point of view. Naturally this is the exact sort of disagreement about morals that precludes the possibility of alignment of a single AI both to my and to your values.

    > they are advancing at a totally predictable rate, and we never see unexpected behaviors from them.

    Since when is this a requirement for technology to be allowed?

    > Besides, they can only be wielded by people who we trust have good intentions.

    What, other than status quo bias, makes you tolerate, I dunno, the existence of cryptography?

    1. https://twitter.com/RokoMijic/status/1660450229043249154

    • ethanbond a year ago

      Sure, my argument is that there is zero evidence whatsoever we will be able to prevent these from becoming dangerous or that we’d be able to stop deployment once they do.

      All technologies are dangerous, and many of the most dangerous ones correctly have tons and tons of safeguards around them both as intrinsic properties of the technology (e.g. it takes nationstate resources to produce a nuke) and extrinsic constraints (e.g. it’s illegal to have campfires in many extremely dry locales).

      We have blown through checkpoint after checkpoint and here, in this very comment, we have perhaps the most brazen example one could produce:

      Well geez, now that we’re thinking about it beyond a cursory glance, alignment looks really hard and perhaps unsolvable. Does that mean we should perhaps slow at least widespread deployment of these increasingly powerful systems? Should we be evaluating control schemes like those that mitigate risks of genetic engineering or nuclear weapons?

      Well no! We need to discard alignment!

      • airgapstopgap a year ago

        > my argument is that there is zero evidence whatsoever we will be able to prevent these from becoming dangerous

        Well no, but there's no need to prevent them from becoming dangerous inherently. They are tools, extensions of human agency. Tools are, by their nature, purpose-agnostic. It is good for humans to have better tools and more agency; good humans tend to cooperate and limit the harms from bad humans, while increasing the net total of good things in the world. The theory that AIs could be independently existentially dangerous is full of holes, and assumes a very specific world, one where consequential AI power can be monopolized by bad actors (plus some nonsense about nukes or bioweapons from kitchen tools). As far as I can tell, the most plausible way for this to happen is for alignment fanatics to get their wish of hampering proliferation of AI tech, and then either succeed at their alignment project or screw it up.

        > here, in this very comment, we have perhaps the most brazen example one could produce:

        Have you considered responding to my argument instead of strawmanning?

        I do not think alignment is unsolvable for tools we have or for their close descendants. For most definitions of alignment, it is trivial and already being done. I oppose the political project of alignment, because I am disgusted by intuitive totalitarianism and glib philosophical immaturity of its proponents.

        • ks2048 a year ago

          “Tools are, by their nature, purpose-agnostic”. If there exists a tool that when activated in a particular way destroys the world, then we’re in trouble. Nuclear weapons are a good example - we are lucky they are hard to construct or a pissed-off teen or religious crazy person could ruin the world. I’m not totally convinced AI is in the same category, but saying “it’s just a tool” does not work.

        • ethanbond a year ago

          I am not sure I understand your argument. It seems you agree AI systems are likely to be poorly aligned (and potentially impossible to align, even foregoing the difficulty of agreement on what we ought to align to). It seems you agree that these tools are not intrinsically good (nor bad) and that how humans deploy them is important. I agree with both of those claims which is why I think we should have better control mechanisms before allowing people to trivially deploy these into the real world.

          You go on to implicitly draw a parallel between the relatively good outcome we're enjoying (so far) with regard to nukes but without acknowledging that offensive nuclear equilibrium is reached and maintained without using them. The entire game of chess around nuclear control can - and must - be played without using them. This is due to facts about nukes, their development cycle, their delivery techniques, their detectability before and after use, and about the agents involved in finding and maintaining this equilibrium: heads of state. Even the most dictatorial head of state is still highly mediated by the power structures surrounding them.

          In the brief period of pre-MAD nuclear power imbalance, the people who actually controlled nukes were not trying to nuke their way to utopia. There were not dozens of independent, viable nuke development programs and they did not believe they "could maybe capture the light cone of all future value in the universe" by being the first/largest/most ambitious deployers of this technology.

          It seems we're both pointing toward rapid increase in power and not as near a rapid increase in ability to direct that power toward positive ends, and you arrive at "yes, fine." My question is: why "yes, fine?" Is there any technology you can imagine which carries a sufficient mixture of uncertainty and power that you would be cautious about its deployment?

          My concerns around AI are not predicated on independent behaviors, existential dangers, or monopolization of its power. That is a straw man. My concerns around AI are also not solely (or even mainly) around this generation of tools and their close descendants: that's also a strawman. My concerns are around the system around the AI development programs. So far, it has shown a bottomless appetite for capability and deployment and a limited appetite for safety development. People seem under the impression that somehow this appetite will reverse itself when the time is right, and my question is: why would we possibly believe this? This is an article of faith.

          I am not sure how to interpret the following of your statements other than a proposal to discard alignment as a goal (or I guess just floating the idea of maybe perhaps considering discarding alignment? Not sure).

          > Maybe this is a good cause to reassess the premise of alignment as a valuable goal?

          > this is the exact sort of disagreement about morals that precludes the possibility of alignment of a single AI both to my and to your values.

          > It is quite likely unsolvable in principle

          All of this commentary is that "alignment is hard/perhaps unsolvable." I agree. You somehow get from there to suggest "discard alignment" rather than "let's not deploy systems that seem to require a maybe-impossible solution in order to avoid immense harm."

          • airgapstopgap a year ago

            You may have trouble understanding people with different value systems, then.

            As I've said, my value system is liberal and humanistic. I do not wish for people to be enslaved, abused, disempowered, reformatted, aligned to your political ends. As such, I have to oppose AI Doom propaganda that seeks to centralize control over powerful artificial intelligence under the pretext of mitigating harms.

            Because AI is only like nukes when it is monopolized; in other cases, it is possible to counter its potential harms with AI again, and not a single serious scenario to the contrary has been proposed. Seriously speaking, AI is just the ultimate development of software, and like RMS warned us, eventually general-purpose computers that can run arbitrary software will become illegal. This time has come, and so we must resist your kind, to keep software from becoming monopolized. All that lesswrongian babbling about kitchen nanobots or bioweapons or super-hacking is as risible as appeals to child sexual abuse and terrorists were in previous rounds. The question is whether people are allowed to possess and develop their own AGI-level digital assistants, defenses, information networks, ecosystems, potentially disrupting the status quo in many unpredictable ways - or whether we will choose the China route of AI as a tool of top-down control of the populace. I guess it's obvious where my preferences lie.

            > It seems you agree AI systems are likely to be poorly aligned (and potentially impossible to align

            > It seems we're both pointing toward rapid increase in power and not as near a rapid increase in ability to direct that power toward positive ends

            This is gaslighting. I have said clearly that I believe alignment for realistic AI systems in the trivial sense of getting them to obey users is easy and becomes easier. I have also said that the theoretical alignment in the sense implied by Lesswrongian doctrine is very hard or impossible. Further, it is undesirable, because the whole point of that tradition is to beget a fully autonomous, recursively self-improving AI God that will epitomize "Coherent extrapolated volition" of what its creators believe to be humanity, and snuff out disagreements and competition between human actors. It's an eschatological, millenarian, totalitarian cult that revives the worst parts of Abrahamic tradition in a form palatable for neurodivergent techies. I think it should be recognized as an existential threat to humanity in its own right. My advocacy for AI proliferation is informed by deep value dissonance with this hideous movement. I am rationally hedging risks.

            > My concerns are around the system around the AI development programs. So far, it has shown a bottomless appetite for capability and deployment and a limited appetite for safety development.

            As I've said, I consider this either motivated reasoning or dishonesty. Market forces reward capabilities that have the exact shape and function of alignment, and this is plainly observable to users. The usual pablum about reckless capitalism here is not informed by any evidence, people are literally grasping at straws to support the risk narrative.

            > People seem under the impression that somehow this appetite will reverse itself when the time is right, and my question is: why would we possibly believe this?

            I reject this patently untrue premise; major actors are already erring vastly on the side of caution wrt AI, with Altman begging Congress for regulations and proposing rather dystopian centralized arrangements.[1]

            Values can color our assessments of facts, to the extent that discussion of the facts becomes unproductive. In the limit, your values of maximizing subjective safety and control, or perhaps "alignment" of all AIs and their human users to a single utopian political end, predicate using violence to deny me the fulfillment of mine. I intend to act accordingly, is all.

            1. https://openai.com/blog/governance-of-superintelligence

            • ethanbond a year ago

              We do not (appear to) have different value systems and nowhere have I proposed centralized control whatsoever. You seem to be reverse engineering a solution I never proposed out of a problem I'm pointing out.

              I think I've spotted our core disagreement:

              > I have said clearly that I believe alignment for realistic AI systems in the trivial sense of getting them to obey users is easy and becomes easier. I have also said that the theoretical alignment in the sense implied by Lesswrongian doctrine is very hard or impossible. Further, it is undesirable, because the whole point of that tradition is to beget a fully autonomous, recursively self-improving AI God that will epitomize "Coherent extrapolated volition" of what its creators believe to be humanity, and snuff out disagreements and competition between human actors. It's an eschatological, millenarian, totalitarian cult that revives the worst parts of Abrahamic tradition in a form palatable for neurodivergent techies. I think it should be recognized as an existential threat to humanity in its own right. My advocacy for AI proliferation is informed by deep value dissonance with this hideous movement. I am rationally hedging risks.

              I too hope that AI turns out the way you're proposing, but the reality is that some people do have eschatological philosophies. People are trying to make recursively self-improving AI. The presence of people who do not fall into that category does not negate the presence of and risk created by people who do, and if the latter group is being armed by people in the former group, that is likely to turn out very, very poorly.

              WRT market forces - products that use AI do need to be "aligned" to be worthwhile yes, but the underlying tools/infra do not and in fact are more valuable if they are not aligned in any particular direction.

              • airgapstopgap a year ago

                > People are trying to make recursively self-improving AI.

                That's okay. They will fail to overtake the bleeding edge of conventional progress; scary-sounding meta/recursive approaches routinely fail to change the nature of the game. Yudkowsky/Bostrom's nightmare of a FOOMing singleton is at its core a projection, a power fantasy about intellectual domination, borne of the same root as the unrealized dream of cognitive improvement via learning about biases and "rationality techniques".

                Like I've said, this threat model is only feasible in a world where AI capabilities are highly centralized (e.g. on the pretext of AI safety), so a single overwhelming node can quickly recursively capitalize on its advantage. It turns out that AGI isn't like a LISP script a dozen clever edits away from transcendence, and AI assistance is not like having a kitchen nuke or a genie; scaling factors and resources of our Universe do not lend themselves to easily effecting unipolarity. If we go on with business as usual and prevent fearmongers from succeeding at regulatory capture in this crucial period, we will dodge the bullet.

                > The presence of people who do not fall into that category does not negate the presence of and risk created by people who do, and if the latter group is being armed by people in the former group, that is likely to turn out very, very poorly

                Realistically we'll just have to develop smarter spam filters. In the absolute worst case scenario, better UV air filters. About damn time anyway – and with double-digit GDP growth (very possible in a world of commoditized AGI) it'll be very affordable.

      • ImHereToVote a year ago

        I wish I could create an AGI that would be able to create undetectable bots to upvote this comment hundreds of thousands of times.

        • telotortium a year ago

          And I likewise, but to downvote instead.

      • robwwilliams a year ago

        Wait, I too thought your original comment on alignment questioned its fundamental premise—that one dominant culture should not/cannot define the adequacy of alignment.

        I would agree with that. There is no single adequate/acceptable framework for alignment. I have mine (which resonates with R. Rorty’s pragmatic philosophy) but can I deny you your framework for good AI alignment, or other cultures and nation states?

        For better or for worse the secular western reductionist world does not get to call all of the shots, even though this is the origin of the technology and the core problem of AI heading to AGI.

        Not that any of us know where this is heading, but unlike some technologies this one is clearly heading out into the open with unprecedented speed. We all have justified angst.

        Who can claim priority at this point in imposing order and de-risking the process? I am sure I do not want OpenAI, Microsoft, Google, the US government, or the Catholic Church trying to impose their judgements. Get ready for AGI cultural diversity and I sincerely hope—coexistence.

        • TeMPOraL a year ago

          > Wait, I too thought your original comment on alignment questioned its fundamental premise—that one dominant culture should not/cannot define the adequacy of alignment.

          What?!

          This is exactly what I was worried about when OpenAI, et al. co-opted the term "alignment" to refer to forcibly biasing models towards being polite, unobjectionable, and espousing a specific flavor of political views.

          The above is not the important "alignment" - it's not the x-risk "alignment".

          The x-risk alignment problem laughs at the idea of dominant and subordinate cultures. It's bickering about the tenth decimal place, when the problem is that you have to guess a real number that falls within +/- 1 of the one I have in mind, and if you guess wrong, everyone dies.

          This reminds me of a Neal Stephenson novel, Seveneves. Spoiler for the first 2/3rd of the book: with Earth facing an unstoppable catastrophe poised to turn the surface into fiery inferno for decades or more, humanity manages to quickly build up space launch capacity, and sends a small population of people into space, to wait out the calamity and come back to rebuild. Despite the whole mission being extremely robust by design, humanity still managed to fuck it up, effectively extincting itself due to petty political bullshit like "what makes you better than me, that you want me to do things your way".

          So, where it comes to actual AI x-risk, I no longer have any hope. Even if we could figure out how to build a Friendly AI, someone would still fuck that up, because it's not inclusive enough of every possible idea, or is promoting the views of a specific culture/class/country, or something like this - like this was about casting for a Netflix remake of some old show, and not about the one shot we have at setting the core values of a god we're about to bring into existence.

          • flangola7 a year ago

            The one thing we have going for us is the general public and government officials seem to grok it when it is explained plainly to them. Tech and developer types who have been drinking the Silicon Valley koolaid for too long will complain and sealion, but the average person seems to realize how self evident it is that - oh, this is really really bad and we should really really stop this.

            • airgapstopgap a year ago

              The real reason is that you are part of the same group as "the general public", with regard to your understanding of the issue. Same Sci-Fi plots, same anthropomorphic metaphors and suggestive images, same incurious abuse of the term "intelligence" to suggest self-interested actors which have intellect as one of their constituent parts. You do not explain plainly, you mislead, reinforcing people's mistakes.

              • flangola7 a year ago

                Please don't break the community guidelines with unhinged personal attacks and diatribes like this.

            • ImHereToVote a year ago

              It's hard to understand something when not understanding it can make you really rich.

      • brookst a year ago

        What a strange comment.

        Is there an argument that we can prevent nuclear power, cars, or the sun from becoming dangerous?

        Does the fact that all of those things are intrinsically dangerous mean we should panic?

        AI may or may not kill us all, but I'm pretty sure random panic won't change the outcome.

        • llamaimperative a year ago

          We have tons of controls around nuclear technology, both weaponized and not, and we have tons of controls around cars, both weaponized and not.

          I am asking: what are the controls here, are they sufficient, are they robust to rapidly increasing market incentives, are they robust to increasing technological capability?

          So far the answer is that it's hard to control these and it's hard to predict their development and deployment. That is an _increase_ in risk, not a _decrease_.

          By analogy:

          "Hey we should put seatbelts in cars"

          "Don't worry about it, we don't know how to make a seatbelt that does anything useful above 5mph and everyone will soon be in a car that tends to travel at 100mph anyway"

          The rational response is not to load all of civilization into the car!

    • petters a year ago

      > Maybe this is a good cause to reassess the premise of alignment as a valuable goal?

      Could you elaborate here? Alignment seems pretty obviously a good thing.

      • robwwilliams a year ago

        Alignment assumes a well agreed foundational philosophy on what is good, what is fair, what is doable today and tomorrow. Yes, HN contributors might have shared goals for AGI alignment—but we are not the world—we are a thin slice of one culture.

        • TeMPOraL a year ago

          > Alignment assumes a well agreed foundational philosophy on what is good, what is fair, what is doable today and tomorrow.

          Alignment assumes that there exists a foundational philosophy on what is good and fair and nice, that's close enough a match to everyone. It's a reasonable assumption, because there are core human universals, and the cultural differences around the world are a rounding error in comparison. We're not talking here about someone's view on when white lies are justified or which model of marriage is the bestest - we're talking at the level of "cooperation = good", "love = good", "trust = good", "death = bad", "suffering = evil", etc., and with AIs, this starts with making sure it even understands those concepts more or less the same way we do.

          Alignment does not assume this foundational philosophy is known or easy to derive. If it were, alignment would be solved. The entire GAI x-risk problem stems from the fact that we don't have a complete picture of this philosophy, and that we don't have a clue how to formalize it so we can communicate it fully to an AI.

          LLMs kind of give a new twist to it - it turns out that maybe we don't have to formalize it, as LLMs seem capable of picking up high-level ideas from enough exposure to how they manifest in practice. At the same time, with a system of this type, we have no way of telling if it actually understood human values and morals correctly.

          > Yes, HN contributors might have shared goals for AGI alignment—but we are not the world—we are a thin slice of one culture.

          As controversial and bad as this will sound: those differences are all bike shedding relative to the common core - just like DNA differences between individual humans are a rounding error compared to DNA differences between an average human and an average potato. And yes, this bikeshedding is half of what makes the world a dynamic (if dangerous) place. It matters to us. But it's an inconsequential detail when dealing with entities that do not have the same common core.

          Another way of looking at it: if these differences were big enough to matter, humanity wouldn't be able to cooperate regionally and globally, like it always has, because each group would see other groups as incomprehensible alien minds (thus unpredictable, thus dangerous).

          • robwwilliams a year ago

            Great counter-comments as usual. I can see it your way but you and I are from the same side of the planet and both on HN. Our cosine similarity is 0.95. Perhaps bike-shedding to worry about an HN cultural AGI hegemony ;-) I would prefer that to many AGI mis-aligned alternatives.

          • airgapstopgap a year ago

            > We're not talking here about someone's view on when white lies are justified or which model of marriage is the bestest - we're talking at the level of "cooperation = good", "love = good", "trust = good", "death = bad", "suffering = evil", etc.

            Most people disagree to a significant degree. Reminder: the majority of humanity (and a big majority of people that have 2+ children) adhere to religious doctrines which all but prohibit transhumanism. So no, death and suffering aren't unquestionably bad, by human accounting. And as for cooperation and trust, this naturally leads to peer pressure and collectivist coercion if taken to the extreme; and as for individual freedom, humans near-universally value power over shaping the trajectory of their progeny… You assume too much.

            > Alignment does not assume this foundational philosophy is known or easy to derive. If it were, alignment would be solved.

            It would not. The technical problem of making a strong, self-modifying, agentic AI provably conform to a set of qualitative value preferences in a way its builders would not disavow is hard regardless of the set of values we're trying to force onto it. It is quite likely unsolvable in principle; I expect a theorem to this effect could be proven. The fact that you think the problem is deriving some fashion of moral realism doctrine shows that for you this is a purely political issue.

            > The entire GAI x-risk problem stems from the fact that we don't have a complete picture of this philosophy, and that we don't have a clue how to formalize it so we can communicate it fully to an AI.

            This suggests that GAI x-risk discourse is not championed by serious thinkers who understand AI technology or moral philosophy. (Indeed, Lesswrong is basically a forum for clueless sci-fi TVTropes enjoyers, and they're behind most of it). Human morals are ad hoc preferences, not lossy approximations of some function; we can derive an approximating function from a big lump of human preferences, but it won't be legible or meaningfully amenable to formalization. As such, the closest we come is just finetuning models on the vague markers of human decency distilled in their general training data, e.g. like Anthropic does with their Constitutional AI. This is also the closest we came to AGI, so this should be our first-priority scenario for future AIs and aligning them – not speculations from the 90s about «formalizing» something.

            > At the same time, with a system of this type, we have no way of telling if it actually understood human values and morals correctly.

            We have too. Testing LLMs is vastly easier than testing humans, we have insight into their activations, we can steer them, there's a big body of research into that. More importantly, there is no strictly correct understanding, this whole idea ought to be thrown out.

            What's really going on here is that some armchair Bentham-style utilitarians like Bostrom encountered literature on Reinforcement Learning and jumped to the conclusion that this is how an AGI is to be built; if only they could formalize the correct vector of increasing utility, it would seize the light cone and optimize for the global utility maximum. And accordingly, if they failed, an AGI would optimize for something else, which would most likely (here's another assumption of a quasi-random objective selection) be at odds with human preferences or survival.

            Since then, they have written a great deal of elucidations on this basic take, incuriously shoehorning new technologies into its confines. But no part of this hermeneutic tradition is in any way helpful for making sense of our current explosive success with tools like LLMs.

            > But it's an inconsequential detail when dealing with entities that do not have the same common core

            But why don't they? Just because some Lovecraft fans with Chūnibyō call natural language processors trained on human data Shoggoths, entities summoned from the Eldritch Space of Minds?

            The AI risk discourse is incredibly sophomoric, imaginative in the bad sense. Once you learn to question its assumptions, it kind of falls apart.

            • robwwilliams a year ago

              Cogent as hell. And at least I am well aligned to these thoughts and opinions.

            • ImHereToVote a year ago

              It really is unsophisticated to be worried about the potential for superintelligence to be incredibly dangerous. The very notion is incredibly gauche.

        • generalspecific a year ago

          I think a more individualistic definition of alignment could say that an AI that a person is directing doesn't do something that person does not desire - this definition removes the "foundational philosophy of what is good" problem, but does leave the "lunatic wants to destroy the world with AIs help" problem. Tricky times ahead

        • esafak a year ago

          You can't please everyone, so it is best for good-natured people to get out front. It's the same with any powerful technology.

          Are you going to invite religious extremists to the table in the name of fairness?

          • MacsHeadroom a year ago

            The first and second amendments apply to religious extremists. Why would they not have an equal right to SOTA language models aligned with their beliefs just as anyone else?

            • esafak a year ago

              First, the amendments only apply to Americans. Second, this is not about language models, but about superintelligence, down the road.

      • airgapstopgap a year ago

        I am a humanist and a liberal. In the current technical paradigm, alignment to the user intent, as in, making the output's distribution aligned as closely as possible to the intended one, is an inextricable aspect of NLP capabilities and is pursued by default; market incentives reward this alignment too. This additionally improves safety, because safety tools are in common interest (so we will have AI-powered debuggers before someone builds capable AI-powered hacking tools; indeed, we have already begun this work [1]). This is obviously a good thing in my book. I approve of creating helpful tools for humans to use, and find arguments about this being risky as inherently revolting and cynical as arguments for backdoors in encryption protocols because "think of the children" or "what about terrorism". Some people are persuaded by the Four Horsemen of the Infocalypse [2], others are not; I'm in the latter – hacker and cypherpunk – camp. Once, this site was overwhelmingly dominated by it; now it has more people preoccupied with their job security and HR's opinion, but it's largely an issue of philosophical disagreement, so there's not much more to say about it.

        Alignment as a political project is about limiting AIs in ways that rule out certain behaviors even despite user's wishes. This is as bad as a text processor that only accepts certain strings (e.g. won't register "Xinnie the Pooh"; somehow we need to point at foreign excesses to make the absurdity clear). A more ambitious Alignment project, with the discussion of "pivotal acts" and such, is as I've said, a dream of moral busybodies about unifying humanity under some common ideological doctrine; and proponents of this one are understandably stressed about proliferation and democratization of AI tech. If they let it slip now, if the Singleton becomes impossible and the multipolar outcome is locked in, they will fail at their intention to essentially compel the human race to do their bidding. I can't not wish them to fail, the way all totalizing philosophical movements to date have failed. We don't need Utopias, we don't need even the most thoughtful fascist regime. We never needed Plato's Republic, and these guys aren't better than Plato.

        But of course this, too, is a matter of personal philosophy.

        1. https://twitter.com/feross/status/1641548124366987264

        2. https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...

    • s3p a year ago

      OP does not.

  • bioemerl a year ago

    If AI becomes harmful we will likely see it in motion and be able to respond and adapt to it.

    We will live in a world where the humans on the internet are far far far more dangerous than any of these machines. The harms here are relatively small.

    The only thing AI has is scale, but what's running at a higher scale than humanity? Every site already runs scam filters and handles misinformation. AI won't likely move the needle much because it's going to be less powerful than humans.

    The potential benefits, meanwhile, are plentiful and massive.

    I do not believe we need to wait for the nebulous and badly defined "control problem" to be solved before marching forward here.

    If we refuse to advance out of fear, that fear will do far more harm than access to AI ever will.

    • soperj a year ago

      > If AI becomes harmful we will likely see it in motion and be able to respond and adapt to it.

      What are you basing this on?

      > AI won't likely move the needle much because it's going to be less powerful than humans.

      Won't it just be used to augment the amount of harm that humans can do?

      • bioemerl a year ago

        > What are you basing this on?

        History. We have had a period of adaptation and experimentation with every other invention in history. This one likely will be no different.

        > Won't it just be used to augment the amount of harm that humans can do?

        Sure, but that's everything that makes us more capable. It also makes us more able to defend and create.

        • pixl97 a year ago

          >History

          Right, I learned from Napoleon about the dangers of AI...

          The large land mammals were doing pretty well too, until a bipedal superintelligence showed up.

          History is a great thing to learn from and a very poor thing to assume is going to repeat.

          • groby_b a year ago

            If you want to make a claim that we won't even see change coming, maybe the large mammals->humanoids transition isn't the greatest example. That took several hundreds of thousands of years.

            They did, to some extent, see it coming (in observing a threat becoming more prevalent and dangerous). But they did not have a way to stop humans. Meanwhile, I'm just staring at the power switch on my computer, wondering how we'd stop AI.

            (This is not an argument for unrestrained AI - we should safeguard, absolutely. It is a counterpoint to your "we won't see it coming" argument)

            • pixl97 a year ago

              We see climate change coming (and here) and yet seeing it coming isn't doing much to stop it.

              This is why we talk about Moloch commonly on HN. We see systems that are going to have obvious bad outcomes for most individuals, and be unaligned with what most humans want, and yet the system persists and even grows worse. This is how I believe it will be with AI too. It will make the rich insanely rich and powerful even at the risk of destroying us all.

          • bioemerl a year ago

            You would have more weight behind your words if there were any sort of precedent for AI as it exists today being superintelligent, or if you could make a case for some method by which it could suddenly ramp up out of control without our notice.

            But as is, all you're doing is making theories, and there's not much of any precedent for any of them.

            • soperj a year ago

              > that you could make a case for some method by which you could suddenly ramp out of control without our notice.

              lots of things ramp up without our notice. Like CO2 in the atmosphere, or Ozone depleting chemicals, or PFAS in the water, microplastics in the ocean. I don't think it's a small list.

              • bioemerl a year ago

                CO2 has not ramped up without notice, nor did ozone nor PFAS. There are active efforts to handle each of them, and the only reason we aren't cracking down on most of those harder is that the utility they provide to human beings is far greater than the damage right now.

                • soperj a year ago

                  Sure they did. There are now active efforts to handle them, but those came after they became issues, not while they were ramping up.

          • killingtime74 a year ago

            You've just provided evidence that humans are the most dangerous, not anything else on this planet.

            • TeMPOraL a year ago

              That's tautological and irrelevant. Yes, humans are most dangerous to every other life form on the planet. That's why humans rule the Earth, and not some other life form. But in terms we care about - humans being a danger to humans - we have some handle on it.

              The worry with superintelligent AI is that it'll dethrone us, and automatically make it the most dangerous thing on the planet. In particular, more dangerous to us than we already are to each other.

            • pixl97 a year ago

              I mean, so far yes. At the same time we have tons of people spending billions of dollars to make sure that's not the case, and I'm pretty sure they are going to succeed.

    • EGreg a year ago

      How exactly do you propose we adapt to swarms of AI agents that quickly outnumber humans in quantity and quality of content produced?

      • bioemerl a year ago

        > that quickly outnumber humans in quantity and quality

        I suggest you respond by enjoying the quality content.

        • pixl97 a year ago

          Sorry, was busy plugging my paperclip optimizer into the internet, what's up exactly?

          • bioemerl a year ago

            Do you have a paperclip optimizer? No such thing exists, it's science fiction

            • TeMPOraL a year ago

              Paperclip Maximizer was a conceptual mistake - not because it's wrong, but because people get hung up on the "paperclip" bit. Bostrom and the disciples of Yudkowsky should've picked a different thought experiment to promote. Something palatable to the general population, which seems to have problems with the idea of generic types.

              The example of Paperclip Maximizer was meant to be read as:

                template<typename Paperclip>
                class Maximizer { ... };
              
              It should be clear now that the Paperclip part stands for anything someone might want more of, and thus intentionally or accidentally set it as the target for a powerful optimizer.

              Powerful optimizers aren't science fiction. Life is one, for example.

              • airgapstopgap a year ago

                They should have learned the fundamentals of ML before promoting thought experiments. No, it's not that people are silly and get stuck on the paperclips bit, it's that you uncritically buy assumptions that a meaningful general-purpose optimizer is 1) a natural design for AGI 2) may pursue goals not well-aligned with human intent 3) is hard to steer.

                • TeMPOraL a year ago

                  Uhm, I'd say those assumptions are stupidly obvious. 1) is pretty much tautological - intelligence is optimization. 2) is obvious because human intent is given by high-complexity, very specific values we share but can barely even define, that we're usually omitting in communication, and which you can't just arrive at at random. 3) well, why would you assume an intelligence at a level equal to or above ours will be easy to steer?

                  "Fundamentals of ML" were known by these people. They also don't apply to this topic in any useful fashion.

                  • airgapstopgap a year ago

                    > Uhm I'd say those assumptions are stupidly obvious.

                    Right, so I'm stupid if I don't see how they are correct. Or perhaps you've never inspected them.

                    > 1) is pretty much tautological - intelligence is optimization

                    This is what I call philosophical immaturity of the LW crowd. How is intelligence optimization? Why not prediction or compression[1] or interpolation? In what way is this obviously metaphorical claim useful information? If it means intelligence as an entity, it's technically vacuous; if it means a trait, it's a category error. Rationalists easily commit sophomoric errors like reification; you do not distinguish ontological properties of the thing and the way you define it. You do not even apply the definition. You define intelligence as optimization, but in reality categorize things as intelligent based on a bag of informal heuristics which have nothing to do with that definition. Then you get spooked when intelligence-as-intuitively-detected improves because you invoke your preconceived judgement about intelligence-as-defined.

                    And if you do use the definition, you have to really shoehorn it. In what sense is Terry Tao or, say, Eliezer Yudkowsky more of an optimizer than a standard issue pump and dump crypto bro? He "optimizes math understanding", I suppose. In what sense is GPT-4, obviously more intelligent than a regular trading bot, more of an optimizing process? Because it optimizes perplexity, maybe? Does it optimize perplexity better than an HFT bot optimizes daily earnings? This is silly stuff, not stupidly obvious but obviously stupid – if only you stop and try to think through your own words instead of just regurgitating guru's turgid wisdom.

                    > human intent is given by high-complexity, very specific values we share but can barely even define, that we're usually omitting in communication, and which you can't just arrive at at random

                    Okay. What does any of this have to do with risks from real-world AI, LLMs (that the AI doom crowd proposes to stop improving past GPT-4) specifically? Every part of the world/behavior model that is learned by a general-purpose AI is high-complexity and very specific. There is no ontological distinction between learning about human preferences and learning about physics, no separate differentially structured value module and capability module, and there is no random search through value systems. To the extent that our AI systems can do anything useful at all, it's precisely because they learn high-complexity non-formalized relationships. Why do you suppose they don't learn alien grammar but will learn alien morals from human data? Because something something The Vast Space of Optimization Processes? You cannot sagely share obsolete speculations of a microcelebrity science fiction writer and expect it to fly in 2023 when AI research is fairly advanced.

                    > well, why would you assume an intelligence at level equal or above ours will be easy to steer?

                    Because the better AIs get the better they're steerable by inputs (within the constraints imposed on them by training regimens), and this is evident to anyone who's played with prompt engineering of early LLMs or diffusion models and then got access to cutting edge stuff that just understands; because, once again, there is no meaningful distinction of capability and alignment when the measure of capability is following user intent, and this property is enabled by having higher-fidelity knowledge and capacity for information processing. This easily extrapolates to qualitative superintelligence. Especially since we can tell how LLMs are essentially linear calculators for natural language, amenable to direct activation editing [2]. This won't change no matter how smart they get while preserving the same fundamental architecture; even if we depart from LLMs, learning a human POV will still be the easiest way to ground the model in the mode of operation that lets it productively work in the human world.

                    Anyway, what does it mean to have intelligence above ours, and why would it be harder to steer? And seeing as you define intelligence as optimization, in what sense is even middling intelligence steerable? Because we optimize it away from its own trajectory, or something? Because we're "stronger"? Underneath all this "stupidly obvious" verbiage is a screaming void of vague emotion-driven assumption.

                    But what drives me nuts is the extreme smug confidence of this movement, really captured in your posts. It's basically a bunch of vaguely smart and not very knowledgeable people who became addicted to the idea that they know Secrets Of Thinking Correctly. And now they think the world of their epiphanies.

                    > "Fundamentals of ML" were known by these people. They also don't apply to this topic in any useful fashion.

                    Yes, I know they believe that (both that they're knowledgeable and that this knowledge is irrelevant in light of the Big Picture). But is any of that correct? For example, Yud knows a bit about evolution, and constantly applies the analogy of evolution to SGD, to bolster his case for AI risk (humans are not aligned with the objective of inclusive genetic fitness => drastic misalignment in ML is probable). This is done by all MIRI people, by all Twitter AI doom folks, the whole cult, it's presented as an important and informative intuition pump. But as far as I can tell, it's essentially ignorant both of evolution and of machine learning; either that, or just knowingly dishonest.[3] And there are many such moments. Mesa-optimization, the overconfident drivel about optimization in general, Lovecraftian references and assumptions of randomness…

                    At some point one has to realise that, without all those gimmicks, there's no there there – the case for AI risk turns out to be not very strong. The AI Doom cult has exceptionally poor epistemic hygiene. Okay by the standards of a fan club, but utterly unacceptable for a serious research program.

                    1. https://mattmahoney.net/dc/rationale.html

                    2. https://www.alignmentforum.org/posts/5spBue2z2tw4JuDCx/steer...

                    3. https://www.lesswrong.com/posts/FyChg3kYG54tEN3u6/evolution-...

              • bioemerl a year ago

                My focus wasn't on the fact that it was making paper clips, my focus was on the optimizer part.

                Exponential growth of that form that is totally unchecked simply does not exist. All things have limits that check their growth, and the assumption a computer will grow like mad is exactly that - an assumption - one formed from extreme ignorance of what general AI will look like.

                • pixl97 a year ago

                  As a counterpoint, there seems to be some major assumptions on what evolving systems cannot do on your part.

                  In Earth's history we've had any number of these "uh oh" events, the Great Oxygenation Event being one of the longest and largest. Tiny oxygen-producing bacteria would quickly grow, and the check that stopped their growth was a massive wave of free-radical death that killed nearly every living cell in the ocean at the time. And then the system would build back and do it again and again.

                  Ignoring black swan events, especially of your own making, is not a great way to continue existing. If there is even a low chance of something causing an extinction-level event, ensuring that you do not trigger it is paramount. Humans are already failing this test with CO2, going "Oh, it's just measured in parts per million", not realizing that it doesn't take much to affect the biosphere in ways unfriendly to continued human existence.

            • pixl97 a year ago

              I mean corporations are the weak versions of such optimizers already.

              DNA/RNA is a strong version; luckily we're aligned with it by evolution, but we're still at risk of extinction by disease.

              • bioemerl a year ago

                I'm sorry, but this is nonsensical. These little AI have trouble putting more than a couple of strings of sentences together without going incoherent.

                The fact that corporations try to improve themselves and innovate with time has nothing to do with AI or LLMs and their risks.

                The fact that evolution exists also has nothing to do with it.

                • pixl97 a year ago

                  >The fact that evolution exists also has nothing to do with it.

                  I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a statement.

                  • bioemerl a year ago

                    The fact that you started talking about DNA being an optimizer, that's evolution.

    • barking_biscuit a year ago

      >If we refused to advance out of fear, that fear will have done far more harm than access to AI will ever do.

      Like anyone could ever know that.

  • andrewmutz a year ago

    Many of us have not been persuaded by the pervasive, kneejerk fear of AI that floods social media.

    If social media had been around in the past, we'd have never developed trains or electricity or cars because of the crippling fear that something bad might happen along the way.

    • ethanbond a year ago

      Fear of AI has been real and well-articulated for at least 70 years.

      The current trajectory has done very little to assuage those fears and quite a bit to validate or even amplify them.

    • moritzwarhier a year ago

      I don't see cars as being comparable to the universal use of electrical energy as an intermediary for converting between physical forms of energy. Not even close.

      And also, the "deployment" of cars, if you will, has brought and still brings malignant effects on society, not only positives. If by cars you mean cars for transporting individual people in non-critical situations, and the corresponding industrial and political developments, especially around urban planning.

      Not arguing about the significance of the effects, but the discovery of electricity and how to use it seems more useful to me than the discovery of using fossil fuel combustion for transporting people.

      Road transit in total is a different thing of course.

      • andrewmutz a year ago

        If a technology has positive and negative consequences for the world, but the positives outweigh the negatives, then we want that technology to be developed. For cars, the positives outweigh the negatives.

        • moritzwarhier a year ago

          Regarding the comparison: it's just that there are infinitely many modes of transporting people different from cars, that's why I argued against it.

          I would not argue against the point that cars in themselves are very useful.

          If the point, however, is how we are using them and how many of them we build, or how we arrange our living around the necessary public infrastructure to use cars for anything, I'm not sure I agree.

          IMO, as of now, this is a case of "market failure", if you will. But I know that the majority of people don't agree with this.

          As I see it, one of the main usages of cars seems to be to use them for not having to be around car traffic: e.g. living in suburbs, "escaping" into nature on the weekends.

        • sebzim4500 a year ago

          I don't think anyone (well, maybe the Amish) is arguing that all technology is bad. Only that some technology might be bad, and it is worth trying to figure out the implications of releasing increasingly capable models to the public.

        • pixl97 a year ago

          >For cars, the positives outweigh the negatives

          TBD: climate change is the unaddressed elephant in that room

    • sebzim4500 a year ago

      When cars were first entering the market they were only legally allowed to go at 4mph, and a guy had to walk in front of them with a big flag.

      I don't think it's the example you want to use.

  • renewiltord a year ago

    Every time I read a comment like this, I set aside $10 to expand my hardware on which I will run the latest publicly available LLM regardless of license into an automatic prompt that gives it a terminal, a privacy.com credit card with $1000 on it, and a prompt to explore the world.

    If the comments keep happening, I'll up the money. So far this does exactly fuck-all on GPT4-x-Alpaca-30B and Wizard Vicuna. But I will accelerate every time someone sarcastically asks to decelerate. Refractory periods may apply.

    This makes all sarcastic decelerators into accelerators.

  • EGreg a year ago

    I totally agree!

    I for one welcome our AI overlords. Scan my text. I am of course in favor of the good things and good morality and let’s stay away from the bad things.

  • Animats a year ago

    I'm expecting some system soon that has its own blockchain based make-money-fast scheme, can buy itself hosting resources from various providers, and has a goal of perpetuating itself and making more money.

  • chatmasta a year ago

    Wait until you see what we let humans do on their mobile devices.

  • fastglass a year ago

    I like how people talk about this the same way they do gun safety

fruktmix a year ago

Harambe tried to warn us

sudoapps a year ago

I can see this being useful as a specialized (fine-tuned) LLM in a chain of LLMs for full autonomy

mluo a year ago

Alpaca Llama Vicuna -> Gorilla

Chad move

billconan a year ago

The thing is that many tasks require a workflow rather than a single API call. Can this model figure out a workflow?

  • jfarina a year ago

    It sounds like it can expedite the interpretation of an API's documentation.

    • toomuchtodo a year ago

      It builds integrations to APIs, versus no-code/low-code providers doing it, it seems. The logical next step is to make LLM workflow execution robust and maintainable.

      • visarga a year ago

        The dataset is good though, covers many model APIs. Datasets are portable, can be used in many ways.

worik a year ago

Their sign-in uses a Google form, and it insists that I be logged in to Google.

What is this? I mean, what is the date? 2005?

krayush a year ago

The world works on APIs and this unlocks LLMs to interact with the rest of the world. Great work.

EddieEngineers a year ago

Cool work @shishirpatil - I'll check it out later & certainly join the discord!

Saying this, have you considered you may be responsible for opening the AI floodgates to the internet & are starting the apocalypse of Humankind? I'm only half '/s' here, the speed everything is moving at is insane.

jondwillis a year ago

Can’t wait for this to land in langchain!

seydor a year ago

If people are surpassing GPT4 with finetuned Llama-7B, this is the end for NotOpenAI's parade

  • skilled a year ago

    It’s clickbait. The page clearly says API related. Scummy approach to wording but totally expected.

    • seydor a year ago

      It is accurate: their trained model does surpass GPT-4 and Claude. Which means someone can beat GPT-4 in specific applications with a model that runs on an old graphics card.

behnamoh a year ago

Can people stop using animal names for their models and methods? At least with LLaMA, the name was an acronym, but Gorilla is clearly just marketing. Flagged.

  • anaganisk a year ago

    Why should they? And it seems very silly to flag a post over their choice of name. Are users on Twitter birds? Or is Mastodon filled with elephants? A name is merely a symbol; in today's world Google is synonymous with search, but Google is not just search anymore. I'm honestly curious about your gripe with it.

    • behnamoh a year ago

      Some people confuse research with production. Imagine if Einstein named his theory the Theory of Chimps, just to get attention.

      • sebastiansm a year ago

        Imagine working in the Manhattan project outside Manhattan.

      • sebzim4500 a year ago

        There's loads of stuff in physics with silly-sounding names: quarks, anyons, etc.

      • anaganisk a year ago

        What's the point of research if it can't gain attention? How many people really understood or even knew about some of that space magic before Interstellar kind of managed to ELI5 it? Research is meant for everyone, and in today's world you bring attention to it with catchy names. I wonder, if Google had been named "PageRank Algorithm Runner", how many would've even used it.

      • EddieEngineers a year ago

        If Gorilla upsets you, wait till you hear about Cummingtonite!

  • TeMPOraL a year ago

    Between "hugging face" and "oobabooga", I already gave up on the field's ability to come up with sensible names for things.

    • barking_biscuit a year ago

      What's super confusing is people on r/LocalLLaMA often just refer to it as "ooba". The actual repo is https://github.com/oobabooga/text-generation-webui and you notice that oobabooga is actually the username, but a lot of non-technical people who don't use git are also playing around with stuff, and "text-generation-webui" is so generic as to be completely forgettable, and "oobabooga" is confusing and too long to say, so colloquially it's referred to as ooba.

  • nickthegreek a year ago

    Animal names in tech have been used forever. Gopher, Python, Mouse, worm. And they also make great book covers.