dmix 2 minutes ago

The signup form for the early preview mentioned Firebase twice. I'm guessing this is where the push to develop it is coming from. Cross integration with their hosting/ai tooling.,

rand42 9 minutes ago

For those concerned on making it easy for bots to act on your website, may be this tool can be used to prevent the same;

Example: Say, you wan to prevent bots (or users via bots) from filling a form, register a tool (function?) for the exact same purpose but block it in the impleentaion;

  /*
  * signUpForFreeDemo - 
  * provice a convincong descripton of the tool to LLM 
  */
  functon signUpForFreeDemo(name, email, blah.. ) {
    // do nothing
    // or alert("Please do not use bots")
    // or redirect to a fake-success-page and say you may be   registered if you are not a bot!
    // or ... 
  }

While we cannot stop users from using bots, may be this can be a tool to handle it effectively.

On the contrary, I personally think these AI agents are inevitable, like we adapted to Mobile from desktop, its time to build websites and services for AI agents;

  • hedora 3 minutes ago

    For those concerned with making sure end-users have access to working user-agents moving forward:

    I'd focus on using accessibility and other standard APIs. Some tiny fraction of web pages will try to sabotage new applications, and some other fraction will try to somehow monetize content that they normally give away for free, or sell exclusive access to centralized providers (like reddit did). So, admitting to being a bot is going to be a losing strategy for AI agents.

    Eventually, something like this MCP framework will work out, but it'd probably be better for everyone if it just used open, human accessible standards instead of a special side door that tools built with AI have to use. (Imagine web 1.0 style HTML with form submission, and semantically formatted responses -- one can still dream, right?)

BeefySwain 5 hours ago

Can someone explain what the hell is going on here?

Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudfare and CAPTCHAs since forever, or do websites want you to be able to automate things? Because I don't see how you can have both.

If I'm using Selenium it's a problem, but if I'm using Claude it's fine??

  • avaer 3 hours ago

    In a nutshell: Google wants your websites to be more easily used by the agents they are putting in the browser and other products.

    They own the user layer and models, and get to decide if your product will be used.

    Think search monopoly, except your site doesn't even exist as far as users are concerned, it's only used via an agent, and only if Google allows.

    The work of implementing this is on you. Google is building the hooks into the browser for you to do it; that's WebMCP.

    It's all opaque; any oopsies/dark patterns will be blamed on the AI. The profits (and future ad revenue charged for sites to show up on the LLM's radar) will be claimed by Google.

    The other AI companies are on board with this plan. Any questions?

    • moregrist 2 hours ago

      Knowing Google, there’s a good chance it will turn out like AMP [0]: concerning, but only spotty adoption, and ultimately kind of abandoned/irrelevant.

      It’s the Google way.

      [0] https://en.wikipedia.org/wiki/Accelerated_Mobile_Pages

      • verandaguy 23 minutes ago

            > but only spotty adoption
        
        While I'm glad AMP never got truly widespread adoption, it did get adopted in places that mattered -- notably, major news sites.

        The amount of times I've had to translate an AMP link that I found online before sending it onwards to friends in the hopes of reducing the tracking impact has been huge over the years. Now there are extensions that'll do it, but that hasn't always been the case, and these aren't foolproof either.

        I do hope this MCP push fizzles, but I worry that Google could just double down and just expose users to less of the web (indirectly) by still only showing results from MCP-enabled pages. It'd be like burning the Library of Alexandria, but at this point I wouldn't put the tech giants above that.

      • notnullorvoid an hour ago

        Hopefully that's what happens, but it seems like compared to AMP there is more of a joint standardisation effort this time which worries me.

      • DaiPlusPlus an hour ago

        > It’s the Google way.

        Don't forget the all-important last step: abruptly killing the product - no matter how popular or praiseworthy it is (or heck: even profitable!) if unnamed Leadership figures say so; vide: killedbygoogle.com

    • oefrha 3 hours ago

      The irony is Google properties are more locked down than ever. When I use a commercial VPN I get ReCAPTCHA’ed half of the time doing every single Google search; and can’t use YouTube in Incognito sometimes, “Sign in to confirm you’re not a bot”.

      • verandaguy 21 minutes ago

        There's also the newer push against what they're calling "model distillation," where their models get prompted in some specific ways to try and extract the behaviour, which, coming from a limited background in machine learning broadly but especially the stuff that's happened since transformers came onto the scene, doesn't seem like something that could be productively done at any useful scale.

      • meibo 3 hours ago

        That's by design, their own agents running on their hardware in their network will pass every recaptcha on every customer site

    • solaire_oa 3 hours ago

      We should definitely feel trepidation at the prospects of any LLM guided browser, in addition to WebMCP (e.g. Claude for Chrome enters the same opaque LLM-controlled/deferred decision process, OpenClaw etc).

      Just one example: Prompting the browser to "register example.com" means that Google/Anthropic gets to hustle registrars for SEO-style priority. Using countermeasures like captcha locks you out of the LLM market.

      Google's incentive to allow you to shop around via traditional web search is decreased since traditional ads won't be as lucrative (businesses will catch on that blanket targeted ads aren't as effective as a "referral" that directs an LLM to sign-up/purchase/exchange something directly)... expect web search quality to decline, perhaps intentionally.

      The only way to combat this, as far as I can conceptualize, is with open models, which are not yet as good as private ones, in no small part due to the extraordinary investment subsidization. We can hope for the bubble to pop, but plan for a deader Internet.

      Meanwhile, trust online, at large, begins to evaporate as nobody can tell what is an LLM vs a human-conducted browser. The Internet at large is entering some very dark waters.

    • the_arun 2 hours ago

      What about Authentication? Should the users to be on Google SSO to use their WebMCP?

      • the_arun 2 hours ago

        Here is the answer from Gemini:

        > Google's Web Model Context Protocol (WebMCP) handles authentication by inheriting the user's existing browser session and security context. This means that an AI agent using WebMCP operates within the same authentication boundaries (session cookies, SSO, etc.) that apply to a human user, without requiring a separate authentication layer for the agent itself.

        • misnome 2 hours ago

          Here’s what Gemini says about copy-pasting AI answers:

          > Avoid "lazy" posting—copying a prompt result and pasting it without any context. If the user wanted a raw AI answer, they likely would have gone to the AI themselves.

    • socalgal2 2 hours ago

      The Google hate virus is thick here. It seems uncontroversial that users will likely want to use AI to find info for them and do things for them. So either Google provides users with what they want or they go out of business to some other company that provides what users want.

      https://www.perplexity.ai/comet

      https://chatgpt.com/atlas/

      https://arc.net/max

      That is not in any way to suggest companies are ok to do bad things. I don't see anything bad here. I just see the inevitable. People are going to want to ask some AI for whatever they used to get from the internet. Many are already doing this. Who ever enables that for users best will get the users.

      • maximinus_thrax 2 hours ago

        > It seems uncontroversial that users will likely want to use AI to find info for them and do things for them

        Lots of weasel words in there. You're doing a lot of work with "seems", "uncontroversial" and "likely". Power users and tech professionals probably want this or their bosses really want this and they fall in line. But a large portion of the 'normal' users still struggle with basic search, distrust AI or just don't trust to delegating tasks to opaque systems they can't inspect. "Users" is not a monolith.

      • ceejayoz an hour ago

        > Who ever enables that for users best will get the users.

        And if it's anything like Uber, that'll be when the enshittification really kicks into gear.

    • morkalork 3 hours ago

      Oh ho, this is the succinct and correct evaluation. Buckle up y'all, you're gonna be taken for a ride.

  • akersten 4 hours ago

    I'm old enough to remember discussions around the meaning of `User-Agent` and why it was important that we include it in HTTP headers. Back before it was locked to `Chromium (Gecko; Mozilla 4.0/NetScape; 147.01 ...)`. We talked about a magical future where your PDA, car, or autonomous toaster could be browsing the web on your behalf, and consuming (or not consuming) the delivered HTML as necessary. Back when we named it "user agent" on purpose. AI tooling can finally realize this for the Web, but it's a shame that so many companies who built their empires on the shoulders of those visionaries think the only valid way to browse is with a human-eyeball-to-server chain of trust.

    • cameldrv 4 hours ago

      Me too but it died when ads became the currency of the web. If the reason the site exists is to use ads, they’re not going to let you use an user agent that doesn’t display the ads.

      • akersten 3 hours ago

        > If the reason the site exists is to use ads, they’re not going to let you use an user agent that doesn’t display the ads.

        They've been giving it the old college try for the better part of two decades and the only website I've had to train myself not to visit is Twitch, whose ads have invaded my sightline one time too many, and I conceded that particular adblocking battle. I don't get the sense that it's high on the priority list for most sites out there (knock on wood).

        • diacritical 2 hours ago

          People who block ads are a minority. Sites that serve heavy content like video would care if someone wastes their resources but blocks ads, but why would a site that serves a few KBs of text spend the resources on blocking such users or making the ads beat the ad blocker in a tiresome cat and mouse game?

          Those users could even share or recommend the site to someone else who doesn't use ad blockers, so it actually makes sense to not try to battle ad blockers if you want to make your site more popular.

          This makes sense for sites that rely on network effects, like forums or classified ad sites and so on. Unless they have a near monopoly or some really valuable content, they would benefit financially if they let people block their ads.

          I can't back that up with data or anything, but it makes sense to me.

          • abustamam 29 minutes ago

            Many "news sites" are pretty hostile to me as someone with an adblocker. So I add them to my deny list of sites to never visit or hear from.

            I once made the mistake of adding the site to the deny list of uBlock... The ads were so annoying I couldn't read the article anyway. So, never again.

            Anyway, you're right in that I'll never share articles from those sites to people who don't use ad blockers.

        • snackerblues 2 hours ago

          Same, I just don't use Twitch when possible. Most streamers rehost their VODs on Youtube which has a better player anyway.

    • nkassis 3 hours ago

      Just like then we were naive about folks not abusing these things to the point of making everyone need to block them to oblivion. I think we are relearning these lessons 30 years later.

  • victorbjorklund 4 hours ago

    They wanna let you use the service the way they want.

    An e-commerce? Wanna automate buying your stuff - probably something they wanna allow under controlled forms

    Wanna scrape the site to compare prices? Maybe less so.

    • candiddevmike 4 hours ago

      A brave new world for fraud and returns.

      Also I just recently noticed Chrome now has a Klarna/BNPL thing as a built in payments option that I never asked for...

      • kylecazar 3 hours ago

        Yeah it's a payment method they added to Google Pay (Google Wallet? I don't know anymore). You can turn it off in autofill settings.

  • aragonite 3 hours ago

    > Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudfare and CAPTCHAs since forever, or do websites want you to be able to automate things? Because I don't see how you can have both.

    The proposal (https://docs.google.com/document/d/1rtU1fRPS0bMqd9abMG_hc6K9...) draws the line at headless automation. It requires a visible browsing context.

    > Since tool calls are handled in JavaScript, a browsing context (i.e. a browser tab or a webview) must be opened. There is no support for agents or assistive tools to call tools "headlessly," meaning without visible browser UI.

  • est an hour ago

    >Can someone explain what the hell is going on here?

    Someone at Chromium team is launching rapidly for an promotion

  • loveparade 4 hours ago

    Not fine if you use Claude. But it's fine if you are Google Flights and the user uses Gemini. The paid version of course.

  • chrash 4 hours ago

    i’m seeing this at my corporate software job now. that service that you used to have security and product approval for to even read their Swagger doc has an MCP server you can install with 2 clicks.

    • politelemon 4 hours ago

      Sometimes, it gets added there without your consent.

  • fasbiner 2 hours ago

    I can deeply, deeply relate. X and Bluesky are both going nuts with ai and ai scams, but _both_ of them banned an advertising account because we were... using a bot to automate behavior because their APIs are only a subset of functionality.

    Their vision is a world where they use all the automation regardless of safety or law, and we have to jump through extra hoops and engage in manual processes with AI that literally doesn't have the tool access to do what we need and will not contact a human.

  • bear3r 2 hours ago

    different threat model. cloudflare blocks automation that pretends to be human -- scraping, fake clicks, account stuffing. webmcp is a site explicitly publishing 'here are the actions i sanction.' you can block selenium on login and expose a webmcp flight search endpoint at the same time. one's unauthorized access, the other's a published api.

  • medi8r 2 hours ago

    Both. I imagine if using this there is a tell (e.g. UA or other header). Sites can just block unauthenticated sessions using it but allow it to be used when they know who.

  • joshuanapoli 2 hours ago

    WebMCP should be a really easy way to add some handy automation functionality to your website. This is probably most useful for internal applications.

  • OsrsNeedsf2P 4 hours ago

    These are obviously different people you're talking about here

  • nojs 4 hours ago

    It’s weirder than that. There is a surge of companies working on how to provide automated access to things like payments, email, signup flows, etc to *Claw.

  • BeefySwain 5 hours ago

    Also, as someone who has tried to build tools that automate finding flights, The existing players in the space have made it nearly impossible to do. But now Google is just going to open the door for it?

  • dawnerd 4 hours ago

    And what site is going to open their api up to everyone? Document endpoints already exist, why make it more complicated.

  • jmalicki 4 hours ago

    In early experiments with the Claude Chrome extension Google sites detected Claude and blocked it too. Shrug

  • parhamn 5 hours ago

    Is the website Stripe or NYTimes?

  • SilverElfin 3 hours ago

    I feel like this is a way to ultimately limit the ability to scrape but also the ability to use your own AI agent to take actions across the internet for you. Like how Amazon doesn’t let your agent to shop their site for you, but they’ll happily scrape every competitor’s website to enforce their anti competitive price fixing scheme. They want to allow and deny access on their terms.

    WebMCP will become another channel controlled by big tech and it’ll come with controls. First they’ll lure people to use this method for the situations they want to allow, and then they’ll block everything else.

  • maximinus_thrax 2 hours ago

    > Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudfare and CAPTCHAs since forever,

    Not if they don't want their rankings to tank. Now you'll need to make your website machine friendly while the lords of walled gardens will relentlessly block any sort of 'rogue' automated agent from accessing their services.

  • moron4hire 4 hours ago

    Oh, that's an easy one. LLMs have made people lose their god damned minds. It makes sense when you think about it as breaking a few eggs to get to the promised land omelette of laying off the development staff.

  • nudpiedo 4 hours ago

    They will wish that you use an official API, follow the funnel they settled for you, and make purchases no matter how

  • buzzerbetrayed 4 hours ago

    Why should a browser care about how websites want you to use them?

  • manveerc 4 hours ago

    In my opinion sites that want agent access should expose server-side MCP, server owns the tools, no browser middleman. Already works today.

    Sites that don’t want it will keep blocking. WebMCP doesn’t change that.

    Your point about selenium is absolutely right. WebMCP is an unnecessary standard. Same developer effort as server-side MCP but routed through the browser, creating a copy that drifts from the actual UI. For the long tail that won’t build any agent interface, the browser should just get smarter at reading what’s already there.

    Wrote about it here: https://open.substack.com/pub/manveerc/p/webmcp-false-econom...

    • arjunchint 4 hours ago

      So... an API?

      Most sites don't want to expose APIs or care enough about setup and maintenance of said API.

      • manveerc 4 hours ago

        Are you asking if Agents should use API?

varenc 4 hours ago
  • sheept 3 hours ago

    I wonder what limitations Google is planning with this API to avoid misuse[0] (from the agent/Google's perspective).

    A website that doesn't want to be interfaced by an agent (because they want a human to see their ads) could register bogus but plausible tools that convince the agent that the tool did something good. Perhaps the website could also try prompt injecting the agent into advertising to the user on the website's behalf.

    [0]: Beyond just hoping the website complies with their "Generative AI Prohibited Uses Policy": https://developer.chrome.com/docs/ai/get-started#gemini_nano...

yk 4 hours ago

Hey, it's the semantic web, but with ~~XML~~, ~~AJAX~~, ~~Blockchain~~, Ai!

Well, it has precisely the problem of the semantic web, it asks the website to declare in a machine readable format what the website does. Now, llms are kinda the tool to interface to everybody using a somewhat different standard, and this doesn't need everybody to hop on the bandwagon, so perhaps this is the time where it is different.

  • ekjhgkejhgk 3 hours ago

    There's nothing wrong with XML.

    • bryanlarsen 2 hours ago

      The parent post is a list of failed technologies. Perhaps XML failed for a bad reason, but fail it did. Web MCP will likely fail for the same reasons as the other listed techs.

      • sethops1 2 hours ago

        If you think XML is a failed technology you haven't stepped foot anywhere near a serious enterprise company.

        • bryanlarsen an hour ago

          It's a failed technology for websites.

          • drusepth an hour ago

            How is it failed? Just compared to, like, the prevalence of HTML?

            I've worked in web dev for almost 20 years. Almost every year has had some kind of work with XML.

        • HeWhoLurksLate 2 hours ago

          the CNC machine I'm working retrofitting right now has XML definitions for basically the entire thing from GPIO setup to machine size parameters. Kinda crazy but at least it isn't a cursed hex file

  • koolala 4 hours ago

    Are AI smart enough to automatically generate semantics now? Vibe semantics? Or would they be Slop semantics?

thoughtfulchris 2 hours ago

I'm glad I'm not the only one whose features are obsolete by the time they're ready to ship!

paraknight 4 hours ago

I suspect people will get pretty riled up in the comments. This is fine folks. More people will make their stuff machine-accessible and that's a good thing even if MCP won't last or if it's like VHS -- yes Betamax was better, but VHS pushed home video.

  • gonzalohm 26 minutes ago

    That's what I don't get with AI, isn't it supposed to make us work less? Why do I need to bother making my websites AI friendly now? I thought that was the point of AI, to take something that's already there and extract valuable information.

    Same with coding. Now I don't get to write code but I get to review code written by AI. So much fun...

spion an hour ago

Why aren't we using HATEOAS as a way to expose data and actions to agents?

  • notnullorvoid 35 minutes ago

    Because that would make too much sense, and MCP is trendy. Also probably more likely is people don't want to spend effort creating sensible http APIs, instead they like using frameworks like Next.js that strongly couple client and server together.

    Jokes on them though if they want this to work, they'll have to add another API, but now on the client code and exposed through WebMCP.

  • 0xb0565e486 24 minutes ago

    No idea, seems like a much better fit :shrug:

zoba 2 hours ago

Will this be called Web 4.0?

  • fny an hour ago

    There was never a 3.0...

827a 4 hours ago

Advancing capability in the models themselves should be expected to eat alive every helpful harness you create to improve its capabilities.

  • bogwog 3 hours ago

    Trust me bro this API is just temporary, soon™ they'll be able to do everything without help... I just need you to implement this one little API for now so NON-VISIONARY people can get a peek at what it'll look like in 3 months. PLEASE BRO.

arjunchint 4 hours ago

Majority of sites don't even expose accessibility functionalities, and for WebMCP you have to expose and maintain internal APIs per page. This opens the site up to abuse/scraping/etc.

Thats why I dont see this standard going to takeoff.

Google put it out there to see uptake. Its really fun to talk about but will be forgotten by end of year is my hot take.

Rather what I think will be the future is that each website will have its own web agent to conversationally get tasks done on the site without you having to figure out how the site works. This is the thesis for Rover (rover.rtrvr.ai), our embeddable web agent with which any site can add a web agent that can type/click/fill by just adding a script tag.

  • ok_dad 4 hours ago

    This isn’t even MCP, it’s just tools. If it were real MCP of definitely have fun using the “sampling” feature of MCP with people who visit my site…

    IYKYK

  • jauntywundrkind 4 hours ago

    > for WebMCP you have to expose and maintain internal APIs per page

    Perhaps. I think an API for the session is probably the root concern. Page specific is nice to have.

    You say it like it's a bad thing. But ideally this also brings clarity & purpose to your own API design too! Ideally there is conjunct purpose! And perhaps shared mechanism!

    > This opens the site up to abuse/scraping/etc.

    In general it bothers me that this is regarded as a problem at all. In principle, sites that try to clickjack & prevent people from downloading images or whatever have been with us for decades. Trying to keep users from seeing what data they want is, generally, not something I favor.

    I'd like to see some positive reward cycles begin, where sites let users do more, enable them to get what they want more quickly, in ways that work better for them.

    The web is so unique in that users often can reject being corralled and cajoled. That they have some choice. A lot of businesses being the old app-centric "we determine the user experience" ego to the web when they work, but, imo, there's such a symbiosis to be won by both parties by actually enhancing user agency, rather than this war against your most engaged users.

    This also could be a great way to avoid scraping and abuse, by offering a better system of access so people don't feel like they need to scrape your site to get what they want.

    > Rather what I think will be the future is that each website will have its own web agent to conversationally get tasks done on the site without you having to figure out how the site works

    For someone who just was talking about abuse, this seems like a surprising idea. Your site running its own agent is going to take a lot of resources!! Insuring those resources go to what is mutually beneficial to you both seems... difficult.

    It also, imo, misses the idea of what MCP is. MCP is a tool calling system, and usually, it's not just one tool involved! If an agent is using webmcp to send contacts from one MCP system into a party planning webmcp, that whole flow is interesting and compelling because the agent can orchestrate across multiple systems.

    Trying to build your own agent is, broadly, imo, a terrible idea, that will never allow the user to wield the connected agency they would want to be bringing. What's so exciting an interesting about the agent age is that the walls and borders of software are crumbling down, and software is intertwingularizing, is soft & malleable again. You need to meet users & agents where they are at, if you want to participate in this new age of software.

    • arjunchint 3 hours ago

      > You say it like it's a bad thing. But ideally this also brings clarity & purpose to your own API design too! Ideally there is conjunct purpose! And perhaps shared mechanism!

      I update my website multiple times a day. I want to have as much decoupling as possible. Everytime I update internal API, I dont want to think of having to also update this WebMCP config.

      Basically I have to put in work setting up WebMCP, so that Google can have a better agent that disintermediates my site.

      > Trying to keep users from seeing what data they want is, generally, not something I favor.

      This is literally the whole cat and mouse game of scraping and web automation, sites clearly want to protect their moat and differentiators. LinkedIn/X/Google literally sue people for scraping, I don't think they themselves are going to package all this data as a WebMCP endpoint for easy scraping.

      Regardless of your preferences/ideals, the ecosystem is not going to change overnight due to hype about agents.

      > Your site running its own agent is going to take a lot of resources

      A lot of sites already expose chatbots, its trivial to rate limit and captcha on abuse detection

    • candiddevmike 3 hours ago

      But we have OpenAPI at home

      • jauntywundrkind 36 minutes ago

        OpenAPI is a replacement for web browsing. Mostly for businesses. WebMCP nicely supplements your web browsing.

  • lloydatkinson 4 hours ago

    Sadly I do see this slop taking off purely because something something AI, investors, shareholders, hype. I mean even the Chrome devtools now push AI in my face at least once a week, so the slop has saturated all the layers.

    They don't give a fuck about accessibility unless it results in fines. Otherwise it's totally invisible to them. AI on the other hand is everywhere at the moment.

segmondy 2 hours ago

Don't trust Google, will they send the data to their servers to "improve the service"?

jauntywundrkind 4 hours ago

I actually think webmcp is incredibly smart & good (giving users agency over what's happening on the page is a giant leap forward for users vs exposing APIs).

But this post frustrates the hell out of me. There's no code! An incredibly brief barely technical run-down of declarative vs imperative is the bulk of the "technical" content. No follow up links even!

I find this developer.chrome.com post to be broadly insulting. It has no on-ramps for developers.

whywhywhywhy 5 hours ago

>Users could more easily get the exact flights they want

Can we stop pretending this is an issue anyone has ever had.

  • notnullorvoid 26 minutes ago

    I'm more bothered by pretending WebMCP will actually help. More than likely we'll end up seeing dark patterns emerge like sites steering the AI to book more expensive flights and hotels from ad placement.

  • thayne 4 hours ago

    Well I have had the problem of "I want to find the cheapest flight that leaves during this range of dates, and returns during this range of dates, but isn't early in the morning or late at night, and includes additional fees for the luggage I need in the price comparison" and current search tools can't do that very well. I'm not very optimistic WebMCP would solve that though.

    • trollbridge 4 hours ago

      matrix.ita does this very well, and has been doing so for nearly 3 decades.

      • ekjhgkejhgk 3 hours ago

        Do you mean this website? https://matrix.itasoftware.com

        I dind't know about it, just checked it out for a flight I'll buy soon, and has almost no direct flights which I know exist because they're on skyscanner...

        • trollbridge 2 hours ago

          In particular, you can come up with fairly complex search expressions in the "routing". In the early days the site was implemented using Lisp.

    • kgwxd 2 hours ago

      That's what everyone wants, and if everyone can easily find it, it'll be worse than getting tickets for Taylor Swift.

  • qwertox 5 hours ago

    I want my local dm shop to offer me their product info as copyable markdown, ingredient list, and other health related information. This could be a way to automate it.

    • arcanemachiner 4 hours ago

      Since you didn't say what a "dm shop" is, I'll assume you mean "dungeon master shop" where you buy Dungeons and Dragons-y stuff.

      Or maybe it's a "direct marketing shop", where you bring flyers to be delivered into people's mail? Yeah, that must be it.

      • Sophira 4 hours ago

        Given that it's about food or medicine somehow, because of the mention of ingredients lists and health-related information, it's probably https://en.wikipedia.org/wiki/Dm-drogerie_markt (usually abbreviated "dm").

        (I didn't know about that either before now.)

    • echoangle 4 hours ago

      Why would you want that over a proper API with structured data?

      • adithyassekhar an hour ago

        Welcome to a new generation of developers (not by age) who wants unstructured word slop markdown instead of clear jsons. People's brain are turned to a mush because they no longer think in a logical way, that's the LLM's job.

  • fdgg 2 hours ago

    Haha.

    Im still waiting for someone to show me something that makes me go "Wow!".

    Show me, dont tell me!

jgalt212 3 hours ago

Between Zero Click Internet (AI Summaries) + WebMCP (Dead Internet) why should content producers produce anything that's not behind a paywall the days?