jesprenj 1 year ago

Regarding OCSP, off-topic for Apple: Firefox enables OCSP by default. This means that for every TLS connection, a plaintext HTTP OCSP request is made to the certificate authority that signed the certificate of the website the browser is connecting to. The CA therefore receives precisely timestamped information about the exact domains you visit, as long as you use Firefox with OCSP enabled and the website you're connecting to does not use OCSP stapling (most don't). Note that disabling OCSP leaves Firefox unable to get certificate revocation information (maybe it still uses the system's revocation store, I'm not sure about that, but it certainly does not use the more privacy-preserving CRLs).

  • lol768 1 year ago

    It's a trade-off though, isn't it? Everyone seems upset about OCSP as of late, but the domains being sent in plaintext is not exactly limited to just OCSP - we've had this with SNI and DNS.

    The advantage of OCSP is that you get a real-time view of a certificate's status without needing to download large CRLs, which go stale quickly. If you set security.OCSP.require appropriately, you don't run the risk of the browser failing open, either.
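
    For reference, these knobs live in Firefox's preferences; here's a user.js sketch (a config fragment, not runnable code — the pref names are the ones I understand current Firefox to use, but verify them against about:config before relying on this):

```js
// Firefox OCSP-related prefs (check about:config for current names/defaults)
user_pref("security.OCSP.enabled", 1);                // 0 = never fetch OCSP; 1 = fetch for server certs
user_pref("security.OCSP.require", true);             // hard-fail: no valid OCSP response => connection error
user_pref("security.ssl.enable_ocsp_stapling", true); // accept stapled responses, avoiding a direct query to the CA
```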

    It seems to me that the people who most dislike OCSP are CAs, who have to maintain infrastructure capable of responding to the queries. I have really limited sympathy; that should be part of running a CA.

    The privacy concerns could be solved by mandating OCSP stapling; you could then operate the OCSP responders purely for web servers and folks doing research.

    Unfortunately the ship has sailed with ballot SC63 now, and we are where we are. I don't necessarily agree that OCSP as a concept was unfixable, though.

    • jesprenj 1 year ago

      > Everyone seems upset about OCSP as of late, but the domains being sent in plaintext is not exactly limited to just OCSP - we've had this with SNI and DNS.

      My main privacy-related concern isn't that domains are sent in plaintext, but that they are sent to the CA, which can then theoretically do analytics on this data and profile web users.

      But maybe this concern doesn't really make sense as we have strict personal-data regulations now.

katzinsky 1 year ago

This kind of stuff is a major reason I completely cut all Apple stuff out of my life.

When your network is all Linux, everything actually does "just work", more so than it ever did with OS X. Everything is just an SSH away; it's really pretty amazing.

  • gjsman-1000 1 year ago

    SSH is built into macOS and can be enabled in a few clicks.

    https://support.apple.com/lt-lt/guide/mac-help/mchlp1066/mac

    • boffinAudio 1 year ago

      ... except the ssh binary is also tracked with OCSP ...

      • kjkjadksj 1 year ago

        Can’t you disable it though or block the connection?

        • katzinsky 1 year ago

          But you don't have to on Linux.

          Like, there's just so much crap you have to do to make these non-free OSes pleasant or private, and it's always blowing up. Linux mostly "just works" OOTB.

        • talldayo 1 year ago

          No developer should rationally be expected to do that. SSH doesn't ship with telemetry or tracking; you shouldn't have to modify the default to make the software behave the way it was intended to. This is runtime-level coercion: you either consider it a feature, or you see it as a bug.

          Plus, if you know what OCSP is and care about it, chances are you don't use macOS anymore. Nobody who conscientiously objects to OCSP tracking should be assenting to the rest of macOS and its perverse sense of "security".

    • ProllyInfamous 1 year ago

      Be careful doing this without enabling your own key/certificate validation. We deployed SSH to a few dozen OS X machines in a lab (for backend maintenance) without certificates for the handshake (i.e. password-based), and the next morning several machines had been compromised.

      We were using complex passwords; this was when 10.6 was latest OS, so perhaps security is better now with simple password-based SSH — but I would never take such a risk again.

      • fsckboy 1 year ago

        I find it hard to believe that your complex passwords got cracked overnight. Were your machines on the internet with huge bandwidth, not behind a firewall? An inside job, perhaps, with access to /etc/shadow and passwords shared across machines? Perhaps user accounts got compromised by other users? That's a phenomenal number of login attempts to come in across the internet.
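
        A back-of-the-envelope check, under assumptions I'm making up here (10-character mixed-case alphanumeric passwords, a generous 1,000 online SSH guesses per second), shows an overnight brute force barely scratches the keyspace:

```python
# Rough numbers only; the password policy and guess rate are assumptions.
alphabet = 26 + 26 + 10          # lowercase, uppercase, digits
length = 10
keyspace = alphabet ** length    # ~8.4e17 candidate passwords

guesses_per_sec = 1_000          # generous for online SSH guessing
overnight = 10 * 3600            # a ten-hour night, in seconds
guesses = guesses_per_sec * overnight

print(f"keyspace:          {keyspace:.2e}")
print(f"overnight guesses: {guesses:.2e}")
print(f"fraction searched: {guesses / keyspace:.2e}")  # ~4e-11
```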

cyberpunk 1 year ago

How does one go about creating a little snitch rule to prevent these connections?

  • frankjr 1 year ago

    Paste the following on the All Rules view [link redacted]

    (Posting a link because HN formatting breaks the format).
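
    (Not the redacted rule itself, which I can't reproduce here, but a hypothetical sketch of what a deny rule might look like in Little Snitch's .lsrules JSON import format. The key names and the trustd path are my assumptions, so check them against the Little Snitch documentation before importing anything:)

```json
{
  "name": "Block Apple OCSP",
  "description": "Hypothetical sketch: deny trustd's OCSP lookups. Key names are best-effort guesses at the .lsrules schema.",
  "rules": [
    {
      "action": "deny",
      "process": "/usr/libexec/trustd",
      "remote-hosts": ["ocsp.apple.com", "ocsp2.apple.com"]
    }
  ]
}
```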

    • lapcat 1 year ago

      That's obsolete. macOS now uses ocsp2.apple.com.

      The process is "trustd".

      • frankjr 1 year ago

        Thanks. Does that depend on macOS version? Better to block both I guess.

        • lapcat 1 year ago

          > Does that depend on macOS version?

          Yes, though I couldn't say offhand when exactly it changed.

        • phi0 1 year ago

          I've checked my DNS logs and there hasn't been a single hit on ocsp.apple.com over the last year, but around 20-30 hits for ocsp2.apple.com per day per device (iPhone, Mac mini, MacBook).

          Just blocking ocsp2.apple.com is probably fine if you're running anything recent-ish.

          • JBiserkov 1 year ago

            Block ocsp3.apple.com while you're at it.

            And ocsp4. And 5. Block them all!

          • Wowfunhappy 1 year ago

            Does Little Snitch support regex? Perhaps it should be `ocsp\d*\.apple\.com`
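
            Whether Little Snitch accepts that exact syntax I don't know, but the pattern itself does what's intended; a quick check in Python's re dialect:

```python
import re

# Anchored so it matches the whole hostname, not a substring of
# something like "ocsp2.apple.com.evil.example".
pattern = re.compile(r"^ocsp\d*\.apple\.com$")

hosts = [
    "ocsp.apple.com",
    "ocsp2.apple.com",
    "ocsp3.apple.com",
    "ocsp2.apple.com.evil.example",
    "api.apple-cloudkit.com",
]
print([h for h in hosts if pattern.match(h)])
# ['ocsp.apple.com', 'ocsp2.apple.com', 'ocsp3.apple.com']
```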

  • FireBeyond 1 year ago

    Well, let's not forget the little dance Apple did to make their programs and the system be able to bypass packet filters like Little Snitch...

    ... and then claim it was necessary for "updates and upgrades". Not sure why TextEdit.app needed a kernel network extension for "updates and upgrades".

    They also denied it was possible until it was provably demonstrated that they did.

    I like Apple stuff; everything I use is Apple. But too many see them as infallible, or resort to whataboutism for missteps like these.

    • flemhans 1 year ago

      Oddly enough, UTM VMs also bypass Little Snitch, making me wonder if you could always just bypass LS by running a VM in your evil app.

      • wpm 1 year ago

        Will it bypass it regardless of the virtualization engine, or only if you're using the Hypervisor framework for macOS VMs?

        • frankjr 1 year ago

          Depends on the network adapter in use. E.g. a bridged adapter will bypass it; an emulated VLAN won't.

          • lloeki 1 year ago

            Ah that clarifies things, I was confused for a moment.

            Well, the goal of a bridged network adapter for VMs is to make the VM behave as if it were plugged directly into the network, independently of the host, so it makes sense for it not to be affected by a host firewall.

            > running a VM in your evil app.

            IIRC back then, creating a bridged net adapter required a special entitlement (special as in you can't get it without explicitly asking Apple for it and the dev cert for it has additional entries, like for kernel extensions). Dunno if that's still the case.

  • ProllyInfamous 1 year ago

    Little Snitch WAS NEVER A GOOD PRODUCT if you're worried about DNS leaks.

    Fact: Little Snitch resolves the IP address (i.e. does the domain-to-IP lookup) before the Deny/Allow dialog ever appears onscreen. Only once you press "Allow" does Little Snitch initiate the connection to the already-resolved IP ADDRESS.

    • maeil 1 year ago

      Then do you have a suggested alternative?

      • ProllyInfamous 1 year ago

        /r/PiHole

        I run four, locally [50+ users].

        Our local DNS setup issues x.x.x.2 as the default lookup server (e.g. a minimal PageAd blocklist, Cloudflare upstream), for maximal client compatibility.

        Users can thereafter opt into stricter levels of blocking by manually incrementing the DNS IP (e.g. x.x.x.3 for stricter blocking, whereas the default x.2 only blocks seven very common trackers), at the host level.

        They can also manually enter x.x.x.1 and thereafter have no DNS-restrictions [this is the router itself, which each x.x.x.N itself resolves to, via our ISP's own DNS serverlist].

        Our x.x.x.5 is top-notch white-page browsing, but many applications won't work (e.g. banking; although surprisingly YouTube still works, though you do have to watch pre-roll ads).

        I have one user who thereafter has his own local subnet/PiHole, although you can hop up to PiHoles, as long as the subnet is a child or sibling of that DNS resolver.

hansvm 1 year ago

Can you get around that nonsense by turning off wireless radios before launching apps?

  • wkat4242 1 year ago

    Yes you can. If there's no network it will skip the check.

    It's not very practical, though. Better to block the address with Little Snitch or a hosts file.
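
    A hosts-file version would look something like this (hostnames as discussed in this thread; mapping both address families, since a name with both A and AAAA records can otherwise still connect over the unmapped one):

```
# /etc/hosts -- blackhole Apple's OCSP responders
0.0.0.0  ocsp.apple.com
0.0.0.0  ocsp2.apple.com
::       ocsp.apple.com
::       ocsp2.apple.com
```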

    • lightedman 1 year ago

      HOSTS hasn't worked very well for a long time, the OS often ignores it.

      • wkat4242 1 year ago

        Oh ok thanks! I didn't know that, I've used it for other stuff but mainly third party software. I've never tried to block this.

      • latexr 1 year ago

        Do you have a source? I regularly use /etc/hosts and never saw any inkling of it being ignored, but do see plenty of cases that confirm it is not.

        • shakna 1 year ago

          Safari, when using the "iCloud Private Relay", does ignore the hosts file. Not obvious to everyone, but does kinda make sense.

          • latexr 1 year ago

            I just tested this, and it’s even worse than your description. Safari ignores /etc/hosts even when Private Relay is off.

            • jshier 1 year ago

              I think it's the "first to connect" logic between IPv4 and IPv6. I saw an article recently saying that when the hostname being connected to has both an IPv4 and an IPv6 address, the hosts file doesn't work unless you map both; otherwise the unmapped address still connects.

              • lightedman 1 year ago

                That doesn't explain this happening on systems with 2009, Windows 7-era hardware that is only IPv4 capable, like one of my systems. I've noticed HOSTS being ignored for many, many years.

        • lightedman 1 year ago

          I totally blocked the internet in HOSTs. Blackholed everything to 127.0.0.1

          Guess what I'm doing right now? Talking to you on HN.

boffinAudio 1 year ago

I feel the same way about this as I do about the whole NSA clusterfuck: if I had access to my own data and could do what I wanted with it, I'd be fine with it.

minkles 1 year ago

Ok I understand the technical considerations here. But really what is the risk surface for me here as a dumb end user who uses apps from the store and a few things off homebrew and not a lot else? I mean I've got a large pile of Apple crap sitting here. Is this even remotely worrisome enough to shift it and move to something else? The CSAM thing probably was. This? I don't know.

(I could probably do everything I need to do on Linux - I just don't want to)

  • odo1242 1 year ago

    tl;dr: Apple can see what apps (or at least which developers' apps) are installed at a specific IP address through OCSP. The traffic used to be unencrypted and therefore visible to anyone on the network path; it is now visible only to Apple. Apple says they don't keep logs, but theoretically they could if they wanted to.

    • chatmasta 1 year ago

      IIRC this request happened every time you open the app, not just at install time. So Apple has a log of (IP, AppID, TimeOpened).

saagarjha 1 year ago

It's really quite unfortunate how much of Apple software is designed around "privacy is when you trust Apple" :/

  • flarex 1 year ago

    Not really; they are moving toward homomorphic encryption, where the entire query and its processing are encrypted and Apple has no knowledge of what you actually requested.

    • walterbell 1 year ago

      Following through on a public privacy promise does not require R&D.

    • TimSchumann 1 year ago

      I was unaware that there exists a fully homomorphic encryption scheme with the right trade-offs between security and computational effort to make this economically viable for even small to moderate workloads.

      I’ve always thought it was either far too time or far too space intensive to be practical.

      Do you have sources on this, either from Apple or academic papers of the scheme they’re planning on using?

      • timenova 1 year ago

        They posted about this recently [0][1]. They are using Homomorphic Encryption in iOS 18 for Live Caller ID Lookups.

        [0] https://www.swift.org/blog/announcing-swift-homomorphic-encr...

        [1] https://news.ycombinator.com/item?id=41111129

        • rpdillon 1 year ago

          I posted about this above, a little after you did. Reading the article, I'm unable to determine whether this has any practical utility outside of niche applications, or whether it has the potential to be broadly useful. Has anyone who has reviewed the SDK formed an opinion?

          • flarex 1 year ago

            Homomorphic encryption is broadly useful, and in fact should be ubiquitous for remote computation that would otherwise leak private data (not to comment specifically on Apple's implementation). They did open-source it, though, which suggests they want others to follow.

            • saagarjha 1 year ago

              Can you point to other ways this is used or is intended to be used?

      • j2kun 1 year ago

        It's useful for situations that would otherwise be illegal, so the trade-offs are less relevant.

    • rpdillon 1 year ago

      It's completely unclear how much they're moving into homomorphic encryption. The only resource I've been able to find about it is an announcement from 30 July saying that they can now do caller ID lookups using homomorphic encryption, along with an SDK that developers can use to leverage it. But the announcement is so vague that it's entirely unclear how much of this can actually be used for practical workloads. And the idea that they're going to go all in on homomorphic encryption is speculative, based on what Apple has revealed so far.

      That's notable, as we're discussing a case where Apple said they would do something, and then not only didn't do it, but went out of their way to pretend that they never said they would.

      • flarex 1 year ago

        I'm not aware of any other company of Apple's size (or anywhere approaching it) that has been as committed to privacy tech. Of course they are not perfect and sometimes get it wrong, but they constantly release new technologies that further our privacy. Who else does it better?

        • 7jjjjjjj 1 year ago

          These companies shouldn't be graded on a curve. Everyone knows Microsoft is crap for privacy. But Apple has their reality distortion field, and it's important to show people that their privacy promises are BS.

          • flarex 1 year ago

            Okay, but in an evolutionary sense, which company should we be supporting: the one that is at least somewhat moving toward privacy, or the ten others that don't give a shit? Which one should survive? Would you like to see companies copy Apple's privacy approach, or Facebook's "dumb fucks" approach?

        • makeitdouble 1 year ago

          It comes down to what you identify as privacy. Apple is committed to not giving your data to any other company and keeping it protected in their ecosystem. They'll sell access to you for ads, but only expose your cohort to the advertiser.

          From that lens, Google is also committed to never giving your personal data (think Gmail content, Maps behaviour, pins, etc.) to other companies and keeping it all in their ecosystem, for themselves only. Your data is their key advantage, the base of the ad empire, and they won't let another company run away with it.

          If we call Apple privacy-focused, Google also fits the bill; the question just comes down to whether we see Apple or Google as part of our intimate circle, within our private life. I assume you do for Apple but not for Google.

          • flarex 1 year ago

            No serious person could think that Google is a privacy-focused company. Their entire business is founded on knowing everything about their users. It's an ad company. They need user data to function, and they will never release tech that compromises their business. Just look at the direction of ad blocking and Chrome to see where they are headed.

            • makeitdouble 1 year ago

              The Apple side is similar: their entire current business is to middleman your relationship with other companies. You buy Apple products, purchase and subscribe to apps and services from the App Store, use Apple Cloud, etc.

              They need you in their ecosystem, the same way Google needs you in theirs.

              And I totally agree with you: I wouldn't call Google privacy-focused, and I don't call Apple privacy-focused either, even as they market it harder than anyone else.

              • flarex 1 year ago

                Google is a privacy antagonist. Apple is privacy focused because it suits their business. Apple has been privacy focused for years and has built several technologies to prove it. It's not hollow marketing to build privacy software.

                • talldayo 1 year ago

                  Google is a "privacy antagonist" with an Open Source OS you can build locally and modify to your heart's content? And Apple's been privacy focused, suing security researchers for copyright violation when they try to analyze iOS?

                  Methinks you're holding a double standard. Compared to Android and Linux, Apple's "promise" is no better than the one Microsoft offers Bitlocker customers.

                • makeitdouble 1 year ago

                  I don't define "privacy" as "only a single company has access to all my stuff", so to me Apple's claims are just marketing. I'd buy an argument about good security and some protection against other companies, just not "privacy".

      • csande17 1 year ago

        Also, the homomorphic encryption is a requirement for third-party caller ID providers, not Apple themselves. Apple's first-party "Contact Photos" caller ID feature operates primarily on the "trust Apple" security model AFAIK.

  • api 1 year ago

    Some of it is self-serving and some is explainable by the deep and pervasive tension between (security / privacy / autonomy), and usability.

    • saagarjha 1 year ago

      I generally do not impute malice to it, but in many cases it is the lazy way out.

  • gilgoomesh 1 year ago

    Ultimately, whoever controls operating system updates always has control over your privacy. Even if Apple did offer perfect privacy, there's no reason an update couldn't completely change that.

    Bluntly: if you don't trust your OS vendor, then you can't use OS updates. There are people in this category but it's a lot of work.

    Much easier to trust your OS vendor (at least to this extent).

    • fsflover 1 year ago

      > Ultimately, whoever controls operating system updates always has control over your privacy

      This is not true. FLOSS (and reproducible builds) allow the community to verify the code and significantly (though not fully) reduce the trust that must be placed in the OS.
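
      As a toy illustration of the verification idea (the byte strings here are obviously made up; real reproducible builds compare digests of independently compiled binaries):

```python
import hashlib

# If a deterministic build pipeline makes independent builds byte-identical,
# anyone can compare digests instead of trusting the vendor's binary.
vendor_build = b"bytes of the binary, as shipped by the vendor"
community_build = b"bytes of the binary, as shipped by the vendor"  # rebuilt from source

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

print(digest(vendor_build) == digest(community_build))  # True
```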

  • cageface 1 year ago

    Exactly. What does privacy even mean when your entire digital existence is owned by and visible to a single entity?

_twor 1 year ago

What are Apple up to? Very long term?

  • slackfan 1 year ago

    Selling that data.

    • IndySun 1 year ago

      >Selling that data.

      To whom? And to what aim? They're not exactly short of money.

      • fsflover 1 year ago

        To the highest bidder apparently.

        > They're not exactly short of money.

        Shareholders are not satisfied (and never will be).

Spivak 1 year ago

Honestly, I can imagine the preference being axed given that OCSP is the macOS antivirus, and I'm pretty sure I know the first thing any malicious software is gonna do if it can be turned off.

macOS preferences aren't magically locked away from the rest of the system: regular users can change their own user preferences, and root can change system preferences. An antivirus still has to work against an attacker who has root. That's also why you can't block certain apps/domains from the firewall.

You could put the preference in recovery mode along with disabling SIP and I think that would accomplish everyone's goals.

  • lapcat 1 year ago

    > OCSP is the macOS antivirus

    It's not. There are multiple layers of security, including notarization and XProtect.

    > I'm pretty sure I know what the first thing any malicious software is gonna do.

    What?

    > macOS preferences aren't magically locked away from the rest of system, regular users can change their own user preferences, and root can change system preferences. An antivirus has to still work against an attacker who has root.

    You sound confused about admin vs. root. Anyway, if you have a local attacker running on your Mac, then it's already too late for OCSP.

    • Spivak 1 year ago

      Those aren't so much layers as different parts, and OCSP is how notarization actually provides security: the developer cert is what's being checked to run the app.

      This scheme isn't really designed to prevent attacks, so to speak; it's to stop malicious software on everyone's systems all at once. You can say that theoretically it's too late, because the software is already doing what it's doing and you should consider the machine compromised. But the rubber meets the road with regular users who aren't going to wipe their machines in response to malware, and this lets you purge it.

      I don't think I'm confused about admin vs. root. I'm talking about the system administrator user, the "oops, I ran malware with sudo" user, uid 0, the user who is constrained only by the kernel via SIP. But yeah, admin is the more common case for regular users, and I'm not sure what point you're making: malware can get admin, and if you put the preference somewhere changeable by admin, then welp. You could put the preference in recovery mode, same as SIP; I think that would be fine.

      • lapcat 1 year ago

        > Those aren't so much layers as the different parts

        Mkay.

        > OCSP is how notarization actually works, that's what's being checked to validate the notarization.

        No, you are misinformed. OCSP is checked by the trustd process on ocsp2.apple.com, whereas notarization is checked by the syspolicyd process on api.apple-cloudkit.com.

        OCSP is simply checking whether the Developer ID certificate has been revoked. Notarization, on the other hand, requires uploading a build to Apple and receiving a special notarization ticket. The notarization ticket is either "stapled" to the app or downloaded from Apple when the app is first launched.

        • Spivak 1 year ago

          > Mkay

          Well, they're not. What would you call it? Windows Defender and Microsoft's code-signing requirement aren't super related. You could purge discovered malware with a signature scan, but that's possible to get around.

          I'm not sure I really grok the difference from a security perspective when the main thing with notarization is ensuring it's signed with your developer cert.

          I guess the Venn diagram isn't technically a circle, but is it not the case that the actual security of notarization is provided by OCSP? I suppose I could have phrased that bit better.

          Is there a case where a hypothetical notarization process that excludes that bit provides any real security? Because Apple "scanning it for malware" isn't going to be that different from XProtect.

          I'm really not sure what I did to get such an, idk hostile? response.

          • lapcat 1 year ago

            > the main thing with notarization is ensuring it's signed with your developer cert.

            It's not the main thing.

            > is it not that the actual security of notarization is provided by OCSP?

            No.

            I tried to explain the difference in my previous reply, but I'm not going to sit here and write an entire essay on the subject (though I could). The information is out there, for example on developer.apple.com. Or even on my own website. Inform yourself, or at least stop spouting falsehoods.

            • sroussey 1 year ago

              Oh god, don't send people to developer.apple.com. It's Apple's worst product.

              I’d rather you shill your own blog posts. Even without reading them, I know they are better. ;)

          • Vegenoid 1 year ago

            > is it not that the actual security of notarization is provided by OCSP?

            The security of notarization is provided by Apple's signature over the hashes of the executables in the app [0]. The hashes and signature are put into a "ticket". This ticket is stored on Apple's servers, and can also be "stapled" to the app. Gatekeeper (one of the macOS security systems) will prefer to fetch the ticket from Apple if possible, and fall back to the stapled ticket if available. Notarization is meant to guarantee that the code was sent to Apple and checked for malicious code.

            OCSP checks that the Apple Developer ID certificate used to sign the app hasn't been revoked.

            They are two separate checks done by the Gatekeeper system, which is meant to ensure that only trusted software runs on macOS. I believe it makes sense to call the OCSP check part of the Gatekeeper system, but this may be incorrect.

            [0]: https://forums.developer.apple.com/forums/thread/710738