In a laptop, I care far more about battery life and fan noise. However, Intel 12th gen only pulls off these impressive performance feats thanks to very high turbo boost clocks.
It's not that Intel is bad; it's still pretty great. But in the age of the M1 I wish Intel would release a processor with similar power consumption.
The AMD offerings are much closer to the M1, although they're not yet on TSMC 5nm so they don't quite match it. It's a historic event. Intel was #1 in laptops for an eternity. Even when the Pentium 4 was being outclassed by AMD, Intel still kept the crown in mobile. This time around they were overtaken first by AMD and then by Apple, and are now #3. It puts into perspective how huge a miss their process-node failure was.
Let's remember AMD was absolute garbage for like 10 years until around 2016, when they finally bounced back. And they were getting closer and closer to bankruptcy, having to flog the family silver: their fabs and even their campus. I'd say the Zen design and TSMC saved them (or Lisa Su and Keller).
Intel is in a better position financially than AMD was, and could catch up again. I tend to think it depends a lot on the organization and the people.
IC: A few people consider you 'The Father of Zen', do you think you'd ascribe to that position? Or should that go to somebody else?
JK: Perhaps one of the uncles. There were a lot of really great people on Zen. There was a methodology team that was worldwide, the SoC team was partly in Austin and partly in India, the floating-point cache was done in Colorado, the core execution front end was in Austin, the Arm front end was in Sunnyvale, and we had good technical leaders. I was in daily communication for a while with Suzanne Plummer and Steve Hale, who kind of built the front end of the Zen core, and the Colorado team. It was really good people. Mike Clark's a great architect, so we had a lot of fun, and success. Success has a lot of authors - failure has one. So that was a success. Then some teams stepped up - we moved Excavator to the Boston team, where they took over finishing the design and the physical stuff, Harry Fair and his guys did a great job on that. So there were some fairly stressful organizational changes that we did, going through that. The team all came together, so I think there was a lot of camaraderie in it. So I won't claim to be the ‘father’ - I was brought in, you know, as the instigator and the chief nudge, but part architect part transformational leader. That was fun.
I don't know the truth, but that was spoken like a truly humble leader. If I were forced to draw conclusions based on that one exchange alone, I would agree that Jim Keller was an instrumental part of the successful design and implementation of the Zen microarchitecture.
The whole interview was deeply inspiring. There is a very great deal to be learned from studying it. Listen carefully, and more than once! The audio is probably better than the text, at least first time through.
That and Microsoft has had an exclusivity agreement with Qualcomm for Windows on ARM. Once that agreement expires, I expect AMD to suddenly have some very interesting offerings.
After running benchmarks on Intel and AMD CPUs from that era with security mitigations enabled, it's clear Intel never really had a performance lead due to design. That alone rewrites the entire "state of AMD" narrative.
If Intel only had a competitive advantage during that era because they traded performance for security, was there ever really an advantage?
I personally don't think Gelsinger is going to be able to fix their culture issues. He's been more intent on pumping stock and asking for handouts than innovating, which is part of what got them where they are in the first place, for good or ill.
> they traded performance for security, was there ever really an advantage
Security is important, I would always prefer a secure product. But given design habits that elide security for performance, as well as a compromised supply chain, the only choice we have is to side with the devil(s) we know.
>Let’s remember AMD was absolute garbage for like 10 years until around 2016
After being ahead of Intel in the Pentium 4 era. The problem back then was marketing and the giant amount of (probably now-illegal) bundling that was rampant at the time.
That last part is important. While Intel had good fab engineers, they also relied on cutthroat business deals requiring exclusivity to get discounts and promotional support and there were constant shenanigans with things like compiler support.
AMD had to make a much better product to get manufacturers to consider it, not just being price competitive. It took the P4 train wreck to get the market to shift but Intel had enough lock-in to make it to the Core generation without losing too much market share because so many of the vendors had those contracts.
And, to be clear, even though the P4 was a disaster, Intel was STILL the market leader everywhere. They responded with the Core line only after years of AMD eating their lunch with the Athlon, Athlon XP, and Athlon 64 lines.
The Athlon Thunderbird was released in 2000, and from then until the Core 2 Duo release in 2006 AMD was the performance leader (certainly at a huge discount compared to Intel's offerings).
> AMD was absolute garbage for like 10 years until around 2016
Their top performance was lower than Intel in many cases, but it was certainly not "absolute garbage". For low- to mid-range they offered similar performance with usually a (much) lower price point. For normal "average person" usage they were often the better choice in terms of "bang for your buck".
The main reason we were selling Intels is because we got a rebate from Intel for every CPU we sold. Financially it didn't really matter if we sold a €400 computer, €500, or €600 computer: the profit margin for us was roughly identical (a little bit more, but not much), but with Intel you got a rebate back-channelled, so then it did matter.
Well okay. But they had the same problem Intel is in now: they pushed an inefficient, power-hungry chip to the limit. So it was cheaper, and performance was only somewhat lower, but it was much hotter and used a lot of power, so it was pretty useless for laptops.
On laptops AMD was indeed not very good; but their desktop CPUs – far more important in the 00s and early 10s than they are today – were pretty decent. That is not to say that Intel also didn't have good CPUs at the time, but (much) more expensive and for people just doing their browsing and occasional gaming on their desktop AMD was a solid (and cheaper!) choice IMO.
Intel is in an even better position now than last month, being handed $billions ($tens of billions?) of US tax dollars to fund building new fabs, which they would have had to build anyway. That will free up money to use in undercutting chip pricing, helping to bury AMD again.
Intel should realize at this point that their existential threat comes from TSMC, not AMD. AMD is one competitor, but TSMC is giving a huge boost to all of their competitors, particularly the ARM vendors who won’t just take some x86 market share but potentially destroy the entire x86 market at some point in the future.
The CPU is great. Its power envelope and performance feel amazing.
My biggest complaint is that there is a very limited selection of laptops with AMD's chips, especially the top tier. The one I bought required me to replace the WiFi/Bluetooth card (it came with a barely "supported" MediaTek one) to get Linux working properly.
I have loved Linux for a couple of decades. It was a difficult decision to migrate to the M1; Linux is so much more responsive and pliable. However, Asus encouraged me: after owning three of their laptops in the past 18 months, each of which began to fall apart within 3-12 months, I was done.
Unfortunately most 5900HS/6900HS laptops are not flagship quality. Typically they are lower-mid-range gaming units.
This is my first mac. The build quality is exceptional. I will just say I was expecting a better experience from an OS built by a trillion dollar company.
Similar position. The existence of Asahi Linux convinced me to buy my first Mac, since it increased the odds that my M1 Air would be usable (to me) long-term.
I haven't made the switch yet as my old Lenovo is still hanging on. Would you mind expanding on your gripes with MacOS, in particular the comment about responsiveness?
For me it is the little things, like switching windows. The Mac feels half a second slower than Linux, and that delay makes it feel less responsive. Opening up the terminal in iTerm2 is another one. For me these things just add up. I want my OS to get out of my way when I want to get work done.
I have turned off the genie effect and perhaps other animations, but my Ubuntu on my Lenovo still feels faster than the Mac.
Lenovo's is the only one I find with 16" 3840x2400 OLED, but just a 6850, $2800. Or with 6950 but just 1920x1200, $2635. None with both. Advantage, Radeon R6500M GPU. Doesn't say which wifi it has.
Zephyrus G14. I certainly wouldn’t swear off ASUS over it. After you upgrade the Wifi card (and, optionally, the SSD), it’s a great laptop; especially for Linux.
Me too. I just gave up and bought an ASUS H5600QM with a 5900HX (and 3840x2400 OLED, 32G socketed RAM, 2 NVMe sockets, and 3 physical touchpad buttons). If you act fast, you can still buy the version with Windows 10 and an RTX 3070 at $1999, instead of Windows 11 and an RTX 3060 for $254 more.
Build quality is excellent, but the first one died after 2 days: charge light wouldn't even go on. Waiting for #2. Wish me luck!
Intel on mobile platforms during the P4 era is really interesting. Their P4 mobile chips were often terrible for temps and power use, but their Centrino/Pentium M platform was excellent despite being based on the PIII. The P4's architecture became a dead end, while the Pentium M's essentially continued to be improved and scaled up, and became the Core series.
If they had tried to force the P4 mobiles instead of rethinking and retooling the PIII core with some of the P4's FSB innovations and other things, they probably wouldn't have had as competitive mobile processors, and maybe wouldn't have dethroned AMD a few years later.
My first laptop had one of those P4-based CPUs. They were super terrible chips. I don't think they were really even mobile chips. I think my battery was good for about 25-30 minutes tops. And Dell put in a soft cap of 768MB of memory. I was pretty pissed that none of this was ever noted when I bought the laptop.
The AMD offerings are still very far from the M1. Try comparing the 6800U to the now 2-year old M1 in Geekbench. The M2 widens the gap even more.
The Ryzen chips are already clocking over 3 GHz, so there isn't much more scaling from power left. That's why 5nm Zen4 probably won't move the needle too much.
From your link: "Even under demanding multi-threaded workloads, the M2 MacBook Air was not nearly as warm as the other laptops tested"
Keep in mind the 6850U you're comparing against has a fan too. Notebookcheck says it has a 40W turbo on the gen 2 model, and the gen 3 Intel version has a 28W sustained boost. An M2 uses around 20W.
Also, a lot of those benchmarks are showing all x86 CPUs with a 3x+ lead, which indicates software optimization problems. Geekbench 5 is optimized for both arm64 and x86 and shows the M2 ahead of the 6800U. There are also just broken results like the LC0 test where the M1 is substantially faster than the M2. Overall, your results don't seem very valid.
> From your link: "Even under demanding multi-threaded workloads, the M2 MacBook Air was not nearly as warm as the other laptops tested"
> Overall, your results don't seem very valid.
They're not "my" results, and thermals were not part of the discussion.
Regarding performance, if one wants to be rigorous (and that's why I didn't state "68xx is faster than Mx"), one can't make absolute claims either way, as Torvalds himself has criticized Geekbench.
I still think that describing the performance difference as "not very far" is a valid assessment.
During the Pentium 4 era my friends and I all got Mac laptops running PowerPC. Our laptops could actually go on our laps without burning us, and the PowerBook versions were fast enough to emulate Windows in a VM without feeling slow at all. My battery lasted quite a while for the time.
I agree, but then you have corporate compliance installing an insane amount of spyware on your device, so that a 2021 Dell XPS 13 with a 10th-gen Intel is out of battery within the hour when only using Word.
Fans spinning all the time and no way of disabling this madness.
Nearly the same for my Mac colleagues. They maybe get 2 hours out of their M1 MBPs unplugged.
Recently discovered that Infosec has now mandated that all laptops have sleep disabled (the reason given is to be able to do endpoint patching). ¯\_(ツ)_/¯
So I'm faced with having to log out and power off every day or suck up the cost of keeping the laptop on. /r/maliciouscompliance, here I come I suppose.
Yeah, at a previous company, the IT team were complaining about users disabling powernap or shutting down laptops as it meant they wouldn't patch overnight.
But these laptops were in bags, the last thing I wanted was a mac to start a house fire by heating up in a bag, so fuck that.
Probably because with work at home there’s no WOL.
We disabled sleep to improve battery life and address a personal safety issue. There are many failure scenarios between various builds of Windows 10 and shitty drivers. Devices often wake up in bags and spin up fans until dead.
Normal scenario is that the battery is dead in the morning. In one case, it melted the compartment material in a bag and was deemed a safety hazard.
So now instead all devices can keep their fans on and melt the compartment material in a bag, since nobody would assume the device is dumb enough not to sleep when closed. Much better.
>So I'm faced with having to log out and power off every day
When did this become a chore? I do this as standard and imo all people should. SSD Boot times these days are mostly under a minute and you get a fresh desktop every time.
EDIT: In fact I am one of the sysadmins whom so many here seem to despise. I have hybrid sleep and fast startup disabled by group policy on all my Windows domain networks. It reduces so many support tickets just by having machines actually rebooted properly every night. Without this, people contact IT for glitches and bugs in their system, and when you look at the system uptime it is years!
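For anyone wondering what that policy amounts to on the endpoint, here is a minimal sketch of the Windows settings such a GPO typically pushes (illustrative only; the registry value and powercfg call are the standard documented ones, but check them against your own environment before deploying anything):

    :: Disable fast startup (hybrid boot); usually deployed as a GPO registry preference item.
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled /t REG_DWORD /d 0 /f

    :: Turning hibernation off removes the hiberfile, which also kills hybrid sleep and fast startup.
    powercfg /hibernate off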
I don't care about boot times but I do care about keeping state.
In addition to the obvious browser I typically have an IDE, a few terminal windows, email, a few communication clients (due to internal and customer choices, not mine), and usually one or two database clients open at most times. Some of those restore their state nicely while others don't. Some of the terminal windows have e.g. a particular set of terminals at certain directories because I need to keep track of and work on multiple source repos at the same time.
Starting all of those from scratch every day would indeed be a chore. Perhaps more importantly, having my desktop be the way I left it also automatically reminds me what I was doing and helps continue from where I was.
A fresh desktop every morning would be a misfeature for me and would annoy and frustrate me immensely if forced on me.
I do of course reboot now and then for system updates etc., and I don't mind that.
There might be a decent rationale for forcing a reboot or disabling sleep on non-tech office devices if the staff are technically unskilled, but this is HN, so it's not much of a surprise if people aren't keen.
Seconded. My working state is crucial and keeps me on track. For someone with ADD, needing to overcome the inertia of setting up all my apps just the way I left them every single day would be catastrophic for my productivity. I refuse to reboot more than once a month, and only then if an OS update requires it.
The year is $CURRENT_YEAR, needing to reboot a system should be regarded as a last resort and a relic of the past. No matter how much effort you dump into making apps remember their state when they're closed, it will always be strictly inferior to just dumping RAM to disk.
Because I have Visual Studio open, a solution loaded and organized with all the files I need for this sprint, SSMS with all the servers I need connected, all my tabs open to see my Jira board, support tickets, etc., a Notepad doc I was using to take notes on a phone call, and my SSH key loaded into Pageant. If my computer reboots, I'm going to forget half of that when it starts back up and lose 30+ minutes trying to get set up again the next day.
edit: I would legitimately quit a company that made me reboot every single night.
In our network there are no developers. That would be a different use case which would require different policies.
All tools required by staff are online and autosave everything.
This in fact is another reason why proper restarts are enforced nightly, as most browsers with multiple tabs of heavy web apps start misbehaving very quickly. Forcing a nightly shutdown of browsers is an added bonus of proper shutdowns.
Be aware I am in a corporate environment, with all non technical users, and all services online.
Ok - that makes sense. Obviously being on the dev side of things, I've never worked for a company that didn't have devs (and admins, devops, etc). Sounds like it definitely works in your case.
> I have hybrid sleep and fast startup disabled by group policy
I guess I'll have to add this to my questions to ask at job interviews. Admins like you these days sure try to make our lives as horrible as possible. Corporate anti-virus, DNS interception and policies like these turn my machine into something I constantly want to throw out the window.
All because you are too lazy to ask a person to reboot if the machine has high uptime.
You've taken a lot of liberties and assumptions with my post here.
I don't do any of those things you mentioned; I only enforce a nightly shutdown.
Nothing to do with laziness. As other people have often mentioned on HN, in the real world of understaffed, underbudgeted corporate IT you need to do what you can to enforce things which make everybody's life easier.
It does not make everybody's lives easier; it makes your life easier by easing the support burden, at the cost of making the lives of those who have to live with those policies (marginally?) harder, or more annoying. It is quite a bold claim that those of us who do not shut down our laptops every night have no reason to do so, and that you know better than we do that shutting down would come at no additional cost to us.
It might very well be that it is preferable to the organization as a whole to sacrifice a bit of productivity everywhere for less burden on IT. But IMHO it should not be a decision which the IT department can make in isolation.
This is the part that people get wrong about all the ITIL metrics nonsense; they’re all designed by people who don’t have a background in science or experimentation and they never account for confounding factors. For instance, companies I’ve worked for in the past actually conducted rigorous studies of improving quality of life (as opposed to “fewer tickets==good”). They discovered that the number one cause of lower ticket volumes is Shadow IT! Because of course it is.
If you are disabling things by policy, it should be after a discourse with your users and a serious attempt at training. Being a GPO dictator is an anti-pattern.
Policies such as yours are tremendously user-hostile, and they are a reflection of the company's culture. I would probably not quit such a company, but I would certainly go rogue by either bringing my own equipment or reinstalling the OS. If reprimanded, then I would quit.
I don't think this means what you think it means. I'm not a dev, but a mechanical engineer, and having to shutdown nightly, and reopen things the next morning, would cost the company at least an hour of my work time every week.
Upthread you can see why. High-drama developers will tell you that logging out will cost the company $25k a year because some precious snowflake has to open Notepad and disrupt their flow as they eat breakfast.
The frontline IT guys aren’t able to deal with shit like that, so a draconian policy comes top down.
I treat each laptop reboot (regardless of reason) as an unexpected crash.
If the laptop crashes more than once a week, I simply won't use it. If I worked at your company, I'd just BYOD and keep your garbage laptop in a drawer, only booting it for compliance crap (and certainly not leaving it on long enough to download updates).
I've actually done this at a previous job or two. It was fine. Both (large behemoth) companies ended up cratering, at least partially due to chasing away competent employees with boneheaded corporate policies.
I would. If there weren't the teeny bit of SSO that means I can't access any relevant work software on a non-endpoint-managed machine.
Office? Gitlab? Jira? Confluence? Any code at all? Adobe Experience Cloud? Any Google Service? Adobe Creative Cloud? Our Time and Expenses Tooling?
All locked behind SSO with EPM enforced. Additionally nearly all internal resources are only accessible via VPN. And guess what - only usable on EPM devices.
When I started I received a device with an encrypted SSD and me as root. After being acquired by big corp we are now in compliance world. Parts of that would have come regardless of big corp, due to client requirements.
Because it's not just system boot time that matters. After you do that you then have to launch half a dozen to a dozen applications that all have varying startup times, that will all randomly draw focus during their startup, and some will require you to do logins during all of that.
Irony being, one of the biggest benefits for me of the M1's power consumption is that it can run quiet and smooth even with all the corporate spyware on there. It can even run MS Teams alongside that, and Docker too, which also permanently consumes 100% of one core, while still staying completely silent.
It's crazy to think you have these opposing teams of engineers, the Apple ones working away to optimise like crazy to reduce power consumption and the compliance software teams thinking up new ways to profligately use more.
Sure, but running Docker and Teams will put an M1's CPU at 70 °C when a similar x86 Linux box would run it at ~40 °C. Maybe Apple's fan curves are to blame here, but I much prefer a Linux laptop for local dev.
unfortunately it does it even if I kill all the containers. It's a well known issue and they have been circling around with attempted fixes, regressions etc for a long time [1]
2 hours on a M1 is beyond corporate garbage. It's like they try to use as much power as possible, perhaps more than one process is stuck in an infinite loop.
Out of curiosity, and aside from Teams which I'm already sadly familiar with, what software is it that you're talking about? Company I work for was just bought by some megacorp, but I'm still using my own 2019 13" MBP for now.
My company has some McAfee and Palo Alto stuff, and the Cyvera cloud backup software which scans every file every 4 hours but then triggers the other software because of the access attempts. Some of these are suites, so it's 3-4 different things (firewall, DLP, antivirus, ...).
Fuck this company, one of my wife’s jobs is BYOD but requires GlobalProtect for their VPN. After the software has been running for a few hours, even disconnected, it just starts chewing CPU cycles and grinds her M1 MacBook to a halt.
The only way to terminate it is via `defaults write` command in the terminal. It’s basically malware.
Sometimes you can work around it and just use the native VPN or TunnelBlick. Did this at a previous corporate gig where the software they used wasn't even available for macs. Gotta be lucky though. By this I mean that usually VPN software just has a default config for some VPN protocol, and if you can find out what that is, you might be able to input the same config and credentials into the TunnelBlick or network settings
I have admin on my work Mac, but I'm not sure how to turn off the garbage. It's pretty annoying and definitely cuts down on my battery life significantly. Between that and Teams being a battery hog I might only get three hours out of what should be a whole-day machine.
Note that Cinebench R23 (the main CPU benchmark we use) runs for 10 minutes, so it’s a measure of sustained performance. Boost (PL2) is typically 30 seconds or less with a long period after that of staying at the PL1 limit.
Also note that Cinebench R23 is a terrible general-purpose CPU benchmark. It uses Intel's Embree engine, which is hand-optimized for x86. It heavily favors CPUs with many slow cores, even though most people will benefit from CPUs with fewer, faster cores.
Cinebench is a great benchmark if you use Cinema4D, which I assume 99.99% of the people buying these laptops won't use. Cinema4D is a niche of a niche.
Geekbench is far more representative of what kind of performance you can expect from a CPU.
Benchmark the hardware doing 3D rendering. Which is a pretty niche use case for most people that doesn’t correlate well with more common cpu-intensive tasks like gaming or video editing.
To clarify, Cinebench correlates poorly with gaming, office software, web browsing, and video editing. Those are what the vast majority of people buying laptops will use it for.
For people that code, it also correlates poorly with parallel code compilation.
Thanks for your input, this isn't said enough. So many CPU benchmarks aren't effective at evaluating the general use case and yet are held up as the gold standard.
I agree that 12th gen can sustain good performance given decent cooling. However, the issue is still heat.
It could be fun to have a `lap benchmark`. That is: can you keep a laptop on your lap while it runs Cinebench R23 for 10 minutes? With TB disabled, I can.
The wild thing is, your benchmarks with turbo boost disabled (GB5 single: 617 / multi: 5163) land in the same ballpark, at least on the multi-core side, as the benchmarks of Lenovo's new ARM laptop[1], the ThinkPad X13s (GB5 single: 1118 / multi: 5776). True, Windows on ARM will likely keep on being a pain for some time, but Linux support seems to be coming[2], and for those of us who target Linux ARM development anyway, this is one of the first performant alternatives to Apple's ARM MacBooks. Plus it has the killer features: no fan and long battery life.
Another interesting ARM system is Nvidia's Jetson AGX Orin DevKit, which clocks in at (GB5 single: 763 / multi: 7193) [3]. That system is Linux-native right now, but of course isn't a laptop form factor.
> However, Intel 12th gen is only able to pull these impressive performance feats due to very high turbo boost clocks.
Don't forget 12th gen Intel CPUs have a lot more cores than 11th gen Intel CPUs. That's where the significant benchmark improvements are coming from. Unfortunately though, these often don't reflect the real world use case. Having more cores isn't going to improve the day-to-day performance very much. The new efficiency cores probably improve battery life though
Maybe the new efficiency cores improve battery life over the 11th-gen Intel CPUs, but they're still far from the battery life of AMD or Apple Silicon CPUs.
I believe Intel 12th gen is not properly tweaked yet; currently the benchmarks show that battery performance is worse with 12th gen than with 11th gen. But if you 'limit' the chip and force the use of the efficiency cores, then you should be able to beat 11th gen.
You can usually use Intel's Extreme Tuning Utility (XTU) to scale down without disabling turbo boost. I use it to scale my ~4.8 GHz cores to 3.2 GHz, so they can run without fans on my small-form-factor PC. 3.2 GHz x 8 is enough to run most of my favorite games.
Without TB, I'm down to 8x 2.4, which is a bit sluggish.
You may get better results by leaving the clock speed settings alone and adjusting the long-term power limits to an acceptable level. That way you won't sacrifice peak performance (ie. latency, especially with only one or two cores in use) but will still get most of the power savings.
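On Linux you can experiment with the same idea without XTU, via the RAPL powercap sysfs interface. A minimal sketch, assuming an Intel CPU; the 15 W figure is just an example and the rapl domain numbering varies between machines:

    # Read the current long-term (PL1) package power limit, in microwatts.
    cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw

    # Cap sustained package power at ~15 W. The short-term limit (constraint_1, i.e. PL2)
    # is left alone, so brief single-core bursts keep their full turbo headroom.
    echo 15000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw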
I'll echo the point that turbo boost really does help at times. Often you have a single-threaded process running that benefits from a single core running at a higher clock rate for a short period of time, which improves the experience tremendously.
How is 12th gen compared to 10th gen, in your experience? My XPS 13 on a 10th gen i7 just becomes absolute syrup on battery, even if I put it in high performance mode.
I have an XPS 17 with 12th gen and it drains around 50% battery per hour on a zoom call.
edit: My laptop has a i7-12700H, which uses more power than the cpu in the framework laptop.
More thoughts: The laptop is pretty much always plugged in. When I'm out I carry a little 60W charger with me.
However for other reasons I wouldn't recommend the XPS 17. I have a Lenovo legion 7 at home that I'm more happy with (cheaper, better performance, lots of ports, etc)
The M1/M2 laptops are so above and beyond anything from AMD and Intel that it’s almost a joke. If Apple was really smart and had enough supply, it could drop the price down a couple of hundred dollars and completely take over the laptop market.
Price isn't the thing that keeps most people I know away from Macbooks (after all, it's the employers/business expense). I'm sure many people would like an M1 machine for the efficiency but the software is simply not ideal.
If they invested in contributing Linux drivers they'd probably be able to take over the market for developers. Asahi is slowly improving, but it doesn't feel ready yet to be someone's sole daily machine.
Ultimately my M1 MBP sits on the shelf collecting dust, except for occasional testing. My smaller and lighter Intel laptop already gets 8 hours of battery life which is more than enough for me, and has perfect Linux support.
As someone who uses Linux daily and generally dislikes macOS, I got all my Linux tools up and running on macOS, Docker runs well and many Linux things I run on an external cluster. So I am content.
The macOS UI is a bit annoying, and the fact that you have to install tiny third-party apps just to have separate scrolling directions for the trackpad and mouse is not ideal, but yeah, it's all dwarfed by the fact that I finally can't hear any fans and performance is just as good as when plugged in.
I can't remember the last time I could have a laptop on the sofa without having to worry about the charger or my lap getting uncomfortably hot.
I just checked on my work Macbook, this is still correct and utterly stupid behaviour. 'Natural scroll direction' is global for all pointer devices. Internal, external trackpad and mouse.
I've never used Linux daily, always macOS or OS X, but probably all the same day-to-day tooling, and the M series will be a no-brainer upgrade when it comes time. I would like some more colours that aren't super fingerprinty in the Pro line though. Pretty bored of silver and grey.
> The M1/M2 laptops are so above and beyond anything from AMD and Intel that it’s almost a joke.
This may have been true when the M1 was just released, but the gap both in performance and power consumption is shrinking. And that's with the competition still being one process node behind.
I assume the M1/M2 performance versions are much more expensive to produce than AMD processors (very large L1/L2 caches; compare Ryzen vs. Ryzen X3D performance, and that only increases the comparatively cheap L3 cache). Apple can afford this, I assume, with high-end laptops and the margins from owning the supply chain. Apple also has the benefit of not having to disclose it, and you can't buy an M1 on its own (?). I would be interested in the production costs of Ryzen vs. M1/M2 processors, any sources?
The rumored 15" MacBook Air may just do that. The base M2 chip is plenty fast and with the larger case they could put an even larger battery in and push battery life to 30+ hours.
I recently turned down an M1 MBA for $1100 in favour of a 2015 MBA for $500. The keyboard is so dramatically better on the 2015 model, I couldn't believe it. The battery is great, seems to last about 10h on a charge while writing, browsing, and using Discord.
I don’t care how fast it is as long as it continues to have those mediocre low travel keyboards. They’re better than their worst keyboards from the 2018 era but they still aren’t good
I also have the XPS 17 with the i7-12700H and 4K screen. I'm able to get 8-10 hours of work (compiling Rust, several Docker containers running, at least a dozen tabs, etc...).
I'm on Arch Linux with a window manager and no desktop environment whatsoever. I also disabled the GPU. I wonder if there is something draining your battery in the background.
Thanks for the real-world numbers! This is good for my emotional state, as I've switched from Windows to a MB Pro solely because of the battery life and performance of the M1 Pro chip.
We all wish Intel could release a processor with similar power consumption to the M1, but it's not like they just overlooked battery life. x86 is just fundamentally not competitive with ARM on that front, unfortunately. The only advantage it seems to have from an architecture standpoint is raw performance.
Meanwhile, on servers, I'm starting to worry that as I work to flush performance issues out of the system, especially latency problems, I won't see the sort of perf numbers I'm expecting: machines running well below 50% CPU utilization are likely to dogleg the moment they hit thermal limits, and I have no way to predict that, short of maybe collecting CPU temperature telemetry. Not only might my changes not translate into 1:1 or even 1.2:1 gains, they might be zero gains or just raise the CPU utilization.
Magical CPU utilization also takes away one of the useful metrics for autoscaling servers. Thermal throttling opens the door to outsized fishtailing scenarios.
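At least the temperature telemetry itself is cheap to collect on Linux; a rough sketch (thermal zone names differ per platform, and lm-sensors is only needed for the second variant):

    # Print each thermal zone as "<type> <temp in C>"; the kernel reports millidegrees.
    for z in /sys/class/thermal/thermal_zone*; do
        printf '%s %s\n' "$(cat "$z/type")" "$(( $(cat "$z/temp") / 1000 ))"
    done

    # Or, with lm-sensors installed, scrape per-core temperatures for the metrics pipeline.
    sensors | grep -i core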
Tested an i7-1260P recently and it ran like a beast. What's interesting is that they have a discrete GPU now, the Intel Arc, which should take a lot of the load off the CPU.
The coolest thing isn't just the cpu upgrade, but that the whole mainboard is something like a standard. Other laptops have weird shaped boards just for that one model.
If this 'standard' takes off it could start an entire ecosystem of modular hardware. That's exciting. I'm hoping for a 'Framework surface', a 'Framework tv', a 'Framework 400', etc.
You're in a startup-biased forum. If you don't like it as a hobby, how about thinking how the openness is good for business? Your buddy can start a company making Framework-compatible tablets, or a mini server rack, or Framework-compatible keyboards (ortholinear layout please!), etc.
I'd pay very very good money for a Framework router or a Framework TV...
Imagine hardware that you control, that you can upgrade piecemeal when something gets out of date. Besides the eco friendliness, it just sounds... nice?
I went from saying, "if only they had an AMD" to getting a different machine with ARM, and now I'm saying "if only they had ARM".
I never realized how much heat, vibration, and air blowers negatively affected me before. A Framework laptop with some type of ARM or ARM-like CPU could do a lot with the space savings on cooling.
I've played with an AMD 6800U laptop and I would say it's the ideal x86 laptop chip right now. Normal usage in the 12 W mode or light gaming in the 20 W mode was super impressive. Even though 12th-gen Intel chips have made great performance gains, Intel is still relying on unsustainably high turbo boosts to get the benchmark numbers.
I just recently stumbled across these two otherwise identical laptops.
Exactly. Not just 1 model but you can compare identical ThinkPad models as well and AMD versions have significantly better battery life. Not only that, even last year's identical models with 11th Gen Intel CPUs have better battery life than their new 12th Gen models. So Intel's power efficiency actually decreased compared to last year, even though they introduced the hybrid architecture.
In some instances Alder Lake CPUs slightly edge out Apple's CPUs in performance, but with a terrible battery life (~4 hours of real world battery life vs >10 hours on Apple laptops) and noisy fans.
AMD on the other hand seems to have focused on getting a good balance this year. 6800U models have significant performance improvements over the last gen while improving the battery life as well.
I am thinking of buying a new laptop this year, but I am waiting for the AMD models. For anyone interested, these are the models I am waiting for:
There are data sheets and then there is real life. People were raving about AMD laptops before. I got a ThinkPad T14 AMD with a 4750U. The stated psref battery life is MobileMark 2014: 14 hr, MobileMark 2018: 11.5 hr. In practice that was more like 7 hours on Windows (with a mild workload) and 3 hours on Linux. The CPU performance was merely ok and I could get the fans blowing quite quickly.
Life with the laptop was quite miserable. There were many other paper cuts and despite being Linux-certified, I used Windows with WSL most of the time since basic things like suspend would at least work relatively reliably.
I sold the T14 after 7 months (thank deity that there was a chip shortage, so I could recoup quite a lot of the 1400 Euro I paid for the T14), got a MacBook Air M1 and didn't look back. It's a world of difference.
That does not match my own experience with a very similar t14s equipped with same CPU. I get much better battery life than 3 hours on Linux even with moderate workload (compiling, browsing, etc.). I often use it on the go and it’s been pretty great.
Lenovo ThinkPad T14 AMD Gen 1 here.
My experience is totally different, using Fedora 36, everything works!
Battery life I have no exact numbers, but definitely way more than 3 hours.
Within a year the kernel should support proper Power management on these new AMD systems. I have a year old 5800H that went from 3 hours to 11 hours battery life in typical use, just due to kernel version updates. It's like a new system.
It is such a shame that certain major distros don't ship latest kernels. I understand also shipping LTS kernels for customers/users that want the stability, but the default for desktops and install media should be a really recent kernel.
I'm also on a fully-specced T14 Gen 1 AMD with the same CPU (4750u) and I get a full day's use out of it under Fedora 36. And CPU performance is way more than OK for my use-cases. I'm curious what you were doing with it.
Something was screwed up, very likely on Windows, definitely on Linux. My 5800H gets 9 hours of battery life on Linux with KDE (full glass blur effect, IntelliJ, Firefox, etc...), without any undervolting or underclocking, at ~80% battery, 1440p60 15.6 inch screen. And about a third of the power consumption comes from the desktop class SSD I put into it.
Since this is Hacker News and a lot of people here run Linux, I want to remind everybody to hold your horses until you've tested Linux on these machines.
The machine that I'm currently using comes with a Ryzen 6800H CPU and LPDDR5-6400 RAM, made by Lenovo. On Linux, the builtin keyboard doesn't work because of IRQ problems (see [1]; a fix is also available at [2]), and it constantly spams out "mce: [Hardware Error]: Machine check events logged" messages.
If you read the code in [2], the patch basically disables IRQ override for every new Ryzen machines (`boot_cpu_has(X86_FEATURE_ZEN)`). Based on that, I assume every new Ryzen CPU has the same problem(???)
Edit: wait... blaming it on the CPU might be unjust, it's more like a kernel adoption problem.
Luckily, compiling the Linux kernel with 16 threads is not too big of a struggle; you can just apply the patch manually every time the kernel updates :-) :-| :-(
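For anyone else stuck on a pre-6.0 kernel, the routine is roughly the following (a sketch; the source directory, patch file name, and -j16 are placeholders for my setup):

    # Apply the backported ACPI/IRQ fix to matching kernel sources and rebuild with the distro config.
    cd linux-5.18.16
    patch -p1 < ../acpi-skip-irq-override-on-amd-zen.patch
    cp /boot/config-"$(uname -r)" .config
    make olddefconfig
    make -j16
    sudo make modules_install install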
> On Linux, the builtin keyboard don't work because of IRQ problems (See [1]
The exact problem is that an IRQ/ACPI workaround is not needed anymore on modern Zen platforms, and it (the workaround) now breaks compatibility. It's already in linux master, and will be released for v6.0
Commit message of the fix:
commit 9946e39fe8d0a5da9eb947d8e40a7ef204ba016e
Author: Chuanhong Guo <gch981213@gmail.com>
ACPI: resource: skip IRQ override on AMD Zen platforms
IRQ override isn't needed on modern AMD Zen systems.
There's an active low keyboard IRQ on AMD Ryzen 6000 and it will stay
this way on newer platforms. This IRQ override breaks keyboards for
almost all Ryzen 6000 laptops currently on the market.
Skip this IRQ override for all AMD Zen platforms because this IRQ
override is supposed to be a workaround for buggy ACPI DSDT and we can't
have a long list of all future AMD CPUs/Laptops in the kernel code.
If a device with buggy ACPI DSDT shows up, a separated list containing
just them should be created.
Some vendors may only 'test' as far as getting the compiler to produce something without errors. So, the first thing an end-user/consumer/enterprise customer can do when encountering a new platform is to run the platform through https://uefi.org/testtools, in particular the FWTS/Firmware Test Suite and the SCT/ACPI Self-Certification Test, and hold the vendor accountable. Chances are the vendor already has a member on the board of the Unified Extensible Firmware Interface Forum.
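For reference, running FWTS on a Debian/Ubuntu-based distro looks roughly like this (a sketch; the exact set of test names available depends on the fwts version you get):

    sudo apt install fwts
    # Run the default batch of firmware tests; results land in results.log in the current directory.
    sudo fwts
    # Or target just the ACPI table, ACPI method, and kernel-log checks mentioned above.
    sudo fwts acpitables method klog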
Here's a user report from back in April from a user running w/ 5.17.x on a 2022 Asus Zephyrus G14 w/ an 6800HS w/o keyboard issues so I don't think it affects all laptops running Ryzen 6000. https://www.reddit.com/r/linuxhardware/comments/u5p1rs/zephy...
I'm on a T14 Gen 1 AMD with Fedora, with the latest BIOS, for the past 2 years and haven't really experienced any issues. Perhaps I'm missing something here. Care to elaborate?
Thanks. The posts says all AMD-based ThinkPads are affected. And I can't confirm that with both of my T14 Gen 1s which I leave on S3 for multiple days. I haven't tried the S0-something yet, but will do. It looks like you did way more research than me, so I believe that it's a real issue. Perhaps it's down to individual component combination, like planar revision, battery, etc.?
If you don't mind, can you test Ubuntu or Fedora to see if the issue is present?
The reason I'm asking, according to a Lenovo talk from DebConf22 [0], the guy said it takes some time for patches to reach upstream, but with Fedora and Ubuntu they have some connections to shortcut some patches before they reach upstream.
I've already set up my production environment on this machine, so it's not really easy to test Ubuntu on it. Fedora, on the other hand, is the exact distro I'm using.
Sad news is, according to my test, the latest kernel Fedora 36 offered, which is kernel-5.18.16-200.fc36.x86_64, does not come with the keyboard (IRQ override) fix.
Another thing is the mce errors, or more specifically, errors similar to:
It was never fixed by either the patched or the unpatched kernel.
Of course, those were just the two most annoying ones. There are many smaller problems, such as: 1) the screen won't turn off after the timeout if an external display is plugged in via HDMI, 2) Linux S2 (suspend-to-RAM) never wakes up, 3) the builtin mic doesn't work, 4) the fingerprint scanner crashes when you put your finger on it.
I guess it takes time for those engineers in Lenovo to address those problems, and Lenovo is not a Linux-friendly company (they are more like a Linux-meh company).
Framework on the other hand, cares about Linux more. Sadly they don't operate in my country :(
I have been using a 6800U on Linux for the past two weeks. With kernel 5.18 almost everything is supported out of the box. The few issues I have:
- Kernel logs show that the GPU driver crashes from time to time. When it happens the screen freezes for a few seconds but recovers.
- The HEVC hardware decoder gives green artifacts on VLC using VA-API.
- It seems not to support advanced power management; on battery the CPU will happily boost to 4.7 GHz, which in my opinion makes little sense. AMD has been steadily pushing work related to power management, so I expect it to improve. As it stands, total power consumption on battery averages 5.1 W.
AMD does automatically switch to a "power saving" profile on battery that lowers power consumption significantly vs on charger, but if you want, you can tweak even further with a fantastic tool: https://github.com/FlyGoat/RyzenAdj - it's a simple CLI util so easy to set up w/ udev rules to run automatically. On my old Ryzen laptop, I have it set to run `ryzenadj -f 48` when I go on battery, which limits the temperature below my laptop's fan hysteresis for a very efficient and totally silent system, and `ryzenadj -f 95` to set it back, but you can also adjust most power and clock parameters (not everything works on Ryzen 6000 yet, but the basics should). The docs/wiki are also great and has lots of charts illustrating what each setting does: https://github.com/FlyGoat/RyzenAdj/wiki/Renoir-Tuning-Guide
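In case it saves anyone the search, the udev hook I use looks roughly like this (a template, not gospel: the ryzenadj path and the -f values are specific to my machine):

    # /etc/udev/rules.d/99-ryzenadj.rules
    # The power_supply "Mains" device reports online=0 on battery and online=1 on AC.
    SUBSYSTEM=="power_supply", ATTR{type}=="Mains", ATTR{online}=="0", RUN+="/usr/local/bin/ryzenadj -f 48"
    SUBSYSTEM=="power_supply", ATTR{type}=="Mains", ATTR{online}=="1", RUN+="/usr/local/bin/ryzenadj -f 95"

Reload with `sudo udevadm control --reload-rules` and it fires on every plug/unplug.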
The other tool you could check out (and use in conjunction with, or instead of, ryzenadj) is auto-cpufreq: https://github.com/AdnanHodzic/auto-cpufreq - it lets you switch the governor and turbo/clock speeds automatically based on battery state, if you're just looking for a simpler way to set that.
At least on most ThinkPads, the AMD versions lack Thunderbolt 3/4 / full-featured USB 4 (i.e. including TB), so my Thunderbolt docking station will probably not work. Nobody has confirmed to me what would work in such a setup. Will Lenovo/AMD provide a USB 4 upgrade via firmware?
Of course, the next question is the potentially asymmetric RAM on the ThinkPads with one channel soldered and one SO-DIMM slot, e.g. the 48 GB version. Will it use dual-channel up to ~32 GB and then single-channel? How will the GPU perform, e.g. will it use only one channel in such a configuration? So many questions...
Formally yes, but it isn't listed as supported in e.g. the ThinkPad T14 Gen 3 (AMD) specs; they are all USB 3.2 only. No TB, no USB 4. TB3 seems to be optional in the USB 4 spec, as it would need to support 40 Gbps speeds and not "only" 20 Gbps.
The 6800U models on the market just show that the integration by the OEM is far more important than the choice of CPU. The Asus Zenbook S 13 manages to hit 60 °C on its bottom case under sustained load, which is an impractical temperature. The Lenovo Yoga 9i, which on paper uses a hotter Intel CPU, runs cooler, quieter, and longer while also beating the Zenbook on benchmarks. So you really have to look at the entire package.
I don't doubt those benefits are real, but I recently bought a Samsung Galaxy Book with Intel 12th gen and integrated graphics, and it is actually fantastic. Super thin, lightweight, silent, and low heat, while still boasting 12 extremely fast cores. And it was very affordable. It is very weird for me to buy a laptop where I have almost no complaints... and Samsung did a splendid job with all the fit and finish. I'm sure a lot of it is in fact thanks to the non-CPU upgrades like DDR5, since CPU is rarely the bottleneck, but it is all the same to me.
This to me sounds like the experience people were praising the M1 chips for, and tbh Intel actually delivered for once. I say this as someone who had been solidly in the AMD camp for a long time. Thanks to competition, Intel now has to be good. Weird!
A 12th-gen machine with low heat is the exception rather than the norm, sadly. It must be possible, since there are a few machines out there that manage it, but most machines right now run quite hot and have low battery life (compared to 11th gen), especially the P-series in a thin-and-light form factor (I am writing this comment from a Lenovo X1 Carbon Gen 10 with a 1280P, which can't even match the Cinebench score of the frame.work).
I am able to get 4 or 5 hours of intensive programming done on battery power... running multiple IDEs, Android emulators, browsers, etc. I've never tried it with a normal usage pattern because I only use battery power for working.
I used to think this as well, but the all day battery life on M1 really made me realize how much of a mental tether outlets used to be. It seems pretty minor, but not worrying about getting a seat at the {coffee shop/airport/lounge} next to an outlet makes working on the go a much more attractive option now.
I guess so, but that's another thing you need to remember to charge. I feel like that shifts the mental tether from, "I hope there's an outlet", to, "I hope my power bank has enough juice".
I travel a lot. That’s going to be increasing by the end of the year as my wife and I live the digital nomad life flying across the US. The freedom of not having to worry about your battery life for a day of real world use is amazing. I’ve only had that over the years with an iPad before ARM Macs.
Not to mention your laptop not getting hot enough to boil eggs or sounding like a 747 when you open too many Chrome tabs.
The freedom of running Docker on a laptop without heating it to 60 °C is too tempting, unfortunately. My Macbook will get 2-3 hours of battery life over my Thinkpad, but my Thinkpad also doesn't burn my palms with regular use. It's a game of tradeoffs, but I rarely find myself reaching for the Macbook if I've got a desktop close by.
Maybe that's a skewed perspective though. I have yet to try the Pro/Max chips seriously (or the mobile 12th gen chips, incidentally), but I don't really find myself pining for them either. If my laptop is too limiting, I hop on Tailscale and dev on my desktop.
Man, the heating on my MacBook is insane. I have a Thinkpad with an onboard Nvidia card, beefy CPU, etc., and it can play graphics-intense games at 4K and still sit in my lap comfortably.
Meanwhile my MacBook from work gets risking-burns-hot from just screensharing on a Zoom call.
I am not sure how accurate this is - I have run my Macbook M1 in clamshell mode for the past year and have almost never heard the fans, and never even felt heat coming off of it, despite having Docker / JetBrains full IDEs open
Running a local dev environment tends to hit ~75 °C for me on Apple Silicon, I think across 4 or 5 containers. I've also run this same environment on my Thinkpad T460s (with a paltry i7-6600U), which settles out around 45-50 °C.
FWIW I get about the same battery life on Linux with an AMD laptop. It's not as power efficient under load but not having to run a VM for docker helps a lot.
When depending on solar it's pretty important to be low power; an M1 drawing 10-20 watts vs. something else drawing 60-80 can make a big difference to the batteries.
The reality distortion field of Apple fans is really getting out of hand. Portable laptops (i.e. not the desktop-substitute gaming machines) have not been using 60 W for years. In fact my X1 from 2016 typically shows between 6 and 15 W in powertop depending on load, and has been lasting >8h on Linux (it's less now as the battery has aged a bit).
I used to think like this. But after using an M1 Macbook Pro, I've changed my opinion. This is true all-day battery. You never have to think about battery anymore. It makes a big difference to my workflow, even though I'm literally 24 hours at home, never more than a few feet away from a wall socket.
I'm willing to put up with all the downsides of this laptop and the OS, just because of the battery life.
I just got an M1 Pro 14” 10c 32GB RAM and while the battery life is great compared to anything else I’ve used, it’s not as amazing as I would have hoped.
I get around 6 hours from 100% with a light Webpack dev workflow + Docker for Mac running in the background (arm64 postgres and redis containers).
Is that normal? powermetrics shows package power bouncing around 500mW - 10 watts when compiling, which would suggest I should be getting a lot more battery life. Is there a utility to reliably check power consumption of the whole system? Stats.app seems to think my system idles around 5 watts and goes to 40 watts when compiling which is much higher than the package power and my screen is only 1/3 brightness.
That is actually terrible. I put a new battery into my 2012 macbook and I get about that much battery life with similar usage (although my compute is done on a server).
> That is actually terrible. I put a new battery into my 2012 macbook and I get about that much battery life with similar usage (although my compute is done on a server).
My compute[1] is done locally only. My personal laptop is an HP Victus, it gets around 7 hours because I don't compute locally, which makes the M1 look okay in comparison.
I always wonder about these people who claim the M1 can last a full day without needing to be charged - they can't be developing on the thing, can they? If they're not doing computations on the thing, then the battery life is quite similar to all other non-gaming laptops in that price range.
[1] If by 'compute' you mean compiling code, running it, debugging it, etc
The P-series CPU this article is advertising is not the one you want if you are concerned about battery life. The P CPUs consume as much energy as is practical, in their quest to go faster. These CPUs use 28-64W when active, while their slower U-series counterparts use 15-55W. There was a Dell laptop on Notebookcheck yesterday that had a measured 13-hour battery life and a U-series CPU.
ETA: Comparable laptops reviewed by that site get 8hr runtime from a 71Wh battery and the P CPU, or 13 hours from a 60Wh battery and the U CPU.
> I never realized how much heat, vibration, air blowers negatively affected me before.
It's like a constant drilling sound that gets into your brain and makes thoughts flow like through mud in full sunlight.
For that reason, despite not liking Apple and not being a fan of macOS, I decided to buy an M1 MBP.
That laptop is a life-changing experience. You can even put it in low power mode without losing much performance, and it is absolutely silent.
Even if you run a prolonged heavy task and fans kick in, you will barely hear them (ha hope it won't get worse over time and that there will be a way to disconnect them if that is going to be a problem).
Oh, and no coil whine. Even when my XPS 15 miraculously doesn't have its fans on, you can be sure their noise is inconveniently replaced by distinct coil whine that is just as annoying as the fans.
I am probably going to buy another MacBook, this time the Air, just for leisure and learning.
I'm honestly impressed that so many people in these threads work in environments quiet enough that the fan is the largest source of annoyance. If I'm inside it's drowned out by my window AC unit. If I'm outside on the patio it's drowned out by my neighbor's window AC unit. If I take it to work then it's the HVAC vent over my desk that's the dominant source of noise, followed by the construction site across the street. Usually I use third-party tools to pin my fan at max speed when I'm doing anything, because it keeps the keyboard cooler.
My XPS didn't blow. My Intel MacBook Pros always did. As others have shown, heat management in Intel MacBook Pros was abysmal and not comparable to other Intel laptops. Yes, the M1 is a huge power-efficiency jump forward, and I would love affordable ARM desktop machines, but a lot of the Intel MacBook Pro problems came from very bad heat management.
[Edit] My understanding is that a lot of the power efficiency and performance comes from the large caches (not only on ARM), which are more efficient and faster than main memory; also see AMD X3D. Per core (AFAIK): Ryzen 7 L1/L2: 32/512 KB; M1 L1/L2: 192/3072 KB.
Really, the only one who has a chance, in my opinion, is Nvidia, since they have the experience and know-how to deliver chips on cutting-edge processes. They've worked on ARM cores in the past. It might take several years, but I'd be surprised if they don't try once Qualcomm's exclusive deal with Windows ends.
Not too many companies have a market cap over a trillion dollars, and if you don't then you can forget about competing on the upcoming process nodes that will require enormous capital.
Samsung's internal processor division seems to be sucking. Google is trying it half-heartedly like they do with nearly everything. Nvidia and AMD haven't really tried to compete for the desktop ARM market, and Intel certainly hasn't. Qualcomm has rolled their mobile chips for laptops, but they remain mobile chips in a larger form factor. So Apple is in a class of its own for desktop-grade ARM CPUs.
It’s not drive. Just investment. There is no demand for high performance desktop/laptop ARM CPUs outside Apple.
Chromebooks are mostly low price, and may lack the software to truly push the chips often (without dropping into dev mode).
Windows on ARM seems to be a near failure. Very little native software and x86(-64) emulation is far slower than Rosetta. May not matter, they seem to be low cost machines anyway.
Linux just isn’t a big enough consumer market to drive it.
Server seems to be the big front for ARM, but that’s a very different chip. So I’m under the impression ARM laptops (outside Apple) are just phone chips, which don’t compete on the same level.
I have absolutely no data or information to back this up, but I predict a RISC-V based Framework within the next 5 years. And I'll be one of the first customers to buy one.
Surely a first gen is unlikely to be low(er) power though? I can see this emerging, but power would have to lie in the space of "premature optimisation" risks. Intel, AMD & Arm are all multiple generations in, have understood their exposure to parasitic effects in their VLSI of choice, and have fabs at viable defect rates. RISC-V is still more played with in FPGA or emulation than otherwise. (Happy to be corrected, but I think this remains true.)
Yes I think frame.work is a good and reputable maker. They are indeed still young, but they've made and kept a number of big early promises, and have been consistently shipping. There are some bugs/quality concerns but I'm confident they'll work them out.
I've bought hardware before though from companies I'd never heard of, and either never received it or when I did it was much poorer quality than advertised or was so long after the order that I'd nearly forgotten about it. Would hate for that to happen with xcalibyte
Apple's M1 only worked because everything is in one chip. Very different from the goals of Framework. There aren't any vendors offering M1 competitors either.
Because there’s no evidence for the claim, and the evidence is mostly “Apple has created a huge and complex chip”?
And what does “Everything is in one chip” even mean? Because the memory certainly isn't: it's soldered onto the package, but it's not part of the chip itself; it just doesn't take additional room on the main board. And there are a bunch of other chips on the mainboard.
Finally, it’s pretty much just following mobile / phone chip SoC design, so any other manufacturer could do the same, if they wanted to create a giant and expensive SOC.
And I want to be really clear on the “giant and expensive” part: the M1 family is the sort of scale you usually see on giant workstation or server chips. The M1 Pro has 33.7 billion transistors - more than a Threadripper 5995WX, a $6500 64-core CPU. The M1 is just short of the 5950X’s transistor count (16 billion vs 19.2 billion), and the M2 is above it (~20 billion).
> And what does “Everything is in one chip” even mean?
I guess he's mainly thinking of the GPU, which isn't unique. But there aren't that many SoCs with that much power in a single chip. So it's close to competing with alternatives that use discrete GPUs, which do increase power consumption.
I believe it has an integrated flash controller too, which is very unusual for a laptop/desktop chip, no?
> Because the memory certainly isn't: it's soldered onto the package, but it's not part of the chip itself; it just doesn't take additional room on the main board. And there are a bunch of other chips on the mainboard.
It's on the package so it can be as close to the SoC as possible. That decreases the capacitance of the traces, which decreases power consumption.
It's not stacked on top of the SoC, which might have been even better (but harder to cool), but it's close.
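For anyone who wants the back-of-the-envelope version of why trace capacitance matters (this is the standard first-order CMOS switching-power approximation, not a figure from Apple or from this thread):

$$P_{\mathrm{dyn}} \approx \alpha \, C \, V_{\mathrm{dd}}^{2} \, f$$

where $\alpha$ is the switching activity, $C$ the driven capacitance, $V_{\mathrm{dd}}$ the signal swing, and $f$ the toggle rate - so shorter, lower-capacitance on-package traces directly cut the energy spent on every memory transaction.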
> Finally, it’s pretty much just following mobile / phone chip SoC design, so any other manufacturer could do the same, if they wanted to create a giant and expensive SOC.
Uh, yeah, anyone could copy the M1 for a laptop/desktop product. But they haven't exactly done that yet have they? That's kind of the point?
> the M1 family is the sort of scale you usually see on giant workstation or server chips
Yeah, which again, almost never packs the kind of functionality Apple does into the M1.
With that many transistors, on such an advanced process, you're going to have a lot of leakage currents, so Apple must have put an impressive amount of work into power management.
I mean, it's not just about being an SoC with lots of things packed within the chip or extremely close to the chip (memory). No. It's apparent that they've focused on efficiency across the entire design process. Even the choice of ARM factors into that (fewer transistors needed for instruction decoding). But I wouldn't say the original comment is completely wrong.
Even on the desktop processors, there are onboard lanes dedicated to the NVMe itself, and the entire processor (including NVMe) is capable of booting without a supporting chipset at all - the "X300 chipset" is actually not a chipset at all, it is just using the SOC itself without an external chipset, and you do not lose NVMe functionality.
Not sure if you are using some really weird meaning of "NVMe controller" that doesn't match what anyone else means?
I meant the controller that manages the NAND cells, I assumed it would be called NVMe controller. Essentially the IC that the "NVMe controller" in your image talks to.
That would normally be called a "flash controller" and yeah, of course that lives on the SSD.
(unless it doesn't - eMMC doesn't normally have a flash controller and you just do direct writes to the flash cells... as do some IOFusion PCIe cards that are just flash directly attached to PCIe with no controller. Sometimes that functionality is just software-based. Flash cards (eg microSD) usually also do not have a flash controller directly either.)
Anyway, it's true though that Apple does push the flash controller functionality into the SOC while AMD does not, Apple implements their SSD in a non-standard fashion, that's why it's not compatible with off-the-shelf drives. The flash is just flash on a board, I don't even think it's NVMe compatible at all either.
So if you want to be maximally pedantic... neither does Apple implement onboard NVMe controllers, just flash controllers ;)
FYI, all current flash card formats have the equivalent of an SSD controller, implementing a Flash Translation Layer. Exposing raw NAND was somewhat viable in the very early days when everything was using large SLC memory cells (see SmartMedia and xD-Picture Card), but no current format could remain usable for long without wear leveling. If you can use standard filesystems and access the drive or card from multiple operating systems, then it must be providing its own FTL rather than relying on the host software for wear leveling.
The above also applies to eMMC and UFS storage as found in devices like smartphones.
AMD processors provide PCIe lanes, some of which are intended for use with NVMe SSDs but nothing on the chip actually implements or understands NVMe and the lanes used for the SSD in a typical system design are still fully generic PCIe lanes.
> nothing on the chip actually implements or understands NVMe
pretty sure that's false as well... can an X300 system boot a NVMe drive? I'd assume yes. It does that with a UEFI which lives... where? Oh right, on the chip.
Most NVMe SSDs don't provide legacy option roms anymore either.
> It does that with a UEFI which lives... where? Oh right, on the chip.
No, UEFI is firmware stored on a flash chip on the motherboard, usually connected to the CPU through the LPC bus. CPUs do not have any nonvolatile storage onboard (unless you count eFuses).
Lowering the power of an Intel/AMD CPU enough to passively cool it would still yield way more performance than currently commercially available Arm CPUs. (Obviously excluding CPUs that aren't purchasable, like Apple's.)
After Apple's announcement of the M1, I feel like it is mandatory for a laptop test like this to discuss performance per watt and how long you can game on the 11th vs 12th gen processor.
I also feel like it is worth mentioning that the 12th gen laptop is priced ~$150 over the 11th gen.
If you're coding against an online build system and 11 last an extra hour (or whatever), it's a no-brainer to stick with the old one.
I'm generally not that concerned about power consumption. My laptop is almost never unplugged for more than a couple of hours. I understand that there are many others who don't use their computer like I do, but personally the only negative effect of power consumption is that it costs an extra $10 a year.
I think not everyone is going to be sensitive to this, but apart from battery life, it's super nice having a computer that's not spewing out heat or running the fans loudly.
Lots of people just don't game at all. Also, a lot of people like me prefer different gaming and work setup (and different rooms) in order to achieve better work-life balance. I build games on my M1 machine, just don't play on it.
How is M1 for game dev? I've been thinking of getting one for my own personal laptop, but have had a few concerns, mostly just due to the issues of building software on a totally different platform to the majority of your target users (which'll be Windows, x86). I used to have some compatibility worries but I'd guess they've gone away now.
If you are using a game engine like Unity you should not worry at all. If you need to access Windows specific APIs, of course it is a different story.
A MacBook is a perfect machine for developers: you can build for Android and iOS, and it has a great screen, a perfect touchpad, and - finally - a fixed keyboard and ports.
I use Godot at the moment, but have been experimenting with frameworks like Bevy and the like. I doubt I need immediate access to Windows-specific APIs, as I currently work on Fedora without any issues, though from what you're saying it sounds like you mean mobile game dev more than desktop.
To be honest the last time I used a mac was I think a decade ago so they're just so unknown to me at the moment, but everyone else seems to love them so I'm quite tempted
What are the specs of your machine?
If you've tried them, how have have other engines such as Unreal fared?
Do you primarily do 2D or 3D development?
How long are the build times?
It depends on the type of game you're shipping. Not every library has caught up to offer a mac arm binary. You still see some Rosetta problems here and there.
You can't dual boot so you can't natively run the game you're working on. The graphics are integrated so you can't test on a PC GPU. VR dev doesn't work at all.
Lots of small things, but if you also have a PC it's not so bad, unless you need some library that just doesn't work.
I game on my M1 Macs. Maybe not the latest and greatest 3D games, but for me that's OK - I have a PS5 for that. There are a ton of older ones that just work, and even many newer ones. Currently playing Baldur's Gate EE.
This is not true. I have a MacBook Pro 16-inch M1 Max and I play a ton of games. Both Dota 2 and World of Warcraft run without issues; you just need to do some tweaking with the settings and also cap the framerate so you don't end up thermal throttling, which introduces stuttering.
I can't believe these M1s throttle! My 2012-era computers never throttled because apparently Apple had better thermals back then. I had a more recent Intel Mac and I sent it back because of the throttling, biding my time for M1s to come down in price, but now I see that won't be worth it.
You can't play the latest and greatest (graphics-intensive games at least), but there are many titles that work just fine, and the performance of the x86 translation is surprisingly good. There are even a handful of games with native builds, like World of Warcraft and Disco Elysium.
As per most reviews, 12th gen is more power efficient, particularly for the use cases you have mentioned, as the P cores wouldn't be working most of the time and the E cores are much more power efficient.
I think for those who primarily work on AC and do not mind increased power draw, 12th Gen is a good choice. As a software developer, I appreciate a few seconds shaved off here and there. With good thermal design, 12th Gen laptops can be silent and cool (I've never heard my MSI GT77 spin up the fans, unless I tell it to with a dedicated hardware button). I understand that people have different use cases and for lot of customers laptops need to be lightweight. I'm okay with the bulk and weight.
Just ran the Phoronix Linux kernel compilation test (defconfig); I get 74.68 sec on my laptop, on par with a desktop Intel Core i7-12700K, but I can carry my "desktop" around. :) In comparison, the Apple M2 result is 416 sec and the AMD Ryzen 9 5900HX is 103 sec. It's not much, but it's compound gains if you compile and test a lot.
Do you think the very broad loose culture of macOS users (developers/creatives - coffee shops, nomadic etc) vs Windows (most corporate stuff - workstations, meetings etc) could be some factor in how they are optimised?
I'm rather disappointed in 12th Gen for mobile. I have an AMD 5800H Lenovo laptop and a 12700 Dell. The Dell stutters, fans spin, etc.; the Lenovo is just rock solid and fast. It never stutters or slows down, and the fans spin up only when playing games.
I wish Framework would do an AMD laptop. Linus put them in touch with AMD, but they've done nothing.
Building a new laptop from scratch is time-consuming and costly, so Framework may rightly be spending their time on optimising the product, reducing cost, and reaching more countries.
I would love to see them add an AMD (and even ARM) option, but right now I applaud them for being tight-lipped about any future developments. What would they gain by acknowledging that they’re entertaining such ideas? People postponing their purchases, endlessly asking for ETA updates, getting mad when there are delays, etc. They’re much better off saying nothing and doing the work in the background. I also would understand if they choose not to offer a second CPU platform at this point. From their standpoint, it’s hugely expensive and time consuming to develop, potentially doubles their QA and support costs, makes their supply chain and inventory story more complicated, and probably makes their relationship with Intel a little worse. And realistically, they won’t get double the sales from just having an AMD option. So I would be pleasantly surprised if they offer an AMD option within 3 years, but I’m not holding my breath.
The Lenovo Laptops I have are Legion 7 2021 (5800H / 3070), X1 Extreme Gen 1 (8th gen / 1050ti) vs Dell XPS 15 (12700 / 3050ti)
Both Lenovos are rock solid, not laggy or stuttery.
But I had a Dell XPS 15 (2020) model from work; I gave it back because idling in Windows made the fans spin...
I got a newer Dell XPS 15 (2022) for work (new job) and it's great, but it's stuttery and the fans spin under light loads. (The screen is amazing though, I love the screen.)
Yes the XPS series is a bit of a cooling nightmare but for example the Precision 3000/7000 series are quite a different story.
(We have around 10 assorted Dell laptops from all the different models and we don’t like the XPS much by comparison, and wouldn’t ever buy another. Aside from thermal issues, for whatever reason the XPS drivers are very beta and always crashing compared with the Precision series which is just rock solid)
What I don't like about Framework notebooks is that they seem to be mostly interested in speed, which is why they seem to have chosen these P-series CPUs. With a notebook, I'm much more interested in energy consumption (how long can I work off grid) and in emissions like fan noise and the like. Why no U-series CPUs?
The U and P series parts are actually the same dies on 12th Gen, but binned. If you set the Power Mode to Best Power Efficiency on Windows or the equivalent power limiting in Linux, you can make a P behave pretty similarly to what a U would (with the Intel DTT settings we used).
You are correct, I misremembered this. The packages are pin compatible, but there are different 2+8 and 6+8 dies this generation. It was 11th Gen where the 15W and 28W were the same die.
Restricting TDP is generally an option on laptops if you really want to. Not sure if Framework provides any out-of-the-box TDP switching, but software like ThrottleStop can do it.
Also, on Linux most distros have thermald installed, and you can change its config to limit the max temperature and de facto throttle power usage. Alternatively, you can modify the RAPL limits to cap power usage directly to taste: https://community.frame.work/t/thermal-management-on-linux-w...
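For anyone curious what the RAPL route looks like without extra tooling, here is a minimal sketch of the idea. It assumes Linux exposing the standard intel-rapl powercap sysfs interface and root privileges; the 15 W figure is purely illustrative, not a Framework or Intel recommendation.

```python
#!/usr/bin/env python3
"""Sketch: cap the CPU package power via the intel-rapl powercap sysfs
interface (assumes Linux, root, and that the intel-rapl driver is loaded)."""
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl/intel-rapl:0")

def set_package_power_limit(watts: float, constraint: int = 0) -> None:
    """Write the long-term (constraint 0) package power limit, in microwatts."""
    limit_file = RAPL / f"constraint_{constraint}_power_limit_uw"
    limit_file.write_text(str(int(watts * 1_000_000)))

if __name__ == "__main__":
    domain = (RAPL / "name").read_text().strip()  # usually "package-0"
    print(f"Capping {domain} to 15 W")
    set_package_power_limit(15.0)  # roughly U-series sustained territory
```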
Yeah. I tried locking some of my older CPUs to, say, 700MHz. Still entirely usable unless I started compiling something, say.
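Roughly the same trick, expressed as a sketch against the generic cpufreq sysfs interface (assumes Linux, root, and a scaling driver that honours scaling_max_freq; the 700 MHz number is just the figure from the comment above):

```python
#!/usr/bin/env python3
"""Sketch: clamp the maximum frequency of every cpufreq policy."""
from pathlib import Path

def cap_all_cores(khz: int) -> None:
    """scaling_max_freq is expressed in kHz."""
    for policy in Path("/sys/devices/system/cpu/cpufreq").glob("policy*"):
        (policy / "scaling_max_freq").write_text(str(khz))

if __name__ == "__main__":
    cap_all_cores(700_000)  # 700 MHz, as in the comment above
```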
Issue is Intel tries to race to halt which demonstrably saves more power on average. The power consumption of your chip probably is some intrinsic quality related to how the package and die are set up.
I'd like to point out that Intel allows you to tune the cores for whatever profile you want. Sure, it's not "same performance, less noise", but you can get something like an 80/20 split (80% of the performance, 20% of the noise) by just scaling back the core frequencies a little. I tune my Intel machine down to 8x3.2 GHz and can run without fans (on a small form factor PC), but sometimes I want the full 4+ GHz, and then it's loud.
This can be done application-specific, so when you launch a big game or something, it'll spin cores up to max frequency, for example.
Could you please be more specific? Are you talking about Windows or Linux? Do you mind giving some references for how to scale core frequencies application-wise? Thanks.
There's a very simple menu system to scale up and down the core frequencies, and to select application execution profiles. It comes with internal temp and voltage monitors, and built-in stress tests to benchmark.
My main tips are to remove thermal boost, remove core scaling (the heterogeneous frequencies it assigns when only a few cores are active vs all), and to cap all cores to 3-4Ghz, depending on the quality of your passive cooling.
No bios adjustments needed, no restarts required, and built-in watchdog to revert if things go poorly. Really nice.
Recent, huge bug though: on some platforms it requires disabling virtualization, which limits use of WSL. That had better be fixed ASAP; it's a big deal.
The M1 Air is a wonder as far as I'm concerned. It's very thin and light, fanless, yet I can happily do java dev on it all day and I'm not lacking in capability or waiting around for build and test runs. It rocks.
I have docker-desktop for the things I need it for - building the odd linux native image, or running testcontainers for postgres, same as we do when building on linux - and it works great.
I have an M1 Air where I max out the 16G RAM all the time. It works because swap is painless for me, but I would have been really, really happy with 24G (or better, 32G). It works, but unfortunately I have no breathing room in RAM.
Often, I use 11-18G, rarely I use 20-50G.
Definitely go for M2 Air if you like the form factor (I prefer the M1 Air’s form factor though).
I've got a maxed out ram M2 that I use for development.
I run Linux in Parallels with 16 GB allocated and it runs great at around 50% memory pressure. macOS compresses the memory that Linux doesn't use and it runs amazingly well.
We have slightly different use cases, as I prefer developing in Linux as opposed to macOS, but I'd imagine you would be fine, depending on how memory-intensive your containers actually are?
The thermal throttling the reviews complain about isn't a problem for me, though I prioritize power usage over raw power since I often run it off of solar.
So far I haven't noticed any issues with the 16GB in my Air, but then I haven't really been looking for them either. I run a couple of containers in Docker Desktop, my IDE (IntelliJ) and its builds/test runs etc., a browser with a dozen or so tabs, but little else of any particular 'weight'.
I have heard anecdotal reports of those on 8GB machines facing the spinning beachball from time to time. It's never happened to me.
The only person doing that kind of content is Alex Ziskind. He compares various dev tools, build times, and ML tools across Apple Silicon and Intel, Macs and PCs.
I think a lot of the MacBook Air's popularity comes from it being the cheapest MacBook. Some people want an Apple and don't have $2-3k to spend on a laptop.
I've got a sleek option because lugging around a brick is a miserable experience. There's a qualitative difference between the XPS 13 that I can whip out on the train, and a desktop replacement laptop that is heavy and big enough to be a bother and barely fits in my backpack.
Power isn't really an issue - I'm never far from an outlet, and I could carry an external battery in a backpack pocket if it really bothered me.
Then make that a segment and finally stop enforcing your unrealistic expectations on the rest of us.
Oh wait, it already is, but you don't. I routinely engage in comment sections where a highly juiced-up desktop CPU is derided for its "heat output" and other such nonsense.
Would increasing the RAM really make that big a difference in terms of kernel compilation time? It feels to me that 16GB should be plenty for any sort of caching that will speed things up.
Yeah, if you think about it at a lower level: many optimizations use memoization to trade memory for compute. Therefore, you almost always need more memory to speed things up.
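As a toy illustration of that memory-for-compute trade-off (nothing to do with kernel builds specifically, just the general pattern of spending RAM to avoid recomputation):

```python
"""Toy example: caching burns memory but skips repeated work."""
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: spend RAM, skip recomputation
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(200))  # instant with the cache; hopeless without it
```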
I think you can use `brd` or `ramdisk` (NOT tmpfs) to eat up 48GB of RAM on the newer machine. Just make sure you actually write data to the disks so that the memory is allocated.
That should be an easy way to do an apples to apples comparison and not require shipping any hardware.
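Something like this is roughly what I'd expect that to look like in practice - a rough sketch only, assuming root and that the `brd` module is available; `rd_size` is specified in KiB and `/dev/ram0` is the module's default device name:

```python
#!/usr/bin/env python3
"""Sketch: create a 48 GiB ram-backed block device and fill it so the
pages are actually allocated (assumes Linux, root, and enough free RAM)."""
import subprocess

SIZE_KIB = 48 * 1024 * 1024  # brd's rd_size parameter is expressed in KiB

# Load the brd module with a single 48 GiB ramdisk, exposed as /dev/ram0.
subprocess.run(["modprobe", "brd", "rd_nr=1", f"rd_size={SIZE_KIB}"], check=True)

# Write to every block so the memory is really taken, not just reserved.
chunk = b"\xab" * (1 << 20)  # 1 MiB per write
with open("/dev/ram0", "wb") as dev:
    for _ in range(SIZE_KIB // 1024):  # 48 GiB total
        dev.write(chunk)
```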
Saying that you intended to do something, and in the same breath that you were subsequently unable to do it because you couldn't find what was required, is very discouraging for a prospective customer.
Also, "benchmarking" dissimilar systems comes across as scammy marketing, appealing to the easily deceived, which some might view as reprehensible
:(
Reprehensible...really? There's lots of that in the world, pretty sure this isn't it. They put the memory difference right there in the stats, it's not like they are hiding it.
Two different people on the team running the benchmark at two different locations (the challenges of being remote-first). To be clear, had we prioritized it, we could have built two equivalent systems for this benchmark at one site. However, we are mostly focused on building products, so most of our blog posts end up being someone having an interesting topic to discuss and pulling together data and assets we have on hand or can create quickly for it.
Yep! We released an announcement around starting shipments of this two weeks ago. Our blog post this week was just a round-up of benchmarks both from reviewers and from us, to respond to common questions we got around performance improvements.
6800U is awesome from what I’ve seen — better efficiency by far, and legit integrated graphics better than Xe — but it is vanishingly rare, even months after announcement.
The scale Intel has in manufacturing mobile CPUs is still unmatched.
That article actually shows 5% worse efficiency than a 5600U in a heavy single-threaded workload. It's a bit apples to oranges, and there's a lot more to efficiency on a laptop than heavy single-core workloads.
Maybe I'm spoiled, but it feels like an article like this would have turned out way better if they had contracted Michael Larabel from Phoronix to do an in-depth benchmark run with both versions of the laptop, comparing multiple types of workloads, temperature, and performance per watt (or if they had used the full Phoronix test suite themselves). Although there is nothing wrong with the current one, it feels a bit... shallow on details.
Just yesterday, I ran a few CPU benchmarks to compare my new System76 Lemur Pro (12th Gen i7-1255U) against my older ThinkPad X1G7 (10th Gen i7-10710U). Both are ultra low-power processors, so it's a fair comparison.
Maybe it's my imagination, but it seems like more cores aren't helping performance as much? Does it seem like every core after 6 cores brings about 1/2 or 1/3rd of the performance gain?
I don’t like that we call it 12th Gen Intel. It would be more correct to just call it the 12000 series Intel.
Intel didn’t change CPU generations from 6000-10000 series, they’re all the same generation. Just because the years advance and they refresh their products with increased product numbers doesn’t mean that they’re a new generation of products.
It breaks where generations and numbers mix. While 12xxx and 12xx are Alder Lake, 11xxx is Rocket Lake (based on Ice Lake) and 11xx is Tiger Lake. 10xxx is based on Skylake, 10xx is Ice Lake. That’s where it’s misleading.
The number is only a product number that shows the approximate year of release for these mixed product lines.
Because of this, you can’t say things like “compared to the previous generation” any more. 12xxx is two generations newer than 11xxx. 10xxx is the same generation as 9xxx.
The mainboard is really great. I’m reminded of the i9 NUC “compute element” and I’m disappointed Intel hasn’t really done anything more innovative in these kinds of form factors. Seems like they could easily standardize a swappable module like this for laptops and small computers.
The IBM keyboard TrackPoint has evolutionary paths towards being a tiny trackball: while retracted down, the trackball locks, and it could pop up as a functional joystick. The problem with the TrackPoint is that it feels super slow and super heavy on the fingertip compared with glide-touch trackpads.
The TrackPoint is in the correct place for the correct finger I use to point. I live in the Linux/BSD world and don't expect good enough sensitivity/speed configuration software from Lenovo at all. And when third parties publish solutions, they work briefly and then I lose track of them through upgrade cycles. I want a better world with a better-evolved TrackPoint. The trackpad takes my fingers off the home row, and the context switch of moving my eyes to the keyboard really breaks the "flow" of concentration.
The HP EliteBooks (and the new Dev One Linux variant) also have trackpoints.
Just in case anyone's interested in making their own though, I've seen a few DIY trackpoint projects over the years (mostly on custom mechs, not laptop keyboards though). Here's a writeup on one: https://edryd.org/posts/tp-mk/
The problem with many of HP's laptops that have pointing sticks, including the HP Dev One, is that they lack a middle mouse button for scrolling and middle-clicking. All Lenovo TrackPoints have that middle mouse button, which lets you scroll with the TrackPoint when it's held down.
A keyboard with a pointing stick and no middle mouse button is a waste. Hopefully, any pointing stick keyboard for the Framework Laptop will include this button.
> The HP EliteBooks (and the new Dev One Linux variant) also have trackpoints.
Are those supposed to be any good?
I have an EB Gen8 with one, but I find it horrendous. I'd like to know whether it's because I have no idea how to use it, or if my Linux setup is broken, or if the hardware is actually shitty.
I had a company-provided 2019 Dell at my last job and the trackpoint was barely comparable to that of a 2011 ThinkPad. If that's the only trackpoint experience some people get, I understand why they'd never touch one again.
If I’m working a job where I really need performance, I try my hardest to get a desktop. The last 20 years of hardware development has not replaced the workstation for even light computational workloads.
My biggest frustration with remote work so far is the corporate fixation with laptops. For some types of work, performance isn’t just a convenience, but weeks of lost productivity. If the difference between the 11th and 12th generation of Intel in your laptop really moves the needle for you, you’re using the wrong form factor to do your work.
In the first table, in the first row (11th Gen), the professional column is missing the core count. It currently says "(Up to 4.8 GHz, Cores)" and by https://ark.intel.com/content/www/us/en/ark/products/208664/... it should be "(Up to 4.8 GHz, 4 Cores)".
I'm developing in Rust for $dayjob and I absolutely hated it until I got an M1 Mac. CPU speed (including single-core speed) absolutely made the difference. The compilation speed crossed that threshold between "a little pause" and getting distracted doing something else while waiting for a little incremental compile.
I'm also developing in Rust, with a 12th gen i7-1270P. It's an absolute beast; I'd say it's on the range of 2x speedup vs my previous laptop (11th gen i7).
I have no idea about power consumption yet (the laptop is a couple of weeks old). On light load, the laptop easily crosses the 8h mark, but it certainly won't when grinding Rust compiles. The M1's power efficiency is in a different league.
I'm not sure why the Framework laptop is getting so much credit for being upgradable - the main board upgrade (https://www.youtube.com/watch?v=mog6T9Rd93I) seems to be an identical process as my Dell Precision 5530.
legit question, does Dell offer a drop-in generational upgrade for your machine? I think the main appeals are:
1) It's simple to repair or upgrade. If Dell's design is also simple to work on, then that's great!
2) A culture of providing replacement and upgrade parts affordably to the end user. In addition to the I/O modules and replacement parts, so far framework has the upgradeable mainboard, an upgraded top cover with more rigidity, and an improved hinge kit.
I hope this doesn't mean that frame.work is planning to stick with Intel processors even for its next gen laptops? Many of us are eagerly waiting for the AMD models.
On a slightly different note, this is again a good example of how Intel continues to one-up AMD through its business and management practices, even when AMD has better products. Of course AMD can't imitate Intel in spending money and undercutting them (sometimes illegally), but they should certainly try to copy some of the decent practices of Intel, like supporting hardware manufacturers by offering reference designs. China has many Intel tablets because of this. It is easier to develop hardware with Intel processors because Intel offers more support (free and paid). That is something AMD needs to focus on more and fix. They really need to work on their PR and sales tactics.
I would have been impressed a decade ago, but this is just an extremely marginal improvement - even if you can trust them, which, you know, Intel, so you can't.
Although CS:GO is an FPS that can run on a potato, the GPU results are interesting. But I would love to see a Framework laptop with a Ryzen APU (I have a Lenovo with a 5700U and it is nothing short of amazing, even though it’s almost two years old by now). Provided they can do Thunderbolt, that is (I rely on that for a number of things).
So.... is the Framework laptop with a 12th Gen core noisy or not? Is it quiet most of the time? Do the fans become noisy during a compile or benchmark? Does it remain relatively cool? What are people's experiences with these laptops? My primary use would be C++ development with frequent large multi-core builds.
Is it fair to compare 11th Gen U-series to 12th Gen P-series? Performance is higher, but this comes at the expense of much higher power draw. I think 11th Gen U-series vs 12th Gen U-series is a more interesting comparison.
XP11.11 (2018) is ancient; the current release is from late last year. But more importantly, XP12 is coming very soon and is going to be Vulkan/Metal only, and AFAIK even the Intel Xe GPUs are lacking drivers.
For some people comparing to Apple is useless - Macbooks could be a thousand times faster and last for a year on battery, they still would be 100% useless to me because they don't run the software I need on a daily basis.
A YouTuber, Luke Miani, gave it a try recently. Of course it doesn't work, but it was interesting to see that mechanically everything fits and the chassis is of course practically identical, but it doesn't work because FU that's why: https://www.youtube.com/watch?v=q_jckhYfGBw
I think 18 days ago you might have said something to the tune of you'd never do business with Apple again. You changed your mind fast. https://news.ycombinator.com/item?id=32188366
Does it matter? I have a Threadripper 3970x and an M1 Macbook Air. For anything that can't use more than 4 cores, the M1 crushes it. The Threadripper also draws over 100W at idle.
Whatever AMD and Intel have right now is completely irrelevant, and I say that as one of the biggest haters of OS X there is.
I'm not trading in my workstation for a Mac Studio or whatever, though, because I'm guessing AMD's 5nm chips will probably perform even better than the M1/M2. Plus games. Games are nice.
For a site about "hacker" news, this comment is wild to me.
It seems we've truly reached the apex of disposable consumer culture, where even our most powerful technology is something we just throw in the trash and re-purchase when something fails or newer technology becomes available.
I can only hope that companies like Framework can reverse some of this trend. Because as it stands, our already unsustainable culture has gotten to the point where it seems folks no longer remember that anything else is possible, or why we might want to think differently.
I guess "stuff" isn't really the problem with the current society. If you write software for a living, the company that has a 50% faster hack/test/ship cycle is the one that wins the market. It is a shame that you have to throw away a power supply, keyboard, and screen every few years to get that faster cycle, and Framework helps there.
But at the same time, the chips that Framework uses waste valuable electricity and don't perform particularly well. That's just where Intel is at right now. That is likely to change, and Framework is interesting from the standpoint of "when Intel gets their act together, look at our balance of performance and sustainability". That's not where they're at right now, though. Not Framework's fault; nobody is selling mobile arm64 chips that have remotely competitive performance. It's the late 1970s again; hopefully the clones show up to kill "IBM" again.
Realistically, people are burning the rainforest to sell pictures of smoking monkeys and burn dead dinosaurs to drive to get coffee every morning. Recycling your laptop's screen every 5 years is not really making a dent into our current planet-killing habits. It feels good to help in a small way, but what feels good is not necessarily meaningful.
> But at the same time, the chips that Framework uses waste valuable electricity
I pretty well guarantee you the small amount of energy saved by operating an M1 is more than made up for by the cost of manufacturing and shipping a new laptop every few years as part of the Apple replacement cycle.
> Recycling your laptop's screen every 5 years
You might want to do a little research about what's involved in upgrading a framework mainboard. Far far more than just the screen is "recycled" when upgrading a machine like that.
And that's ignoring the ability to reuse or resell components outside of the Framework chassis. I fully expect to turn my old mainboard into a server when the time comes to eventually upgrade it.
> It feels good to help in a small way, but what feels good is not necessarily meaningful.
This is a classic thought-killing argument intended to invalidate any kind of individual action or intervention. "Well, what little thing I do has no meaningful impact in the big picture, so why bother."
The answer to that is simple: Government needs to pass laws mandating right to repair. It's long past time that companies like Apple be forced to do the right thing, because it's clear they're not going to do it themselves, particularly when consumers have apparently convinced themselves that their individual choices don't matter.
From the Cinebench scores in the linked post vs some M2 benchmarks, the M2 (score of 1695) is comparable to the bottom-tier 12th gen Framework in the single-core benchmark (Framework scores: [1691, 1723, 1839]). On the multi-core benchmark, the M2 is significantly worse - 8714 vs [9631, 10027, 11076].
The main advantage of the M2s is probably battery life.
Cinebench is a terrible benchmark and heavily favors CPUs that have many slow CPU cores.
Also, Cinebench is not optimized for ARM instructions.
If you are buying a computer to use Cinema4D or for gaming, Intel and AMD will be better. If you're buying a computer to do just about everything else, then Apple is better.
M2 is faster in ST, GPU, AI, and video editing. It's slower on MT. To me, the M2 is overall faster. Especially when unplugged which will cause Alder Lake to significantly drop in performance.
> Especially when unplugged which will cause Alder Lake to significantly drop in performance.
At least in a laptop, it's not just when you unplug the laptop that performance drops. You've also got to have enough volume in the laptop to shoehorn in a major cooling system if you're going to keep the P series processors from throttling.
For instance, Lenovo's thin and light ThinkPad X1 Yoga has two fans but still throttles:
>Unfortunately, the laptop got uncomfortably hot in its Best performance mode during testing, even with light workloads.
>the XPS 13 Plus’ fan was really struggling here because, boy oh boy, did this thing get hot.
>After a few hours of regular use (which, in my case, is a dozen or so Chrome tabs with Slack running over top), this laptop was boiling. I was getting uncomfortable keeping my hands on the palm rests and typing on the keyboard. Putting it on my lap was off the table.
In a laptop, I care far more about battery life and the fan noise. However, Intel 12th gen is only able to pull these impressive performance feats due to very high turbo boost clocks.
It's not that Intel is bad, it's still pretty great but in the age of M1 I wish Intel released a processor with similar power consumption.
I use my Intel i5-1240p with turbo boost disabled when on battery. Here's the geekbench with TB disabled. https://browser.geekbench.com/v5/cpu/16497299
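On Linux with the intel_pstate driver, toggling turbo based on the power source can be sketched like this; note that the power-supply name ("AC" here) varies between machines, so treat that path as an assumption:

```python
#!/usr/bin/env python3
"""Sketch: disable turbo boost on battery via intel_pstate (Linux, root)."""
from pathlib import Path

NO_TURBO = Path("/sys/devices/system/cpu/intel_pstate/no_turbo")

def set_turbo(enabled: bool) -> None:
    NO_TURBO.write_text("0" if enabled else "1")  # "1" means turbo disabled

if __name__ == "__main__":
    # Power-supply name differs by machine (AC, ACAD, ADP1, ...); "AC" is an assumption.
    on_battery = Path("/sys/class/power_supply/AC/online").read_text().strip() == "0"
    set_turbo(not on_battery)  # turbo off on battery, back on when plugged in
```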
The AMD offerings are much closer to M1 although they're not yet on TSMC 5nm so they don't quite match it. It's a historic event. Intel was #1 in laptops for an eternity. Even when the Pentium 4 was being outclassed by AMD Intel still kept the crown in mobile. This time around they first got overtaken by AMD and then by Apple and are now #3. Puts into perspective how huge of a miss their process node fail was.
Let’s remember AMD was absolute garbage for like 10 years until around 2016, when they finally bounced back. And they were getting closer and closer to bankruptcy, having to flog the silver - their fabs and even their campus. I’d say the Zen design and tsmc saved them (or Lisa Su and Keller).
Intel is in a better position financially than AMD was, and could catch up again. I tend to think it depends a lot on the organization and the people.
Jim Keller?
IC: A few people consider you 'The Father of Zen', do you think you’d scribe to that position? Or should that go to somebody else?
JK: Perhaps one of the uncles. There were a lot of really great people on Zen. There was a methodology team that was worldwide, the SoC team was partly in Austin and partly in India, the floating-point cache was done in Colorado, the core execution front end was in Austin, the Arm front end was in Sunnyvale, and we had good technical leaders. I was in daily communication for a while with Suzanne Plummer and Steve Hale, who kind of built the front end of the Zen core, and the Colorado team. It was really good people. Mike Clark's a great architect, so we had a lot of fun, and success. Success has a lot of authors - failure has one. So that was a success. Then some teams stepped up - we moved Excavator to the Boston team, where they took over finishing the design and the physical stuff, Harry Fair and his guys did a great job on that. So there were some fairly stressful organizational changes that we did, going through that. The team all came together, so I think there was a lot of camaraderie in it. So I won't claim to be the ‘father’ - I was brought in, you know, as the instigator and the chief nudge, but part architect part transformational leader. That was fun.
I don’t know the truth but that was spoken like a true humble leader. If I was forced to draw conclusions just based on that one exchange, I would agree that Jim Keller was an instrumental part of the successful design and implementation of the zen microarchitecture.
This is from an AnandTech interview with Jim Keller: https://www.anandtech.com/show/16762/an-anandtech-interview-...
The whole interview was deeply inspiring. There is a very great deal to be learned from studying it. Listen carefully, and more than once! The audio is probably better than the text, at least first time through.
I love the ARM frontend casually thrown in there, I wonder if that project is back on the table these days.
The AMD PSP is already an ARM chip in your amd64 CPU.
I feel the project was put on hold purely for business reasons, as in there's still a lot of money to be made on x86.
That and Microsoft has had an exclusivity agreement with Qualcomm for Windows on ARM. Once that agreement expires, I expect AMD to suddenly have some very interesting offerings.
Wait, floating-point cache? Does it cache FP operation results? Where can I read more about this?
Probably inner-loop op sequences.
After running benchmarks on Intel and AMD CPUs from that era with mitigations, it's clear Intel never really had a performance lead due to design. That alone rewrites the entire "state of AMD" narrative.
If Intel only had a competitive advantage during that era because they traded performance for security, was there ever really an advantage?
I personally don't think Gelsinger is going to be able to fix their culture issues. He's been more intent on pumping stock and asking for handouts than innovating, which is part of what got them where they are in the first place, for good or ill.
> they traded performance for security, was there ever really an advantage
Security is important, I would always prefer a secure product. But given design habits that elide security for performance, as well as a compromised supply chain, the only choice we have is to side with the devil(s) we know.
It's just awareness that Intel hasn't shown they have any qualms about willfully overselling capability or externalizing technical debts.
Even with longer fulfillment timelines, availability isn't necessarily worth the surprise cost to competitiveness.
Lifecycling early and having to buy additional hardware to meet capacity was passed on to our customers.
>Let’s remember AMD was absolute garbage for like 10 years until around 2016
After being ahead of Intel in the Pentium 4 era. The problem back then was marketing and the giant amount of (probably now-illegal) bundling that was rampant at the time.
That last part is important. While Intel had good fab engineers, they also relied on cutthroat business deals requiring exclusivity to get discounts and promotional support and there were constant shenanigans with things like compiler support.
AMD had to make a much better product to get manufacturers to consider it, not just being price competitive. It took the P4 train wreck to get the market to shift but Intel had enough lock-in to make it to the Core generation without losing too much market share because so many of the vendors had those contracts.
And, to be clear, even though P4 was a disaster, Intel was STILL the market leader everywhere. They responded with the Core line only after years of AMD eating their lunch with the Athlon, Athlon 64, and Athlon XP line.
Thunderbird was released in 1999 and from there to the Core 2 Duo release in 2006, AMD was the performance leader (certainly at a huge discount compared to intel offerings).
> AMD was absolute garbage for like 10 years until around 2016
Their top performance was lower than Intel in many cases, but it was certainly not "absolute garbage". For low- to mid-range they offered similar performance with usually a (much) lower price point. For normal "average person" usage they were often the better choice in terms of "bang for your buck".
The main reason we were selling Intels is because we got a rebate from Intel for every CPU we sold. Financially it didn't really matter if we sold a €400 computer, €500, or €600 computer: the profit margin for us was roughly identical (a little bit more, but not much), but with Intel you got a rebate back-channelled, so then it did matter.
Well okay. But they had the same problem Intel is in now: they pushed an inefficient, power-hungry chip to the limit. So it was cheaper, and performance was only somewhat lower, but it was much hotter and used a lot of power - so pretty useless for laptops.
On laptops AMD was indeed not very good; but their desktop CPUs – far more important in the 00s and early 10s than they are today – were pretty decent. That is not to say that Intel also didn't have good CPUs at the time, but (much) more expensive and for people just doing their browsing and occasional gaming on their desktop AMD was a solid (and cheaper!) choice IMO.
Intel is in an even better position now than last month, being handed $billions ($tens of billions?) of US tax dollars to fund building new fabs, which they would have had to build anyway. That will free up money to use in undercutting chip pricing, helping to bury AMD again.
Intel should realize at this point that their existential threat comes from TSMC, not AMD. AMD is one competitor, but TSMC is giving a huge boost to all of their competitors, particularly the ARM vendors who won’t just take some x86 market share but potentially destroy the entire x86 market at some point in the future.
Can't happen too soon for me.
Intel can build ARMs and RISC-Vs as well as anybody. Harder will be adapting their salespeople to non-monopoly conditions.
IIRC intel also builds some chips with TSMC.
Including their GPUs.
I have a laptop with the AMD 6900HS.
The CPU is great. Its power envelope and performance feel amazing.
My biggest complaint is that there is a very limited selection of laptops with AMD's chips, especially the top tier. The one I bought required me to replace the Wifi/Bluetooth card (it came with a barely "supported" MediaTek one) for Linux.
I have loved Linux for a couple of decades. It was a difficult decision to migrate to the M1; Linux is so much more responsive and pliable. However, Asus encouraged me: after owning three of their laptops in the past 18 months that began to fall apart anywhere from 3-12 months in, I was done. Unfortunately, most 5900HS/6900HS laptops are not flagship quality; typically they are lower- to mid-range gaming units. This is my first Mac. The build quality is exceptional. I will just say I was expecting a better experience from an OS built by a trillion-dollar company.
Similar position. The existence of Asahi Linux convinced me to buy my first Mac, since it increased the odds that my M1 Air would be usable (to me) long-term.
I haven't made the switch yet as my old Lenovo is still hanging on. Would you mind expanding on your gripes with MacOS, in particular the comment about responsiveness?
Yeah, I'm also interested in responsiveness details.
I have my gripes with MacOS after years of linux/windows usage, but responsiveness on M1 is not one of them.
For me it is the little things, like switching windows. The Mac feels half a second slower than Linux, and that delay makes it feel less responsive. Opening up the terminal in iTerm2 is another one. For me these things just add up. I want my OS to get out of my way when I want to get work done.
I have turned off the genie effect and perhaps other animations, but my Ubuntu on my Lenovo still feels faster than the Mac.
I’m using the Zephyrus G14. I wouldn’t call the build quality bad.
It’s a more traditional style than a Unibody, sure. But it’s fine.
Razer 14"? Or Dell Alienware?
Lenovo's is the only one I find with 16" 3840x2400 OLED, but just a 6850, $2800. Or with 6950 but just 1920x1200, $2635. None with both. Advantage, Radeon R6500M GPU. Doesn't say which wifi it has.
https://www.lenovo.com/us/en/search?fq=&text=21D4000HUS
Dell has a 17" 6900HX, 4k, 32G RAM, RX6850 GPU, $2800. Or 6800H and RX6700, $2550. Mediatek MT7921 wifi...
https://www.dell.com/en-us/work/shop/dell-laptops/alienware-...
But I swore never to buy another Lenovo. Or Dell. Hoping I won't have to swear off ASUS too.
Zephyrus G14. I certainly wouldn’t swear off ASUS over it. After you upgrade the Wifi card (and, optionally, the SSD), it’s a great laptop; especially for Linux.
Which one did you get? I was looking at Linux laptops with this CPU, but the line-up is very limited.
Me too. I just gave up and bought an ASUS H5600QM with a 5900HX (and a 3840x2400 OLED, 32G socketed RAM, 2 NVMe sockets, and 3 physical touchpad buttons). If you act fast, you can still buy the version with Windows 10 and an RTX 3070 at $1999, instead of Windows 11 and an RTX 3060 for $254 more.
Build quality is excellent, but the first one died after 2 days: charge light wouldn't even go on. Waiting for #2. Wish me luck!
ASUS Zephyrus G14.
It's really interesting to look at Intel on mobile platforms during the P4 era. Their P4 mobile chips were often terrible for temps and power use, but their Centrino/Pentium M platform was excellent despite being based on the PIII. The P4's architecture became a dead end, while the Pentium M's essentially continued to be improved, scaled up, and became the Core series.
If they had tried to force the P4 mobile chips instead of rethinking and retooling the PIII core with some of the P4's FSB innovations and other things, they probably wouldn't have had as competitive mobile processors and maybe wouldn't have dethroned AMD a few years later.
My first laptop had one of those P4-based CPUs. They were super terrible chips. I don't think they were really even mobile chips. I think my battery was good for about 25-30 minutes tops. And Dell put in a soft cap of 768MB of memory. I was pretty pissed that none of this was ever noted when I bought the laptop.
The AMD offerings are still very far from the M1. Try comparing the 6800U to the now 2-year old M1 in Geekbench. The M2 widens the gap even more.
The Ryzen chips are already clocking over 3 GHz, so there isn't much more scaling from power left. That's why 5nm Zen4 probably won't move the needle too much.
Compared with the Phoronix Suite, on Linux, just today: https://www.phoronix.com/review/apple-m2-linux/15.
Surprising results. Definitely not in the "very far" range.
From your link: "Even under demanding multi-threaded workloads, the M2 MacBook Air was not nearly as warm as the other laptops tested"
Keep in mind the 6850U you're comparing against has a fan too. Notebookcheck says it has a 40W turbo on the gen 2 model, and the gen 3 Intel version has a 28W sustained boost. An M2 uses around 20W.
Also, a lot of those benchmarks are showing all x86 CPUs with a 3x+ lead, which indicates software optimization problems. Geekbench 5 is optimized for both arm64 and x86 and shows the M2 ahead of the 6800U. There are also just broken results like the LC0 test where the M1 is substantially faster than the M2. Overall, your results don't seem very valid.
You moved the discussion from performance to thermal/power very fast, guess you didn't like the benchmarks
If you want to talk benchmarks, you need to exclude all the ones where there is a clear software optimization problem.
> From your link: "Even under demanding multi-threaded workloads, the M2 MacBook Air was not nearly as warm as the other laptops tested"
> Overall, your results don't seem very valid.
They're not "my" results, and thermals were not part of the discussion.
Regarding performance, if one wants to be rigorous (and that's why I didn't state "68xx is faster than Mx"), one can't make absolute claims either way, as Torvalds himself has criticized Geekbench.
I still think that describing the performance difference as "not very far" is a valid assessment.
During the pentium4 era me and my friends all got mac laptops running powerpc. Our laptops could actually go on our laps without burning us and the powerbook versions were fast enough to emulate windows in a VM without feeling slow at all. My battery lasted quite a while for the time.
That is definitely not my experience. I used VirtualPC on a bunch of different PowerPC Macs. It was slow as balls on every single one I tried it on.
I agree, but then you have corporate compliance installing an insane amount of spyware on your device, so that a 2021 Dell XPS 13 with a 10th gen Intel is out of battery within the hour when only using Word.
Fans spinning all the time and no way of disabling this madness.
Nearly the same for my Mac colleagues. They maybe get 2 hours out of their M1 MBPs unplugged.
Recently discovered that Infosec has now mandated that all laptops have sleep disabled (the reason given is to be able to do endpoint patching). ¯\_(ツ)_/¯
So I'm faced with having to log out and power off every day or suck up the cost of keeping the laptop on. /r/maliciouscompliance, here I come I suppose.
Tempted to get one of these: https://us.amazon.com/Vegamax-Button-Ethernet-Internet-Netwo...
Yeah, at a previous company, the IT team were complaining about users disabling powernap or shutting down laptops as it meant they wouldn't patch overnight.
But these laptops were in bags, the last thing I wanted was a mac to start a house fire by heating up in a bag, so fuck that.
Your infosec department doesn't know how to stage patches then and should be ashamed.
They should not be named "sec". It isn't security. It is compliance.
Probably because with work at home there’s no WOL.
We disabled sleep to improve battery life and address a personal safety issue. There are many failure scenarios between various builds of Windows 10 and shitty drivers. Devices often wake up in bags and spin up fans until dead.
Normal scenario is that the battery is dead in the morning. In one case, it melted the compartment material in a bag and was deemed a safety hazard.
So now instead all devices can keep their fans on and melt the bag's compartment material, since nobody would assume the device is dumb enough not to sleep when closed. Much better.
If it's a safety hazard, tell the OEM to repair the entire fleet.
Problem solved.
Takes awhile to bring back 20k laptops!
>So I'm faced with having to log out and power off every day
When did this become a chore? I do this as standard and IMO all people should. SSD boot times these days are mostly under a minute, and you get a fresh desktop every time.
EDIT: In fact, I am one of the sysadmins whom so many on here seem to despise. I have hybrid sleep and fast startup disabled by group policy on all my Windows domain networks. It reduces so many support tickets by just actually having machines properly rebooted every night. Without this, people contact IT for glitches and bugs in their system, and when you look at the system uptime it is years!
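For context, the fast-startup half of such a policy usually comes down to a single registry value. The GPO does this declaratively, but here is an illustrative sketch of the equivalent toggle (the path and HiberbootEnabled value name are the standard fast-startup switch; run as admin if you ever script it, and treat this as a sketch rather than a recommendation):

```python
"""Sketch: disable Windows "fast startup" (hybrid boot) via the registry."""
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Power"

# HiberbootEnabled = 0 turns off fast startup so shutdowns are real shutdowns.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "HiberbootEnabled", 0, winreg.REG_DWORD, 0)
```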
I don't care about boot times but I do care about keeping state.
In addition to the obvious browser I typically have an IDE, a few terminal windows, email, a few communication clients (due to internal and customer choices, not mine), and usually one or two database clients open at most times. Some of those restore their state nicely while others don't. Some of the terminal windows have e.g. a particular set of terminals at certain directories because I need to keep track of and work on multiple source repos at the same time.
Starting all of those from scratch every day would indeed be a chore. Perhaps more importantly, having my desktop be the way I left it also automatically reminds me what I was doing and helps continue from where I was.
A fresh desktop every morning would be a misfeature for me and would annoy and frustrate me immensely if forced on me.
I do of course reboot now and then for system updates etc., and I don't mind that.
There might be a decent rationale for forcing a reboot or disabling sleep on non-tech office devices if the staff are technically unskilled, but this is HN, so it's not much of a surprise if people aren't keen.
Seconded. My working state is crucial and keeps me on track. For someone with ADD, needing to overcome the inertia of setting up all my apps just the way I left them every single day would be catastrophic for my productivity. I refuse to reboot more than once a month, and only then if an OS update requires it.
The year is $CURRENT_YEAR, needing to reboot a system should be regarded as a last resort and a relic of the past. No matter how much effort you dump into making apps remember their state when they're closed, it will always be strictly inferior to just dumping RAM to disk.
Because I have Visual Studio open, a solution loaded and organized with all the files I need for this sprint, SSMS connected to all the servers I need, all my tabs open to see my Jira board, support tickets, etc., a Notepad doc I was using to take notes on a phone call, and my SSH key loaded into Pageant - and if my computer reboots, I'm going to forget half of that when it starts back up and lose 30+ minutes trying to get set up again the next day.
Edit: I would legitimately quit a company that made me reboot every single night.
In our network there are no developers. That would be a different use case requiring different policies. All tools required by staff are online and autosave everything. This is in fact another reason why proper restarts are enforced nightly, as most browsers with multiple tabs of heavy web apps start misbehaving very quickly. Forcing a nightly shutdown of browsers is an added bonus of proper shutdowns.
Be aware I am in a corporate environment, with all non technical users, and all services online.
Ok - that makes sense. Obviously being on the dev side of things, I've never worked for a company that didn't have devs (and admins, devops, etc). Sounds like it definitely works in your case.
> I have hybrid sleep and fast startup disabled by group policy
I guess I'll have to add this to my questions to ask at job interviews. Admins like you these days sure try to make our lives as horrible as possible. Corporate anti-virus, DNS interception and policies like these turn my machine into something I constantly want to throw out the window.
All because you are too lazy to ask a person to reboot if the machine has high uptime.
You've taken a lot of liberties and assumptions with my post here. I don't do any of those things you mentioned, only enforce a nightly shutdown.
Nothing to do with laziness. As other people have often mentioned on HN, in the real world of understaffed, underbudgeted corporate IT you need to do what you can to enforce things which make everybody's life easier.
It does not make everybody's life easier; it makes your life easier by easing the support burden, at the cost of making life (marginally?) harder, or more annoying, for those who have to live with those policies. It is quite a bold claim that those of us who do not shut down our laptops every night have no good reason for it, and that you know better than we do that shutting down would cost us nothing.
It might very well be that it is preferable to the organization as a whole to sacrifice a bit of productivity everywhere for less burden on IT. But IMHO it should not be a decision which the IT department can make in isolation.
This is the part that people get wrong about all the ITIL metrics nonsense; they’re all designed by people who don’t have a background in science or experimentation and they never account for confounding factors. For instance, companies I’ve worked for in the past actually conducted rigorous studies of improving quality of life (as opposed to “fewer tickets==good”). They discovered that the number one cause of lower ticket volumes is Shadow IT! Because of course it is.
If you are disabling things by policy, it should be after a discourse with your users and a serious attempt at training. Being a GPO dictator is an anti-pattern.
Whatever makes you sleep better at night.
Policies such as yours are tremendously user hostile, and they are a reflection of the company's culture. I would probably not quit such a company, but I would certainly go rogue by either bringing my own equipment or reinstalling the OS. If reprimanded, then I would quit.
"everybodies life easier"
I don't think this means what you think it means. I'm not a dev, but a mechanical engineer, and having to shutdown nightly, and reopen things the next morning, would cost the company at least an hour of my work time every week.
Upthread you can see why. High-drama developers will tell you that logging out will cost the company $25k a year because the precious snowflake has to open Notepad and disrupt their flow as they eat breakfast.
The frontline IT guys aren’t able to deal with shit like that, so a draconian policy comes top down.
I treat each laptop reboot (regardless of reason) as an unexpected crash.
If the laptop crashes more than once a week, I simply won't use it. If I worked at your company, I'd just BYOD and keep your garbage laptop in a drawer, only booting it for compliance crap (and certainly not leaving it on long enough to download updates).
I've actually done this at a previous job or two. It was fine. Both (large behemoth) companies ended up cratering, at least partially due to chasing away competent employees with boneheaded corporate policies.
I would, if there wasn't the teeny bit of SSO that means I can't access any relevant work software on a non-endpoint-managed machine.
Office? Gitlab? Jira? Confluence? Any code at all? Adobe Experience Cloud? Any Google Service? Adobe Creative Cloud? Our Time and Expenses Tooling?
All locked behind SSO with EPM enforced. Additionally nearly all internal resources are only accessible via VPN. And guess what - only usable on EPM devices.
When I started I received a device, SSD encrypted, with me as root. After being acquired by big corp we are now in compliance world. Parts of that would have come regardless of big corp due to client requirements.
But a lot of this is quite taxing.
Because it's not just system boot time that matters. After booting you then have to launch half a dozen to a dozen applications, all with varying startup times, all randomly stealing focus as they launch, and some requiring logins along the way.
> When did this become a chore? I do this as standard and imo all people should.
Why should I? Not long ago, my laptop had uptime of over half a year.
[Edit: Oh, "windows domain networks"? I guess perhaps that explains your propensity for rebooting?]
I still can't believe they're able to disable "Hibernate", now "Sleep" is at risk? And of course with all the bloat booting takes forever.
Irony being, one of the biggest benefits for me of the M1's power consumption is that it can run quiet and smooth even with all the corporate spyware on there. It can even run MS Teams alongside that, and Docker too (which also permanently consumes 100% of one core), while still staying completely silent.
It's crazy to think you have these opposing teams of engineers, the Apple ones working away to optimise like crazy to reduce power consumption and the compliance software teams thinking up new ways to profligately use more.
Sure, but running Docker and Teams will put an M1's CPU at 70c when a similar x86 Linux box would run it at ~40c. Maybe Apple's fan curves are to blame here, but I much prefer a Linux laptop for local dev.
Why is Docker's VM not idling?
Because macOS. It's not a container, it's a full on VM.
I run Docker on M1. Definitely idling. It must be one of the containers. Try running Docker stats to find out which.
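For anyone who'd rather script that check than eyeball the output, here is a minimal sketch using the Docker SDK for Python (an assumption on my part, `pip install docker`, not something the parent comment used); it approximates the CPU% column that `docker stats` reports:

    # Rough per-container CPU usage, similar to the CPU% column of `docker stats`.
    # Assumes the Docker SDK for Python (pip install docker) and a running daemon.
    import docker

    client = docker.from_env()
    for c in client.containers.list():
        s = c.stats(stream=False)          # one-shot stats snapshot
        pre = s.get("precpu_stats", {})
        cpu_delta = (s["cpu_stats"]["cpu_usage"]["total_usage"]
                     - pre.get("cpu_usage", {}).get("total_usage", 0))
        sys_delta = (s["cpu_stats"].get("system_cpu_usage", 0)
                     - pre.get("system_cpu_usage", 0))
        ncpus = s["cpu_stats"].get("online_cpus", 1)
        pct = (cpu_delta / sys_delta) * ncpus * 100 if sys_delta else 0.0
        print(f"{c.name:30s} {pct:6.1f}% CPU")

If every container reads near zero and the VM still burns CPU, the problem is the VM itself rather than any one container.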
unfortunately it does it even if I kill all the containers. It's a well known issue and they have been circling around with attempted fixes, regressions etc for a long time [1]
[1] https://github.com/docker/for-mac/issues/6166
I am writing this from a M1 MBP with screen at extra full brightness (using vivid) and I can say I get way more than 2 hours from it.
How much corporate garbage on it, though?
2 hours on a M1 is beyond corporate garbage. It's like they try to use as much power as possible, perhaps more than one process is stuck in an infinite loop.
Sounds like a shit EDR that's constantly scanning every file
The first tool scans all files every hour, plus a virus scan on every file change. The second tool also scans, but I don't yet know the frequency.
Out of curiosity, and aside from Teams which I'm already sadly familiar with, what software is it that you're talking about? Company I work for was just bought by some megacorp, but I'm still using my own 2019 13" MBP for now.
McAfee and VMware Carbon Black can fully consume a computer.
Ask your IT why they're not using JAMF Protect and maybe some light reporting like splunk.
Carbon Black probably needs some adjustments, or they need to spec laptops better. That sounds pretty intense and unnecessary.
IT is 300-600 people who report to no one in particular.
With those tools running, builds take over an hour if they complete at all. With the tools disabled, builds complete in under a minute.
Not OP, but here is one... checkpoint.com https://en.wikipedia.org/wiki/Check_Point
Horrid piece of software. I don't know when this sort of office-abuse became the norm but it makes me very sour about our industry as a whole.
My company has some McAfee and Palo Alto stuff and the cyvera cloud backup software, which scans every file every 4 hours but then triggers the other software because of the access attempts. Some of these are suites, so it's 3-4 different things (firewall, DLP, antivirus, ...)
> Palo Alto stuff
Fuck this company, one of my wife’s jobs is BYOD but requires GlobalProtect for their VPN. After the software has been running for a few hours, even disconnected, it just starts chewing CPU cycles and grinds her M1 MacBook to a halt.
The only way to terminate it is via `defaults write` command in the terminal. It’s basically malware.
Sometimes you can work around it and just use the native VPN or TunnelBlick. Did this at a previous corporate gig where the software they used wasn't even available for macs. Gotta be lucky though. By this I mean that usually VPN software just has a default config for some VPN protocol, and if you can find out what that is, you might be able to input the same config and credentials into the TunnelBlick or network settings
Symantec, Microsoft Defender (this shit is insane), Netskope and ZScaler
Msft defender on mac!?
Our macs get admin.
I sudo turn off the garbage and I haven’t been caught yet.
I have admin on my work Mac, but I'm not sure how to turn off the garbage. It's pretty annoying and definitely cuts down on my battery life significantly. Between that and Teams being a battery hog I might only get three hours out of what should be a whole-day machine.
Note that Cinebench R23 (the main CPU benchmark we use) runs for 10 minutes, so it’s a measure of sustained performance. Boost (PL2) is typically 30 seconds or less with a long period after that of staying at the PL1 limit.
Also note that Cinebench R23 is a terrible general-purpose CPU benchmark. It uses Intel's Embree engine, which is hand-optimized for x86. It heavily favors CPUs with many slow cores even though most people will benefit from CPUs with fewer, faster cores.
Cinebench is a great benchmark if you use Cinema4D, which I assume 99.99% of the people buying these laptops won't use. Cinema4D is a niche of a niche.
Geekbench is far more representative of what kind of performance you can expect from a CPU.
https://reddit.com/r/hardware/comments/pitid6/eli5_why_does_...
> Geekbench is far more representative of what kind of performance you can expect from a CPU.
Geekbench is an even worse benchmark than cinebench. It's extremely heavily influenced by the underlying OS rather than the hardware being compared.
Cinebench 1T and mT at least actually benchmark the hardware.
They benchmark the hardware doing 3D rendering, which is a pretty niche use case for most people and doesn't correlate well with more common CPU-intensive tasks like gaming or video editing.
Video editing is just as niche as 3d rendering. But really neither are actually that niche, see for example https://www.blender.org/news/blender-by-the-numbers-2020/
And to put those blender numbers in comparison https://prodesigntools.com/number-of-creative-cloud-subscrib...
> It's extremely heavily influenced by the underlying OS rather than the hardware being compared.
How so?
To clarify, Cinebench correlates poorly with gaming, office software, web browsing, and video editing. Those are what the vast majority of people buying laptops will use it for.
For people that code, it also correlates poorly with parallel code compilation.
Anandtech benchmark puts Intel 12900K on top for both RISCV toolchain compilation and Cinebench.
12900K is the fastest x86 DIY CPU available. It'll be tops in nearly all benchmarks and applications.
You can say the same about gaming. It's tops in Cinebench and most games. Therefore Cinebench and gaming must have a high correlation? No.
Thanks for your input, this isn't said enough. So many CPU benchmarks aren't effective at evaluating the general use case and yet are held up as this golden standard
It's because Cinebench was disproportionately favoring the Zen architecture so AMD pushed it in marketing.
I agree that 12th gen can sustain good performance given decent cooling. However, the issue is still heat.
It could be fun to have a `lap benchmark`. That is, can you keep a laptop on your lap while it runs Cinebench R23 for 10 minutes? With TB disabled, I can.
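If anyone wants to actually run that, here is a minimal sketch for toggling turbo before and after the run on Linux, assuming the intel_pstate driver (acpi-cpufreq uses a different `boost` knob instead); it needs root:

    # Toggle Intel Turbo Boost via sysfs before/after a "lap benchmark" run.
    # Assumes the intel_pstate driver; run as root.
    import sys

    NO_TURBO = "/sys/devices/system/cpu/intel_pstate/no_turbo"

    def set_turbo(enabled: bool) -> None:
        with open(NO_TURBO, "w") as f:
            f.write("0" if enabled else "1")   # writing "1" disables turbo

    if __name__ == "__main__":
        set_turbo(sys.argv[1:] == ["on"])      # `python turbo.py on` / `python turbo.py off`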
The wild thing is, your benchmarks with turbo boost disabled (GB5 single: 617 / multi: 5163) almost identically match the benchmarks of Lenovo's new ARM laptop[1], the ThinkPad X13s (GB5 single: 1118 / multi: 5776). True, Windows on ARM will likely keep on being a pain for some time, but Linux support seems to be coming[2], and for those of us who target Linux ARM development anyway, this is one of the first performant alternatives to Apple's ARM MacBooks. Plus it has the killer features: no fan and long battery life.
Another interesting ARM system is Nvidia's Jetson AGX Orin DevKit, which clocks in at (GB5: single: 763 / multi: 7193) [3]. That system is linux native right now, but of course isn't a laptop form factor.
[1]: https://browser.geekbench.com/v5/cpu/15575694 [2]: https://www.phoronix.com/news/Linux-5.20-SoCs-8cx-Gen3-Arm [3]: https://www.youtube.com/watch?v=KrZt90c_-C8 (sorry for the video link)
If I'm not misreading what you wrote about the x13s, isn't it 2x the perf of the turbo-boost disabled i5-1240P in single core? That's a lot!
EDIT: Also confirmed that WSL2 runs natively on Win11 ARM. So you can have an optimized full Linux kernel running on the Thinkpad as well.
I mean, yeah, laptop CPUs will sometimes go like 3x slower without Turbo Boost. As others wrote, you can tune for battery life and keep it on.
> However, Intel 12th gen is only able to pull these impressive performance feats due to very high turbo boost clocks.
Don't forget 12th gen Intel CPUs have a lot more cores than 11th gen Intel CPUs. That's where the significant benchmark improvements are coming from. Unfortunately though, these often don't reflect the real world use case. Having more cores isn't going to improve the day-to-day performance very much. The new efficiency cores probably improve battery life though
Maybe the new efficiency cores improve battery life over the 11th gen Intel CPUs but they're still far from the battery life of AMD or Apple silicon CPUs
For sure.
I believe Intel 12th gen is not properly tweaked yet; currently the benchmarks show that battery performance is worse with 12th gen than with 11th gen. But if you 'limit' the chip and force usage of the efficiency cores then you should be able to beat 11th gen.
Good to know ty!
You can usually use the "Extreme Tuning Utility" (Intel XTU) to scale down without disabling Turbo Boost. I use it to scale my ~4.8 GHz cores to 3.2 GHz, so they can run without fans on my small form factor PC. 3.2 GHz x 8 is enough to run most of my favorite games.
Without TB, I'm down to 8x 2.4 GHz, which is a bit sluggish.
You may get better results by leaving the clock speed settings alone and adjusting the long-term power limits to an acceptable level. That way you won't sacrifice peak performance (ie. latency, especially with only one or two cores in use) but will still get most of the power savings.
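On Linux, one way to do that is through the powercap/RAPL sysfs interface; a minimal sketch, assuming the intel-rapl driver is loaded and that `intel-rapl:0` is the package domain with `constraint_0` as the long-term (PL1) limit, which is the usual layout (run as root):

    # Lower the sustained (PL1) package power limit instead of capping clocks.
    # Assumes Linux's intel-rapl powercap driver; values are microwatts; needs root.
    PKG = "/sys/class/powercap/intel-rapl:0"   # package-0 domain on most systems

    def read(path: str) -> str:
        with open(path) as f:
            return f.read().strip()

    def set_long_term_watts(watts: float) -> None:
        # constraint_0 is conventionally the long-term limit, constraint_1 the short-term one
        with open(f"{PKG}/constraint_0_power_limit_uw", "w") as f:
            f.write(str(int(watts * 1_000_000)))

    if __name__ == "__main__":
        print("domain:", read(f"{PKG}/name"))
        print("PL1 before (W):", int(read(f"{PKG}/constraint_0_power_limit_uw")) / 1e6)
        set_long_term_watts(15.0)              # example: cap sustained package power at 15 W

The upside of this approach is exactly what the parent says: short single-core bursts still reach full clocks, only the sustained draw is capped.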
I'll echo the concern about Turbo Boost really helping at times. Often you have a single-threaded process running that benefits from a single core running at a higher clock rate for a short period of time, and that improves the experience tremendously.
How is 12th gen compared to 10th gen, in your experience? My XPS 13 on a 10th gen i7 just becomes absolute syrup on battery, even if I put it in high performance mode.
I have an XPS 17 with 12th gen and it drains around 50% battery per hour on a zoom call.
edit: My laptop has a i7-12700H, which uses more power than the cpu in the framework laptop.
More thoughts: The laptop is pretty much always plugged in. When I'm out I carry a little 60W charger with me.
However for other reasons I wouldn't recommend the XPS 17. I have a Lenovo legion 7 at home that I'm more happy with (cheaper, better performance, lots of ports, etc)
I had similar numbers on my i9 MBP.
Last week, on my M1 MBP, I spent an hour in a Teams (Electron!) call while off power and battery went from 99% to 96%.
Intel needs to match that to even be in the game for portables IMO.
The M1/M2 laptops are so above and beyond anything from AMD and Intel that it’s almost a joke. If Apple was really smart and had enough supply, it could drop the price down a couple of hundred dollars and completely take over the laptop market.
Price isn't the thing that keeps most people I know away from Macbooks (after all, it's the employers/business expense). I'm sure many people would like an M1 machine for the efficiency but the software is simply not ideal.
If they invested in contributing Linux drivers they'd probably be able to take over the market for developers. Asahi is slowly improving, but it doesn't feel ready yet to be someone's sole daily machine.
Ultimately my M1 MBP sits on the shelf collecting dust, except for occasional testing. My smaller and lighter Intel laptop already gets 8 hours of battery life which is more than enough for me, and has perfect Linux support.
If they added some kind of Bootcamp for Developers, I'd switch to a MBP tomorrow.
And I just bought a maxed out XPS 13 Plus, which I generally like.
I suppose you mostly know ppl who run Linux as their primary OS then? Certainly in web dev Macs have been extremely popular.
As someone who uses Linux daily and generally dislikes macOS, I got all my Linux tools up and running on macOS, Docker runs well and many Linux things I run on an external cluster. So I am content.
The macOS UI is a bit annoying, and the fact that you have to install tiny apps just to have separate trackpad and mouse scrolling directions is not ideal, but yeah, it's all dwarfed by the fact that I finally can't hear any fans and performance is just as good as when plugged in. I can't remember when I last had a laptop on the sofa without having to worry about the charger or getting my lap uncomfortably hot.
You don't need separate apps to set scrolling directions for the trackpad and an external mouse. It's all in System Preferences.
They're separate tickboxes yes, but if you tick one it ticks the other.
At least it did when I still used macOS on portables. Now I only have a mini left for work
I just checked on my work Macbook, this is still correct and utterly stupid behaviour. 'Natural scroll direction' is global for all pointer devices. Internal, external trackpad and mouse.
The toggle bundles the trackpad and mouse scrolling settings into one - I want "natural scroll direction" on the trackpad and reversed on mice.
Scroll reverser allows you to set them up separately.
I've never used Linux daily, always macOS or osx, but all the same tooling probably day to day and the M series will be a no-brainer upgrade when it comes time. I would like some more colours that aren't super fingerprinty in the pro line though. Pretty bored of silver and grey
> The M1/M2 laptops are so above and beyond anything from AMD and Intel that it’s almost a joke.
This may have been true when the M1 was just released, but the gap both in performance and power consumption is shrinking. And that's with the competition still being one process node behind.
I assume the M1/M2 performance versions are much more expensive to produce than AMD processors (very large L1/L2 caches; compare Ryzen vs Ryzen X3D performance, and X3D only adds comparatively cheap L3 cache). Apple can afford this, I assume, with high-end laptop margins and from owning the supply chain. Apple has the benefit of not having to tell, and you can't buy an M1 on its own (?). I would be interested in production costs of Ryzen vs. M1/M2 processors - any sources?
Also, I assume ARM needs fewer transistors than x86, so they can use more die area for caches.
If you look at die shots of any new chip, x86 or ARM, the actual CPU cores are taking up a smaller and smaller percentage of the die.
The rumored 15" MacBook Air may just do that. The base M2 chip is plenty fast and with the larger case they could put an even larger battery in and push battery life to 30+ hours.
It's already way cheaper than comparable windows machines
I recently turned down an M1 MBA for $1100 in favour of a 2015 MBA for $500. The keyboard is so dramatically better on the 2015 model, I couldn't believe it. The battery is great; it seems to last about 10h on a charge while writing, browsing and using Discord.
I don’t care how fast it is as long as it continues to have those mediocre low travel keyboards. They’re better than their worst keyboards from the 2018 era but they still aren’t good
Why not just buy a keyboard (that's better than any laptop keyboard)? There are many benefits.
I always consider the built-in keyboard as a sort of emergency/convenience tool.
I’d rather have a portable device that doesn’t need a separate keyboard
I also have the XPS 17 with the i7-12700H and 4K screen. I'm able to get 8-10 hours of work (compiling Rust, several Docker containers running, at least a dozen tabs, etc...).
I'm on ArchLinux with a window manager and no desktop whatsoever. I also disabled the GPU. I wonder if there is something draining your battery on the background.
Thanks for the real-world numbers! This is good for my emotional state, as I've switched from Windows to a MB Pro solely because of the battery life and performance of the M1 Pro chip.
Zoom battery usage per hour should be a standard metric. Nothing kills a battery faster.
The one with a 17" 4K HDR touchscreen and an RTX 3000-series GPU? Those will deplete the battery far quicker than the processor.
We all wish intel could release a processor with similar power consumption to the m1, but it's not like they just overlooked battery life. x86 is just fundamentally not competitive with arm on that front, unfortunately. The only advantage it seems to have from an architecture standpoint is raw performance
They have the "intel atom" brand for low-power CPUs
As do I.
Meanwhile in a server, I'm starting to worry that as I work to flush performance issues out of the system, especially latency problems, I won't see the sort of perf numbers I'm expecting: machines running well below 50% CPU utilization are likely to dogleg the moment they hit thermal limits, and I have no way to predict that, short of maybe collecting CPU temperature telemetry. Not only might my changes not translate into 1:1 or even 1.2:1 gains, they might be zero gains, or just raise CPU utilization.
CPU utilization that shifts around "magically" with thermals also takes away one of the useful metrics for autoscaling servers. Thermal throttling opens the door to outsized fishtailing scenarios.
Tested an i7-1260P recently and it ran like a beast. What's interesting is that they have a discrete GPU now, the Intel Arc, which should take a lot of the load off the CPU.
What's wrong with fan noise? throw on headphones. If anything I wish my fans would go louder and harder. Nothing I hate more than a throttling laptop.
Same. I prefer that the manufacturer dictates to me how I use my end user Apple device as long as it has marginally better battery life.
intel's comet lake or whatever is pretty comparable to M1 imo
Ouch
The coolest thing isn't just the cpu upgrade, but that the whole mainboard is something like a standard. Other laptops have weird shaped boards just for that one model.
If this 'standard' takes off it could start an entire ecosystem of modular hardware. That's exciting. I'm hoping for a 'Framework surface', a 'Framework tv', a 'Framework 400', etc.
An even cooler thing is that since the mechanicals are all published, anyone can actually make their own relatively easily, like this Framework Tablet: https://www.reddit.com/r/framework/comments/wgzwv1/two_big_d...
* https://github.com/whatthefilament/Framework-Tablet
* https://www.instructables.com/Framework-Tablet-Assembly-Manu...
Let me just get the tooling set up on the dining room table, I am sure my spouse won’t mind …
You're not allowed to have hobbies or tinker around? :(
I think it's more of a "people are getting tired of internet DIY projects that require thousands of dollars of tooling and existing materials."
You're in a startup-biased forum. If you don't like it as a hobby, how about thinking how the openness is good for business? Your buddy can start a company making Framework-compatible tablets, or a mini server rack, or Framework-compatible keyboards (ortholinear layout please!), etc.
I'd pay very very good money for a Framework router or a Framework TV...
Imagine hardware that you control, that you can upgrade piecemeal when something gets out of date. Besides the eco friendliness, it just sounds... nice?
I went from saying "if only they had an AMD" to getting a different machine on ARM, and now I'm saying "if only they had ARM".
I never realized how much heat, vibration, air blowers negatively affected me before. A framework laptop with some type of ARM or ARM-like cpu could do a lot with the space savings on cooling.
I've played with an AMD 6800u laptop and I would say it's the ideal x86 laptop chip right now. Normal usage at the 12w mode or light gaming at 20w mode was super impressive. Even though 12th Gen Intel chips have made great performance gains Intel is still relying on unsustainable high turbo boosts to get the benchmark numbers.
I just recently stumbled across these two otherwise identical laptops.
https://psref.lenovo.com/syspool/Sys/PDF/Lenovo/Lenovo_Slim_...
https://psref.lenovo.com/syspool/Sys/PDF/Lenovo/Lenovo_Slim_...
The AMD version is rated for 20% greater battery life just from the CPU difference.
Exactly. Not just 1 model but you can compare identical ThinkPad models as well and AMD versions have significantly better battery life. Not only that, even last year's identical models with 11th Gen Intel CPUs have better battery life than their new 12th Gen models. So Intel's power efficiency actually decreased compared to last year, even though they introduced the hybrid architecture.
In some instances Alder Lake CPUs slightly edge out Apple's CPUs in performance, but with a terrible battery life (~4 hours of real world battery life vs >10 hours on Apple laptops) and noisy fans.
AMD on the other hand seems to have focused on getting a good balance this year. 6800U models have significant performance improvements over the last gen while improving the battery life as well.
I am thinking of buying a new laptop this year, but I am waiting for the AMD models. For anyone interested, these are the models I am waiting for:
https://psref.lenovo.com/Product/ThinkBook/ThinkBook_13s_G4_...
https://psref.lenovo.com/Product/ThinkPad/ThinkPad_X13_Gen_3...
Both have much better battery life than their Intel counterparts:
ThinkBook 13s Gen 4 (AMD): 14 hours
ThinkBook 13s Gen 4 (Intel): 10.9 hours
ThinkPad X13 Gen 3 (AMD): 15.68 hours
ThinkPad X13 Gen 3 (Intel): 11.8 hours
These numbers are usually lower in real life, but they are comparable with each other.
> These numbers are usually lower in real life
There are data sheets and then there is real life. People were raving about AMD laptops before. I got a ThinkPad T14 AMD with a 4750U. The stated psref battery life is MobileMark 2014: 14 hr, MobileMark 2018: 11.5 hr. In practice that was more like 7 hours on Windows (with a mild workload) and 3 hours on Linux. The CPU performance was merely ok and I could get the fans blowing quite quickly.
Life with the laptop was quite miserable. There were many other paper cuts and despite being Linux-certified, I used Windows with WSL most of the time since basic things like suspend would at least work relatively reliably.
I sold the T14 after 7 months (thank deity that there was a chip shortage, so I could recoup quite a lot of the 1400 Euro I paid for the T14), got a MacBook Air M1 and didn't look back. It's a world of difference.
That does not match my own experience with a very similar t14s equipped with same CPU. I get much better battery life than 3 hours on Linux even with moderate workload (compiling, browsing, etc.). I often use it on the go and it’s been pretty great.
Lenovo ThinkPad T14 AMD Gen 1 here. My experience is totally different, using Fedora 36, everything works! Battery life I have no exact numbers, but definitely way more than 3 hours.
Within a year the kernel should support proper power management on these new AMD systems. I have a year-old 5800H that went from 3 hours to 11 hours of battery life in typical use, just due to kernel version updates. It's like a new system.
It is such a shame that certain major distros don't ship latest kernels. I understand also shipping LTS kernels for customers/users that want the stability, but the default for desktops and install media should be a really recent kernel.
I think it’s pretty common to install the HWE kernel on Ubuntu.
This. The newer kernels support amd_pstate and it's a big difference.
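A quick way to check whether amd_pstate (rather than the older acpi-cpufreq) is actually in use is to read the cpufreq sysfs attributes; a read-only sketch:

    # Print which cpufreq scaling driver/governor the kernel is actually using
    # (e.g. amd_pstate vs acpi-cpufreq). Read-only, no root required.
    from pathlib import Path

    cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")
    for attr in ("scaling_driver", "scaling_governor", "scaling_max_freq"):
        p = cpu0 / attr
        print(f"{attr}: {p.read_text().strip() if p.exists() else 'n/a'}")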
I'm also on a fully-specced T14 Gen 1 AMD with the same CPU (4750u) and I get a full day's use out of it under Fedora 36. And CPU performance is way more than OK for my use-cases. I'm curious what you were doing with it.
There is a big improvement from the 4000 series to the 6000 series.
Something was screwed up, very likely on Windows, definitely on Linux. My 5800H gets 9 hours of battery life on Linux with KDE (full glass blur effect, IntelliJ, Firefox, etc...), without any undervolting or underclocking, at ~80% battery, 1440p60 15.6 inch screen. And about a third of the power consumption comes from the desktop class SSD I put into it.
AMD 6800u also has insane graphics compared to the Intel. You can actually play triple A games at acceptable framerates.
Since this is Hacker News and a lot of people here run Linux, I want to remind everybody to hold your horses until you've tested Linux on these machines.
The machine that I'm currently using comes with a Ryzen 6800H CPU and LPDDR5-6400 RAM, made by Lenovo. On Linux, the builtin keyboard doesn't work because of IRQ problems (see [1]; a fix is also available at [2]), and it constantly spams out "mce: [Hardware Error]: Machine check events logged" messages.
[1]: https://bbs.archlinux.org/viewtopic.php?id=277260
[2]: https://lore.kernel.org/all/20220712020058.90374-1-gch981213...
If you read the code in [2], the patch basically disables the IRQ override for every new Ryzen machine (`boot_cpu_has(X86_FEATURE_ZEN)`). Based on that, I assume every new Ryzen CPU has the same problem(???)
Edit: wait... blaming it on the CPU might be unjust, it's more like a kernel adoption problem.
Luckily, compiling Linux kernel with 16 threads is not really too big of a struggle, you can just apply the patch manually every time the kernel updates :-) :-| :-(
> On Linux, the builtin keyboard doesn't work because of IRQ problems (see [1]
The exact problem is that an IRQ/ACPI workaround is not needed anymore on modern Zen platforms, and it (the workaround) now breaks compatibility. It's already in linux master, and will be released for v6.0
Commit message of the fix:
These ACPI tables are commonly broken. Who creates and tests them?
Some vendors may only 'test' as far as getting the compiler to produce something without errors. So, first thing an end-user/consumer/enterprise customer can do when encountering a new platform is to run the platform through https://uefi.org/testtools, in particular the FWTS/Firmware Test Suite and SCT/ACPI Self-Certification Test, and hold the vendor accountable. Chances are the vendor already has a member on the board of the Unified Extensible Firmware Interface Forum.
The vendors, against Windows.
Here's a report from back in April from a user running 5.17.x on a 2022 Asus Zephyrus G14 w/ a 6800HS and no keyboard issues, so I don't think it affects all laptops running Ryzen 6000. https://www.reddit.com/r/linuxhardware/comments/u5p1rs/zephy...
My AMD Gen 1 has been nothing but trouble under Linux.
But it's not the CPU, it's Lenovo. Lenovo has a terrible firmware team.
I'm on a T14 Gen 1 AMD with Fedora with the latest BIOS for the past 2 years and haven't really experienced any issues. Perhaps I'm missing something here. Care to elaborate?
Battery life is worse than in my 10 year old laptop. Battery life in suspend is exceptionally bad. I haven't ever used anything THAT bad.
You can read more about this in my blog
https://127001.me/post/ten-years-of-thinkpadding/#_the_ugly_...
Thanks. The post says all AMD-based ThinkPads are affected, and I can't confirm that with either of my T14 Gen 1s, which I leave on S3 for multiple days. I haven't tried the S0-something yet, but will do. It looks like you did way more research than me, so I believe that it's a real issue. Perhaps it's down to the individual component combination, like planar revision, battery, etc.?
My experience is otherwise, see my comment above.
If you don't mind, can you test Ubuntu or Fedora to see if the issue is present?
The reason I'm asking, according to a Lenovo talk from DebConf22 [0], the guy said it takes some time for patches to reach upstream, but with Fedora and Ubuntu they have some connections to shortcut some patches before they reach upstream.
[0] https://www.phoronix.com/news/Lenovo-Linux-2022-State
I've already set up a production environment on this machine, so it's not really easy to test Ubuntu on it. Fedora, on the other hand, is the exact distro I'm using.
Sad news is, according to my test, the latest kernel Fedora 36 offered, which is kernel-5.18.16-200.fc36.x86_64, does not come with the keyboard (IRQ override) fix.
Another thing is the mce errors, or more specifically, errors similar to:
It was never fixed on either the patched or unpatched kernel. Of course, those were just the two most annoying problems. There are many smaller ones, such as: 1) the screen won't turn off after timeout if an external display is plugged in via HDMI, 2) Linux S2 (suspend-to-RAM) never wakes up, 3) the builtin mic doesn't work, 4) the fingerprint scanner crashes when you put your finger on it.
I guess it takes time for those engineers in Lenovo to address those problems, and Lenovo is not a Linux-friendly company (they are more like a Linux-meh company).
Framework on the other hand, cares about Linux more. Sadly they don't operate in my country :(
I've been running Linux on a Lenovo with a 5800H for almost a year with zero issues.
I did have plenty of freezes on my old 2500u machine though. Was very disappointed. Thankfully, those days are long gone
Note that this is more of a motherboard issue than a CPU problem. The fix is basically to disable a previous fix for some other buggy hardware.
Plenty of people have new Ryzen laptops that work just fine.
I have been using a 6800U on Linux for the past two weeks. With kernel 5.18 almost everything is supported out of the box. The few issues I have:
- Kernel logs show that the GPU driver crashes from time to time. When it happens the screen freezes for a few seconds but recovers.
- The HEVC hardware decoder gives green artifacts in VLC using VA-API.
- It seems not to support advanced power management; on battery the CPU will happily boost to 4.7 GHz, which in my opinion makes little sense. AMD has been steadily pushing work related to power management, so I expect it to improve. As it stands, total power consumption on battery averages 5.1 W.
AMD does automatically switch to a "power saving" profile on battery that lowers power consumption significantly vs on charger, but if you want, you can tweak even further with a fantastic tool: https://github.com/FlyGoat/RyzenAdj - it's a simple CLI util so easy to set up w/ udev rules to run automatically. On my old Ryzen laptop, I have it set to run `ryzenadj -f 48` when I go on battery, which limits the temperature below my laptop's fan hysteresis for a very efficient and totally silent system, and `ryzenadj -f 95` to set it back, but you can also adjust most power and clock parameters (not everything works on Ryzen 6000 yet, but the basics should). The docs/wiki are also great and has lots of charts illustrating what each setting does: https://github.com/FlyGoat/RyzenAdj/wiki/Renoir-Tuning-Guide
The other tool you could check out (and use in conjunction with, or instead of, ryzenadj) is auto-cpufreq: https://github.com/AdnanHodzic/auto-cpufreq - it lets you switch governor and turbo/clock speeds automatically based on battery state, if you're just looking for a simpler way to set that up.
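For the curious, the same idea works without udev rules by just polling the AC adapter; a small sketch that reapplies the limits mentioned above (the adapter name and the `-f` values are assumptions lifted from that comment - check /sys/class/power_supply/ for your machine's adapter name; needs root and ryzenadj on PATH):

    # Poll the AC adapter and reapply ryzenadj limits on state change.
    # Adapter path and "-f" values are machine-specific assumptions; run as root.
    import subprocess
    import time
    from pathlib import Path

    AC_ONLINE  = Path("/sys/class/power_supply/ADP0/online")  # often ADP0, AC or ACAD
    ON_BATTERY = ["ryzenadj", "-f", "48"]   # cooler/quieter limit (from the comment above)
    ON_AC      = ["ryzenadj", "-f", "95"]   # restore the higher limit

    last = None
    while True:
        on_ac = AC_ONLINE.read_text().strip() == "1"
        if on_ac != last:
            subprocess.run(ON_AC if on_ac else ON_BATTERY, check=False)
            last = on_ac
        time.sleep(5)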
5.19 has a bunch of amdgpu fixes that might fix some things https://lists.freedesktop.org/archives/dri-devel/2022-April/... but sadly there are also sometimes regressions/weird bugs: https://gitlab.freedesktop.org/drm/amd/-/issues?page=90&sort...
Is it possible somehow to set separate CPU/GPU power limits like is possible on the steam deck?
Can I ask which laptop that is?
Sure, HP 845 G9.
At least on most ThinkPads, the AMD versions lack Thunderbolt 3/4/ USB 4.0 Full Feature (e.g. including TB) so if I have a Thunderbolt docking station, it will probably not work. Nobody confirmed to me, what would work in such a setting. Will Lenovo/ AMD provide USB 4 upgrade using firmware?
Of course, the next question is the potentially asymmetric RAM on the ThinkPads with one channel soldered and one SO-DIMM slot, e.g. the 48 GB version. Will it use dual-channel until ~32 GB and then single-channel? How will the GPU perform, e.g. will it use only one slot in such a configuration. So many questions...
The newer AMD 6xxx series supports Thunderbolt 3 and USB 4 (USB 4 spec includes TB3), which has quite a few compatible docking stations available
Formally yes, but it isn't listed e.g. in the ThinkPad T14 Gen3 (AMD) specs as supported, they are all USB 3.2 only. No TB, no USB 4. TB 3 seems to be optional in the USB 4 spec, as it would need to support 40 Gbps speeds and not "only" 20 Gbps.
The T14s Gen 3 has USB 4 support listed. Lenovo is all over the place with which AMD models support it.
The 6800U models on the market just show that the integration by the OEM is far more important than the choice of CPU. The Asus Zenbook S 13 manages to hit 60°C on its bottom case under sustained load, which is an impractical temperature. The Lenovo Yoga 9i, which on paper uses a hotter Intel CPU, runs cooler, quieter, and longer while also beating the Zenbook on benchmarks. So you really have to look at the entire package.
I don't doubt those benefits are real, but I recently bought a Samsung Galaxy Book with Intel 12th gen and integrated graphics, and it is actually fantastic. Super thin, lightweight, silent, and low heat, while still boasting 12 extremely fast cores. And it was very affordable. It is very weird for me to buy a laptop where I have almost no complaints... and Samsung did a splendid job with all the fit and finish. I'm sure a lot of it is in fact thanks to the non-CPU upgrades like DDR5, since CPU is rarely the bottleneck, but it is all the same to me.
This to me sounds like the experience people were praising the M1 chips for, and tbh Intel actually delivered for once. I say this as someone who had been solidly in the AMD camp for a long time. Thanks to competition, Intel now has to be good. Weird!
A 12th gen machine with low heat is the exception rather than the norm, sadly. It must be possible, since there are a few machines out there that manage it, but most machines right now run quite hot and have low battery life (compared to 11th gen), especially the P-series in a thin-and-light form factor (I am writing this comment from a Lenovo X1 Carbon Gen 10 with the 1280P, which can't even match the Cinebench score of the frame.work).
How is the battery life? M1 laptops can easily last the whole day without plugging for most users.
I am able to get 4 or 5 hours of intensive programming done on battery power... running multiple IDEs, Android emulators, browsers, etc. I've never tried it with a normal usage pattern because I only use battery power for working.
I can run 3 copies of eve online in full brightness and full quality graphics for 7 hours on the M1.
That’s pretty bad compared to m1. Less than half the battery life.
Also Samsung laptops don’t last in my experience. They make short term tradeoffs at the expense of long term reliability.
Maybe, but 4-5 hours is solidly the point at which it doesn't matter at all to me.
I haven't had trouble finding an outlet for years, even at airports, planes, coffeeshops, etc.
I used to think this as well, but the all day battery life on M1 really made me realize how much of a mental tether outlets used to be. It seems pretty minor, but not worrying about getting a seat at the {coffee shop/airport/lounge} next to an outlet makes working on the go a much more attractive option now.
Don't power banks achieve the same and more? They are a bit heavy, but it sounds like it would be worth it to you.
I guess so, but that's another thing you need to remember to charge. I feel like that shifts the mental tether from, "I hope there's an outlet", to, "I hope my power bank has enough juice".
I travel a lot. That’s going to be increasing by the end of the year as my wife and I live the digital nomad life flying across the US. The freedom of not having to worry about your battery life for a day of real world use is amazing. I’ve only had that over the years with an iPad before ARM Macs.
Not to mention your laptop not getting hot enough to boil eggs, or sounding like a 747 when you open too many Chrome tabs.
The freedom of running Docker on a laptop without heating it to 60c is too tempting, unfortunately. My Macbook will get 2-3 hours of battery life over my Thinkpad, but my Thinkpad also doesn't burn my palms with regular use. It's a game of tradeoffs, but I rarely find myself reaching for the Macbook if I've got a desktop close by.
Maybe that's a skewed perspective though. I have yet to try the Pro/Max chips seriously (or the mobile 12th gen chips, incidentally), but I don't really find myself pining for them either. If my laptop is too limiting, I hop on Tailscale and dev on my desktop.
Man, the heating on my MacBook is insane. I have a Thinkpad with on board Nvidia card, beefy CPU, etc, and it can play gfx intense games at 4k and still sit in my lap comfortably.
Meanwhile my MacBook from work gets risking-burns-hot from just screensharing on a Zoom call.
Is this an arm MacBook? It sounds a lot like an Intel one.
Unfortunately, all Macbooks will get pretty hot running Docker, Apple's fan curves will always let your machine heat up before being audible.
I am not sure how accurate this is - I have run my Macbook M1 in clamshell mode for the past year and have almost never heard the fans, and never even felt heat coming off of it, despite having Docker / JetBrains full IDEs open
Running a local dev environment tends to hit ~75c for me on Apple Silicon, I think across 4 or 5 containers. I've also run this same environment on my Thinkpad T460s (with a paltry i7-6600U), which settles out around 45-50c.
What type of battery life are you getting on an Mx MacBook?
I'm getting about 10 hours on my M1 16" MBP running a JetBrains IDE and 8 Docker containers.
FWIW I get about the same battery life on Linux with an AMD laptop. It's not as power efficient under load but not having to run a VM for docker helps a lot.
When depending on solar it’s pretty important to be low power, m1 drawing 10-20 watts vs something else drawing 60-80 can make a big difference on batteries
The reality distortion field of Apple fans is really getting out of hand. Portable laptops (i.e. not the desktop-substitute gaming machines) have not been using 60W for years. In fact my X1 from 2016 typically shows between 6 and 15W in powertop depending on load, and has been lasting >8h on Linux (less now, as the battery has aged a bit).
I used to think like this. But after using an M1 Macbook Pro, I've changed my opinion. This is true all-day battery. You never have to think about battery anymore. It makes a big difference to my workflow, even though I'm literally 24 hours at home, never more than a few feet away from a wall socket.
I'm willing to put up with all the downsides of this laptop and the OS, just because of the battery life.
I just got an M1 Pro 14” 10c 32GB RAM and while the battery life is great compared to anything else I’ve used, it’s not as amazing as I would have hoped.
I get around 6 hours from 100% with a light Webpack dev workflow + Docker for Mac running in the background (arm64 postgres and redis containers).
Is that normal? powermetrics shows package power bouncing around 500 mW - 10 W when compiling, which would suggest I should be getting a lot more battery life. Is there a utility to reliably check power consumption of the whole system? Stats.app seems to think my system idles around 5 watts and goes to 40 watts when compiling, which is much higher than the package power, and my screen is only at 1/3 brightness.
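One option worth a look is `powermetrics` itself, which at least gives the SoC-side numbers; a minimal sketch that just surfaces its power lines (needs sudo, and the exact output format varies between macOS versions, so this only filters rather than parses):

    # Sample power figures from macOS's powermetrics and print the "Power" lines.
    # Needs sudo; output format differs across macOS versions, so no strict parsing.
    import subprocess

    cmd = ["sudo", "powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", "5"]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Power" in line:
            print(line.strip())

Whole-system draw (display, SSD, radios) won't show up there, which may explain the gap between the package power and what Stats.app estimates.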
> That’s pretty bad compared to m1. Less than half the battery life.
Not really. My M1, used for email, browsing, some xterms, some emacs, 3 or 4 video meetings and the Goland IDE, only gets about 5 hours.
When new it was a little longer than that.
That is actually terrible. I put a new battery into my 2012 macbook and I get about that much battery life with similar usage (although my compute is done on a server).
> That is actually terrible. I put a new battery into my 2012 macbook and I get about that much battery life with similar usage (although my compute is done on a server).
My compute[1] is done locally only. My personal laptop is an HP Victus; it gets around 7 hours because I don't compute locally on it, which makes the M1 look okay in comparison.
I always wonder about these people who claim the M1 can last a full day without needing to be charged - they can't be developing on the thing, can they? If they're not doing computations on the thing, then the battery life is quite similar to all other non-gaming laptops in that price range.
[1] If by 'compute' you mean compiling code, running it, debugging it, etc
The P-series CPU this article is advertising is not the one you want if you are concerned about battery life. The P CPUs consume as much energy as is practical, in their quest to go faster. These CPUs use 28-64W when active, while their slower U-series counterparts use 15-55W. There was a Dell laptop on Notebookcheck yesterday that had a measured 13-hour battery life and a U-series CPU.
ETA: Comparable laptops reviewed by that site get 8hr runtime from a 71Wh battery and the P CPU, or 13 hours from a 60Wh battery and the U CPU.
> I never realized how much heat, vibration, air blowers negatively affected me before.
It's like a constant drilling sound that gets into your brain and makes thoughts flow like through mud in full sunlight.
For that reason, even though I don't like Apple and I am not a fan of macOS, I decided to buy an M1 MBP.
That laptop is a life-changing experience. You can even put it on low power mode and you won't lose much performance, and it is absolutely silent. Even if you run a prolonged heavy task and the fans kick in, you will barely hear them (ha, hope it won't get worse over time, and that there will be a way to disconnect them if that becomes a problem).
Oh, and no coil whine. Even when my XPS 15 miraculously doesn't have its fans on, you can be sure their noise is inconveniently replaced by a distinct coil whine that is just as annoying as the fans.
I am probably going to buy another macbook, this time the Air one just for leisure and learning.
I'm honestly impressed that so many people in these threads work in environments quiet enough that the fan is the largest source of annoyance. If I'm inside it's drowned out by my window AC unit. If I'm outside on the patio it's drowned out by my neighbor's window AC unit. If I take it to work then it's the HVAC vent over my desk that's the dominant source of noise, followed by the construction site across the street. Usually I use third-party tools to pin my fan at max speed when I'm doing anything, because it keeps the keyboard cooler.
My XPS didn't blow its fans much. My MacBook Pros (Intel) always did. As others have shown, heat management in Intel MacBook Pros was abysmal and not comparable to other Intel laptops. Yes, the M1 is a huge power-efficiency jump forward, and I would love affordable ARM desktop machines, but a lot of the MacBook Pro (Intel) problems came from very bad heat management.
[Edit] My understanding is that a lot of the power efficiency and performance comes from the large caches - not only from ARM - since cache is more efficient and faster than main memory (also see AMD X3D, though that only increases L3). Per core (AFAIK): Ryzen 7 L1/L2: 32/512 KB, M1 L1/L2: 192/3072 KB.
Is there any other ARM processors in Apple's M1 class of performance? I haven't kept up with progress on the CPU front.
Really the only one who has a chance, in my opinion, is Nvidia, since they have the experience and know-how to deliver chiplets on cutting-edge processes. They've worked on ARM cores in the past. It might be several years out, but I'd be surprised if they don't try once Qualcomm's exclusive deal with Windows ends.
Unfortunately, not even close.
unfortunately? as in no competition to drive progress?
Not too many companies have a market cap over a trillion dollars, and if you don't then you can forget about competing on the upcoming process nodes that will require enormous capital.
Samsung's internal processor division seems to be sucking. Google is trying it half-heartedly like they do with nearly everything. Nvidia and AMD haven't really tried to compete for the desktop ARM market, and Intel certainly hasn't. Qualcomm has rolled their mobile chips for laptops, but they remain mobile chips in a larger form factor. So Apple is in a class of its own for desktop-grade ARM CPUs.
I hope they move into workstation. Something like a quad M2 MAX would probably sell like there is no tomorrow.
Hopefully Qualcomm’s Nuvia acquisition will bear some fruit sometime soon.
It’s not drive. Just investment. There is no demand for high performance desktop/laptop ARM CPUs outside Apple.
Chromebooks are mostly low price, and may lack the software to truly push the chips often (without dropping into dev mode).
Windows on ARM seems to be a near failure. Very little native software and x86(-64) emulation is far slower than Rosetta. May not matter, they seem to be low cost machines anyway.
Linux just isn’t a big enough consumer market to drive it.
Server seems to be the big front for ARM, but that’s a very different chip. So I’m under the impression ARM laptops (outside Apple) are just phone chips, which don’t compete on the same level.
> Windows on ARM seems to be a near failure. Very little native software and x86(-64) emulation is far slower than Rosetta.
I'm very happy with my SP X running insider Windows 11 and WSL2. You couldn't pry it from my dead fingers for travel use right now.
absolutely the same
Or if you want to use ARM but don't want to be tied to the Apple ecosystem.
Or if you want to use Apple but don’t want to be tied to the Arm ecosystem.
Surface Pro X works great, i can attest.
I have absolutely no data or information to back this up, but I predict a RISC-V based Framework within the next 5 years. And I'll be one of the first customers to buy one.
First gen unlikely to be low(er) power though, surely? I can see this emerging, but power would have to lie in the space of "premature optimisation" risks. Intel, AMD & ARM are all multiple generations in, have understood their exposure to parasitic effects in their VLSI of choice, and have fabs at viable defect rates. RISC-V is more played with in FPGA or emulation than otherwise. (Happy to be corrected, but I think this remains true.)
Why wait?
https://arstechnica.com/gadgets/2022/07/first-risc-v-laptop-...
That's super exciting, but I'm not familiar with Xcalibyte. Are they a good and reputable maker?
Is frame.work a good and reputable maker?
Not saying they won't turn out to be, but they are still young considering the space they are in.
Similar holds for any other hw maker that is only starting out!
Yes I think frame.work is a good and reputable maker. They are indeed still young, but they've made and kept a number of big early promises, and have been consistently shipping. There are some bugs/quality concerns but I'm confident they'll work them out.
I've bought hardware before though from companies I'd never heard of, and either never received it or when I did it was much poorer quality than advertised or was so long after the order that I'd nearly forgotten about it. Would hate for that to happen with xcalibyte
Or any maker that starts using RISC-V.
Apple's M1 only worked because everything is in one chip. Very different from the goals of Framework. There aren't any vendors offering M1 competitors either.
> Apple's M1 only worked because everything is in one chip.
That’s not true in any way?
Could you elaborate why it isn't the case?
Because there’s no evidence for the claim, and the evidence is mostly “Apple has created a huge and complex chip”?
And what does “Everything is in one chip” even mean? Because the memory certainly isn’t, it’s soldered on the package but it’s not part of the package, it just doesn’t take additional room on the main board. And there are a bunch of other chips on the mainboard
Finally, it’s pretty much just following mobile / phone chip SoC design, so any other manufacturer could do the same, if they wanted to create a giant and expensive SOC.
And I want to be really clear on the “giant and expensive” part: the M1 family is the sort of scale you usually see on giant workstation or server chips, the M1 Pro has 33.7 billion transistors, that’s more than a Threadripper 5995WX, a $6500 64 cores CPU. The M1 is just short of the 5950X’s transistor count (16 billions to 19.2), the M2 is above (~20).
> And what does “Everything is in one chip” even mean?
I guess he's mainly thinking of the GPU, which isn't unique. But there aren't that many SoCs with that amount of power in one SoC, so it's close to competing with alternatives that have discrete GPUs, which do increase power consumption.
I believe it has integrated Flash controller too, which is very unique for a laptop/desktop chip, no?
> Because the memory certainly isn’t, it’s soldered on the package but it’s not part of the package, it just doesn’t take additional room on the main board. And there are a bunch of other chips on the mainboard
It's on the package so it can be as close to the SoC as possible. That decreases the capacitance of the traces, which decreases power consumption.
It's not stacked on top of the SoC, which might have been even better (but harder to cool), but it's close.
> Finally, it’s pretty much just following mobile / phone chip SoC design, so any other manufacturer could do the same, if they wanted to create a giant and expensive SOC.
Uh, yeah, anyone could copy the M1 for a laptop/desktop product. But they haven't exactly done that yet have they? That's kind of the point?
> the M1 family is the sort of scale you usually see on giant workstation or server chips
Yeah, which again, almost never packs the kind of functionality Apple does into the M1.
With that many transistors, on such an advanced process, you're going to have a lot of leakage currents, so Apple must have put an impressive amount of work into power management.
I mean, it's not just about being an SoC with lots of things packed within the chip or extremely close to the chip (memory). No. It's apparent that they've focused on efficiency across the entire design process. Even the choice of ARM factors into that (fewer transistors needed for instruction decoding). But I wouldn't say the original comment is completely wrong.
They may be referring to the early rumours/misapprehension that the RAM was on-die and part of the SOC as well.
This isn't true AFAICT.
AMD's laptop CPUs are also "everything in one chip." Intel still has a separate chipset die on the package as far as I can tell.
This is not true. AMD CPUs do not include NVMe controllers, for example. There are probably more things AMD does not do on the CPU itself.
[citation needed]
Cezanne SOC topology: https://cdn.mos.cms.futurecdn.net/ix6FrojFD7anKypadF2vKM.jpg
Rembrandt SOC topology: https://cdn.mos.cms.futurecdn.net/8wnuFCokmeHjESuSbzYWMJ.jpg
Even on the desktop processors, there are onboard lanes dedicated to the NVMe itself, and the entire processor (including NVMe) is capable of booting without a supporting chipset at all - the "X300 chipset" is actually not a chipset at all, it is just using the SOC itself without an external chipset, and you do not lose NVMe functionality.
Not sure if you are using some really weird meaning of "NVMe controller" that doesn't match what anyone else means?
I meant the controller that manages the NAND cells, I assumed it would be called NVMe controller. Essentially the IC that the "NVMe controller" in your image talks to.
That would normally be called a "flash controller" and yeah, of course that lives on the SSD.
(unless it doesn't - eMMC doesn't normally have a flash controller and you just do direct writes to the flash cells... as do some Fusion-io PCIe cards that are just flash directly attached to PCIe with no controller. Sometimes that functionality is just software-based. Flash cards (e.g. microSD) usually do not have a flash controller directly either.)
Anyway, it's true that Apple pushes the flash controller functionality into the SOC while AMD does not. Apple implements their SSD in a non-standard fashion, which is why it's not compatible with off-the-shelf drives. The flash is just flash on a board; I don't even think it's NVMe compatible at all.
So if you want to be maximally pedantic... neither does Apple implement onboard NVMe controllers, just flash controllers ;)
FYI, all current flash card formats have the equivalent of an SSD controller, implementing a Flash Translation Layer. Exposing raw NAND was somewhat viable in the very early days when everything was using large SLC memory cells (see SmartMedia and xD-Picture Card), but no current format could remain usable for long without wear leveling. If you can use standard filesystems and access the drive or card from multiple operating systems, then it must be providing its own FTL rather than relying on the host software for wear leveling.
The above also applies to eMMC and UFS storage as found in devices like smartphones.
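For anyone unfamiliar with what an FTL actually does, here's a deliberately simplified toy sketch (hypothetical Python, nothing like a production controller): logical writes get remapped to the least-worn free physical block and erase counts are tracked, which is the wear-leveling behaviour described above.

```python
# Toy Flash Translation Layer sketch (hypothetical, heavily simplified): logical
# writes are redirected to fresh physical blocks and erase counts are tracked,
# so no single physical block wears out from repeated writes to one logical address.
class ToyFTL:
    def __init__(self, num_physical_blocks: int):
        self.mapping = {}                                # logical block -> physical block
        self.erase_counts = [0] * num_physical_blocks    # wear per physical block
        self.free_blocks = set(range(num_physical_blocks))
        self.data = {}                                   # physical block -> payload

    def write(self, logical: int, payload: bytes) -> None:
        # Pick the least-worn free block (crude wear leveling).
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        # Retire the old physical block: erase it and return it to the free pool.
        old = self.mapping.get(logical)
        if old is not None:
            self.erase_counts[old] += 1
            self.data.pop(old, None)
            self.free_blocks.add(old)
        self.mapping[logical] = target
        self.data[target] = payload

    def read(self, logical: int) -> bytes:
        return self.data[self.mapping[logical]]

ftl = ToyFTL(num_physical_blocks=8)
for i in range(20):                      # hammer a single logical block
    ftl.write(0, f"version {i}".encode())
print(ftl.read(0), ftl.erase_counts)     # the writes were spread across physical blocks
```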
Apple uses NVMe. It's just not over PCIe. It uses a proprietary mailbox interface. Which is fine since NVMe is transport agnostic.
AMD processors provide PCIe lanes, some of which are intended for use with NVMe SSDs but nothing on the chip actually implements or understands NVMe and the lanes used for the SSD in a typical system design are still fully generic PCIe lanes.
> nothing on the chip actually implements or understands NVMe
pretty sure that's false as well... can an X300 system boot from an NVMe drive? I'd assume yes. It does that with a UEFI which lives... where? Oh right, on the chip.
Most NVMe SSDs don't provide legacy option roms anymore either.
> It does that with a UEFI which lives... where? Oh right, on the chip.
No, UEFI is firmware stored on a flash chip on the motherboard, usually connected to the CPU through the LPC bus. CPUs do not have any nonvolatile storage onboard (unless you count eFuses).
Lowering the power of an Intel/AMD CPU enough to passively cool it would yield way more performance than currently commercially available ARM CPUs. (Obviously excluding CPUs that aren't purchasable, like Apple's.)
Does that include the Apple M1 / M2 lines?
I have absolutely no reason to believe Apple will ever sell those to third parties. I've edited the comment to clarify.
After Apple's announcement of the M1, I feel like it is mandatory for such a laptop test to discuss performance per watt and how long you can game on the 11th vs 12th gen processor.
I also feel it is worth mentioning that the 12th gen laptop is priced ~$150 above the 11th gen.
If you're coding against an online build system and the 11th gen lasts an extra hour (or whatever), it's a no-brainer to stick with the old one.
I'm generally not that concerned about power consumption. My laptop is almost never unplugged for more than a couple of hours. I understand that many others don't use their computer the way I do, but personally the only negative effect of power consumption is that it costs an extra $10 a year.
I think not everyone is going to be sensitive to this, but apart from battery life, it's super nice having a computer that's not spewing out heat or running the fans loudly.
But are you concerned with heat and fan noise?
You can't game on M1 or Mac AFAIK
Lots of people just don't game at all. Also, a lot of people like me prefer different gaming and work setup (and different rooms) in order to achieve better work-life balance. I build games on my M1 machine, just don't play on it.
How is M1 for game dev? I've been thinking of getting one for my own personal laptop, but have had a few concerns, mostly just due to the issues of building software on a totally different platform to the majority of your target users (which'll be Windows, x86). I used to have some compatibility worries but I'd guess they've gone away now.
If you are using a game engine like Unity, you should not worry at all. If you need to access Windows-specific APIs, of course it is a different story. A MacBook is a perfect machine for developers: you can build for Android and iOS, it has a great screen and a perfect touchpad, and the keyboard and ports are finally fixed.
I use Godot at the moment, but have been experimenting with frameworks like Bevy and the like. I doubt I need immediate access to Windows-specific APIs as I currently work on Fedora without any issues, though from what you're saying it sounds like you mean more mobile-game dev than desktop.
To be honest, the last time I used a Mac was, I think, a decade ago, so they're just unknown to me at the moment, but everyone else seems to love them, so I'm quite tempted.
I mean Unity in general for desktop and mobile. Mentioned exporting to android and iOS as an extra benefit.
What are the specs of your machine? If you've tried them, how have other engines such as Unreal fared? Do you primarily do 2D or 3D development? How long are the build times?
It depends on the type of game you're shipping. Not every library has caught up to offer a mac arm binary. You still see some Rosetta problems here and there.
You can't dual boot so you can't natively run the game you're working on. The graphics are integrated so you can't test on a PC GPU. VR dev doesn't work at all.
Lots of small things but if you also have a PC its not so bad unless you need some library that just doesn't work.
I game on my M1 Macs. Maybe not the latest and greatest 3D games, but for me that's OK; I have a PS5 for that. There are a ton of older ones that just work, and even many newer ones. Currently playing Baldur's Gate EE.
https://www.applegamingwiki.com/wiki/M1_perfect_games_list
Good ref. None of my current games appear though.
On macOS, probably never in a good way, as macOS doesn't support standard graphics APIs (Vulkan).
In the near future you might if you install linux.
Using box64 for amd64 translation and Steam Proton for Windows translation if necessary.
This is not true. I have a MacBook Pro 16-inch M1 Max and I play a ton of games. Both Dota 2 and World of Warcraft run without issues; you just need to do some tweaking with the settings and also cap the framerate so you don't end up thermal throttling, which introduces stuttering.
I can't believe these M1s throttle! My 2012-era computers never throttled because apparently Apple had better thermals back then. I had a more recent Intel Mac and I sent it back because of the throttling, biding my time for M1s to come down in price, but now I see that won't be worth it.
You can't play the latest and greatest (graphics-intensive games at least), but there are many titles that work just fine, and the performance of the x86 translation is surprisingly good. There are even a handful of games with native builds, like World of Warcraft and Disco Elysium.
You definitely can with emulators for other platforms, such as AetherSX2 for PS2 and Dolphin for Wii/GC.
Gaming on mac is a bit like guessing wrong in the console wars back in the day: there are far fewer games, and they’re usually late.
But if you aren’t picky about specific games, there’s something from every category that runs on m1.
Completely untrue. I replaced my gaming desktop with a Mac Mini for several months. I was able to play many games:
* League of Legends
* Civilization 6
* Stellaris, EU4, etc.
* Minecraft
* Factorio
World of Warcraft has a native ARM64 build, and that's the only game I care about.
Rather, you probably can't game on Mac.
As per most reviews, 12th gen is more power efficient, particularly for the use cases you mentioned, as the P cores wouldn't be working most of the time and the E cores are much more power efficient.
I think for those who primarily work on AC and do not mind increased power draw, 12th Gen is a good choice. As a software developer, I appreciate a few seconds shaved off here and there. With good thermal design, 12th Gen laptops can be silent and cool (I've never heard my MSI GT77 spin up the fans, unless I tell it to with a dedicated hardware button). I understand that people have different use cases and for a lot of customers laptops need to be lightweight. I'm okay with the bulk and weight.
Just ran the Phoronix Linux kernel compilation test (defconfig): I get 74.68 sec on my laptop, on par with a desktop Intel Core i7-12700K, but I can carry my "desktop" around. :) In comparison, Apple M2 scores 416 sec and the AMD Ryzen 9 5900HX 103 sec. It's not much, but the gains compound if you compile and test a lot.
Do you think the very different cultures of macOS users (developers/creatives - coffee shops, nomadic, etc.) vs Windows users (mostly corporate - workstations, meetings, etc.) could be a factor in how the machines are optimised?
Are windows laptop users not allowed into coffee shops and MacOS users never have corporate meetings?
They're just tools ffs, no need to romanticize or stereotype them.
My local Starbucks checks every laptop. Customers with Windows OS on a laptop are asked to leave. Windows VMs are okay if not fullscreen.
What about ReactOS?
I’m rather disappointed in 12th Gen for mobile. I have an AMD 5800H Lenovo laptop and a 12700 Dell. The Dell stutters, the fans spin, etc. The Lenovo is just rock solid and fast; it never stutters or slows down, and the fans spin up only when playing games.
I wish Framework would do an AMD laptop. But Linus put them in touch with AMD and they've done nothing.
Building a new laptop from scratch is time-consuming and costly, so Framework may rightly be spending their time on optimising the product, reducing cost, and reaching more countries.
Linus said he put them in touch and they haven't even spoken to each other. It's going to take time, but they aren't even engaging at the moment...
No, Linus said he put them in touch, and then Linus didn't hear anything.
You're right, just looked up the transcript.
https://www.youtube.com/watch?v=SYc922ntnKM
> source community an amd variant might be a way around this but in spite of me making introductions nearly a year ago i haven't received any update from either side so consider this a public call out
Small companies have fewer resources?
Which is why it will take time… but they are not even entertaining the idea.
I would love to see them add an AMD (and even ARM) option, but right now I applaud them for being tight-lipped about any future developments. What would they gain by acknowledging that they’re entertaining such ideas? People postponing their purchases, endlessly asking for ETA updates, getting mad when there are delays, etc. They’re much better off saying nothing and doing the work in the background. I also would understand if they choose not to offer a second CPU platform at this point. From their standpoint, it’s hugely expensive and time consuming to develop, potentially doubles their QA and support costs, makes their supply chain and inventory story more complicated, and probably makes their relationship with Intel a little worse. And realistically, they won’t get double the sales from just having an AMD option. So I would be pleasantly surprised if they offer an AMD option within 3 years, but I’m not holding my breath.
It can depend more on the graphics card(s) and the chassis type than the CPU.
For example if the Lenovo is a P15 and you’re comparing it to a Dell Latitude, then yeah duh.
The Lenovo Laptops I have are Legion 7 2021 (5800H / 3070), X1 Extreme Gen 1 (8th gen / 1050ti) vs Dell XPS 15 (12700 / 3050ti)
Both Lenovos are rock solid, not laggy or stuttery.
But I had a Dell XPS 15 (2020) model from work; I gave it back because idling in Windows made the fans spin...
I got a newer Dell XPS 15 (2022) for work (new job) and it's great, but it's stuttery and the fans spin under light loads. (The screen is amazing though, I love the screen.)
Yes the XPS series is a bit of a cooling nightmare but for example the Precision 3000/7000 series are quite a different story.
(We have around 10 assorted Dell laptops from all the different models and we don’t like the XPS much by comparison, and wouldn’t ever buy another. Aside from thermal issues, for whatever reason the XPS drivers are very beta and always crashing compared with the Precision series which is just rock solid)
I hate my P15v. Fans are always on doing the lightest of tasks and the battery won't last more than 2-3 hours. I dream of low power x86.
What I don't like about framework notebooks is that they seem to be mostly interested in speed, which is why they seemed to choose these P-series CPUs. With a notebook, I'm much more interested in energy consumption (how long can I work off grid), emissions like fan noise and the like. Why no U-series CPUs?
The U and P series parts are actually the same dies on 12th Gen, but binned. If you set the Power Mode to Best Power Efficiency on Windows or the equivalent power limiting in Linux, you can make a P behave pretty similarly to what a U would (with the Intel DTT settings we used).
Isn't it the H and P series that use the same die, while the U series is a much smaller die (2 instead of 6 P cores)?
You are correct, I misremembered this. The packages are pin compatible, but there are different 2+8 and 6+8 dies this generation. It was 11th Gen where the 15W and 28W were the same die.
Restricting TDP is generally an option on laptops if you really want to. Not sure if Framework provides any out-of-the-box TDP switching, but software like ThrottleStop can do it.
Also, on Linux most distros have thermald installed, and you can change the config to limit the max temperature to de facto throttle power usage. Alternatively, you can modify the RAPL limits to cap power usage directly to taste: https://community.frame.work/t/thermal-management-on-linux-w...
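As a rough sketch of that RAPL route (these are the sysfs paths the intel_rapl powercap driver typically exposes; the exact domain index and a sensible wattage vary per machine, so treat the values as placeholders):

```python
# Rough sketch: cap the CPU package power limit through the Linux powercap/RAPL
# sysfs interface (intel_rapl driver). Values are placeholders; run as root.
RAPL_DIR = "/sys/class/powercap/intel-rapl:0"    # package-0 domain on most systems
LIMIT_WATTS = 15                                 # desired sustained power cap

with open(f"{RAPL_DIR}/name") as f:
    print("capping domain:", f.read().strip())   # usually "package-0"

# constraint_0 is normally the long-term limit; the value is in microwatts.
with open(f"{RAPL_DIR}/constraint_0_power_limit_uw", "w") as f:
    f.write(str(LIMIT_WATTS * 1_000_000))
```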
Yeah. I tried locking some of my older CPUs to, say, 700MHz. Still entirely usable unless I started compiling something.
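For reference, a minimal sketch of that kind of frequency cap via the cpufreq sysfs interface (assumes the scaling driver exposes scaling_max_freq; the 700 MHz value is just the example above, expressed in kHz):

```python
# Minimal sketch: clamp every core's maximum frequency via cpufreq sysfs.
# Run as root; cpufreq values are in kHz, so 700 MHz = 700000.
import glob

MAX_KHZ = 700_000

for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_max_freq"):
    with open(path, "w") as f:
        f.write(str(MAX_KHZ))
```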
The issue is that Intel tries to race to halt, which demonstrably saves more power on average. The power consumption of your chip is probably an intrinsic property of how the package and die are set up.
I'd like to point out that Intel allows you to tune the cores for whatever profile you want. Sure, it's not "same performance, less noise", but you can get something like an 80/20 (80% performance, 20% noise) just by scaling back the core frequencies a little. I tune my Intel machine down to 8x3.2 GHz and can run without fans (in a small form factor PC), but sometimes I want the full 4+ GHz, and that's loud.
This can be done application-specific, so when you launch a big game or something, it'll spin cores up to max frequency, for example.
Could you please be more specific? Are you talking about Windows or Linux? Do you mind giving some references for how to scale core frequencies application-wise? Thanks.
It's Windows only, sadly. https://www.intel.com/content/www/us/en/download/17881/intel...
There's a very simple menu system to scale up and down the core frequencies, and to select application execution profiles. It comes with internal temp and voltage monitors, and built-in stress tests to benchmark.
My main tips are to remove thermal boost, remove core scaling (the heterogeneous frequencies it assigns when only a few cores are active vs all), and cap all cores to 3-4GHz, depending on the quality of your passive cooling.
No bios adjustments needed, no restarts required, and built-in watchdog to revert if things go poorly. Really nice.
Recent, huge bug though: on some platforms it requires disabling virtualization, which limits use of WSL. That had better be fixed ASAP; it's a big deal.
Well, then you are in the minority.
I want a portable desktop, speed I can bring with me without lugging my 35lb liquid cooler setup with its 4x140mm radiator.
I want something compact and sexy but insanely fast when I “dock” it
The success of the thin-and-light segment, and the MacBook Air in particular, suggests that OP is very much in the majority.
The M1 Air is a wonder as far as I'm concerned. It's very thin and light, fanless, yet I can happily do java dev on it all day and I'm not lacking in capability or waiting around for build and test runs. It rocks.
Are you running your apps in docker?
Should I?
I have docker-desktop for the things I need it for - building the odd linux native image, or running testcontainers for postgres, same as we do when building on linux - and it works great.
But for the day to day stuff, no.
I’ve been considering an M2 Air with maxed-out RAM for development, and I’ve been wondering what kind of memory pressure people with Airs are facing.
On Intel I needed 32GB for all the containers at work, with a fairly modest CPU, but I expect my own projects to be a bit lighter.
None of the reviews are useful. All useless junk about exporting 8k videos.
I have an M1 Air where I max out the 16G RAM all the time. It works because swap works painlessly for me, but I would have been really, really happy with 24G (or better 32G). It works but I have unfortunately no breathing room in RAM.
Often, I use 11-18G, rarely I use 20-50G.
Definitely go for M2 Air if you like the form factor (I prefer the M1 Air’s form factor though).
I've got a maxed out ram M2 that I use for development.
I run Linux in Parallels with 16gb allocated and it runs great at around 50% memory pressure. MacOS compresses the memory that Linux doesn't use and it runs amazing.
We have slightly different use cases as I prefer developing in Linux as opposed to MacOS, but I'd imagine you would be fine depending on how actually memory intensive your containers are?
The thermal throttling the reviews complain about isn't a problem for me, though I prioritize power usage over raw power since I often run it off of solar.
So far I haven't noticed any issues with the 16GB in my Air, but then I haven't really been looking for them either. I run a couple of containers in Docker Desktop, my IDE (IntelliJ) and its builds/test-runs etc, a browser with a dozen or so tabs, but little else of any particular 'weight'.
I have heard anecdotal reports of those on 8GB machines facing the spinning beachball from time to time. It's never happened to me.
The only person doing that kind of content is Alex Ziskind. He compares various dev tools, build times, and ML tools across Apple Silicon and Intel, on Macs and PCs.
https://www.youtube.com/c/AlexanderZiskind
He’s the most useful for sure. But I couldn’t find anything about using virtualisation.
I think a lot of the MacBook Air's popularity comes from it being the cheapest MacBook. Some people want an Apple and don't have $2-3k to spend on a laptop.
The term "ultrabook" heavily disagrees with that.
Thin-and-light would mean a smaller battery and harder cooling, though? It seems consumers are just uninformed and buying the sleekest option instead.
I've got a sleek option because lugging around a brick is a miserable experience. There's a qualitative difference between the XPS 13 that I can whip out on the train, and a desktop replacement laptop that is heavy and big enough to be a bother and barely fits in my backpack.
Power isn't really an issue - I'm never far from an outlet, and I could carry an external battery in a backpack pocket if it really bothered me.
Then make that a segment and stop enforcing your unrealistic expectations on the rest of us finally.
Oh wait, it already is one, but you don't. I routinely engage in comment sections where a highly juiced up desktop CPU is derided for its “heat output” and other such nonsense.
For "Linux kernel compilation speed" they have compared a 16 GB model with 64 GB model for achieving 102.60% improvement! It's so funny =)
Would increasing the RAM really make that big a difference in terms of kernel compilation time? It feels to me that 16GB should be plenty for any sort of caching that will speed things up.
I don't know about Linux compilation, but we build software written in C++ on macOS/Windows (clang/MSVC). Because of compilation units, address translation, linking, etc., memory AND storage play an important part, while the CPU might be "idle" during some phases. For big projects especially, memory and storage speed matter for proper benchmarking.
Yeah, if you think about it at a lower level: many optimizations use memoization to save clock cycles, so you almost always need more memory to speed things up.
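As a toy illustration of that memory-for-speed trade-off (hypothetical example, not from the article):

```python
# Toy illustration of memoization: spend memory caching results of a pure function
# so repeated calls cost a dictionary lookup instead of recomputation.
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: more RAM used, fewer cycles spent
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(200))  # instant with the cache; effectively never finishes without it
```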
The only way to know would have been to compare the models :)
Yes, we intended to do an apples-to-apples comparison, but didn’t have the necessary units on hand when writing the post.
I think you can use `brd` or `ramdisk` (NOT tmpfs) to eat up 48GB of RAM on the newer machine. Just make sure you actually write data to the disks so that the memory is allocated.
That should be an easy way to do an apples-to-apples comparison without requiring any hardware to be shipped.
Saying that you intended to do something, and in the same breath that you were subsequently unable to do said thing because you couldn't find what was required, is very discouraging for a prospective customer. Also, "benchmarking" dissimilar systems comes across as scammy marketing, appealing to the easily deceived, which some might view as reprehensible :(
Reprehensible...really? There's lots of that in the world, pretty sure this isn't it. They put the memory difference right there in the stats, it's not like they are hiding it.
Lighten up a bit. :)
Why didn't you just move the 16GB of RAM from the 11th Gen model to the 12th Gen model? Both use DDR4 after all, don't they?
Two different people on the team running the benchmark at two different locations (the challenges of being remote-first). To be clear, had we prioritized it, we could have built two equivalent systems for this benchmark at one site. However, we are mostly focused on building products, so most of our blog posts end up being someone having an interesting topic to discuss and pulling together data and assets we have on hand or can create quickly for it.
could you perhaps rerun it with one process mlockall()-ing 48GB of RAM, so both end up with 16G usable RAM?
GB, not Gb
yes, thanks
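A minimal sketch of the mlockall() idea above (hypothetical Python, Linux-only; needs root/CAP_IPC_LOCK, and the 48 GB figure is just the difference between the two review units):

```python
# Hypothetical sketch: lock ~48 GiB of RAM in a throwaway process so a 64 GB machine
# is left with roughly 16 GB usable, matching the other test system.
import ctypes
import signal

MCL_CURRENT, MCL_FUTURE = 1, 2          # Linux mlockall() flags
HOG_BYTES = 48 * 1024**3

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    raise OSError(ctypes.get_errno(), "mlockall failed (insufficient privileges?)")

hog = bytearray(HOG_BYTES)              # the allocation is locked thanks to MCL_FUTURE
for i in range(0, HOG_BYTES, 4096):     # touch every page so it is actually resident
    hog[i] = 1

print("Holding 48 GiB locked; run the benchmark now (Ctrl+C to release).")
signal.pause()
```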
Surely the real story here is that Framework have released a CPU upgrade for a laptop?
Yep! We released an announcement around starting shipments of this two weeks ago. Our blog post this week was just a round-up of benchmarks both from reviewers and from us, to respond to common questions we got around performance improvements.
What's the plan to open the region in New Zealand?
Unfortunately you don’t ship to my country!
For MT performance and graphics Ryzen 5800U and 6800U are even better.
6800U is awesome from what I’ve seen — better efficiency by far, and legit integrated graphics better than Xe — but it is vanishingly rare, even months after announcement.
The scale Intel has in manufacturing mobile CPUs is still unmatched.
>6800U is awesome from what I’ve seen — better efficiency by far
The 6800U is only a 5% efficiency increase over 5800U, I agree that the GPU is _much_ better though.
https://www.notebookcheck.net/AMD-Ryzen-7-6800U-Efficiency-R...
That article actually shows 5% worse efficiency versus a 5600U in a heavy single-threaded workload. It's a bit apples to oranges, and there's a lot more to efficiency on a laptop than heavy single-core workloads.
Maybe I'm spoiled, but it feels like an article like this would have turned out way better if they had contracted Michael Larabel from Phoronix to do an in-depth benchmark run with both versions of the laptop, comparing multiple types of workloads, temperature, and performance per watt (or if they had used the full Phoronix Test Suite themselves). Although there is nothing wrong with the current one, it feels a bit... shallow on details.
Something a little like this:
https://www.phoronix.com/review/apple-m2-linux
Just yesterday, I ran a few CPU benchmarks to compare my new System76 Lemur Pro (12th Gen i7-1255U) against my older ThinkPad X1G7 (10th Gen i7-10710U). Both are ultra low-power processors, so it's a fair comparison.
Here are the results: https://mstdn.io/@codewiz/108785771608278574
Maybe it's my imagination, but it seems like additional cores aren't helping performance as much? It looks like every core after the sixth adds only about 1/2 to 1/3 of the performance gain?
I don’t like that we call it 12th Gen Intel. It would be more correct to just call it the 12000 series Intel.
Intel didn’t change CPU generations from 6000-10000 series, they’re all the same generation. Just because the years advance and they refresh their products with increased product numbers doesn’t mean that they’re a new generation of products.
Aside from being pedantic, why is this relevant? I'm sure most people that care enough know that it isn't Intel's 12th generation.
It breaks down where generations and product numbers mix:
* 12xxx and 12xx are Alder Lake
* 11xxx is Rocket Lake (based on Ice Lake), while 11xx is Tiger Lake
* 10xxx is based on Skylake, while 10xx is Ice Lake
That’s where it’s misleading. The number is only a product number that shows the approximate year of release for these mixed product lines.
Because of this, you can’t say things like “compared to the previous generation” any more: 12xxx is two generations newer than 11xxx, and 10xxx is the same generation as 9xxx.
I don't care how fast M1 is if it can't run my 32-bit x86 Windows programs. It may as well not exist.
It can though, if you install the ARM build of Windows (in a VM), which translates instructions like Rosetta 2 does: https://docs.microsoft.com/en-us/windows/arm/overview#suppor....
What 32 bit x86 Windows programs do you still use?
Good for you?
The mainboard is really great. I’m reminded of the i9 NUC “compute element” and I’m disappointed Intel hasn’t really done anything more innovative in these kinds of form factors. Seems like they could easily standardize a swappable module like this for laptops and small computers.
https://hardwarecanucks.com/wp-content/uploads/Intel-NUC-9-E...
I so badly wish these things had TrackPoints :( the patents have expired...
The IBM keyboard TrackPoint has evolutionary paths toward being a tiny trackball: while retracted down, the trackball locks and pops up a functional joystick. The problem with the TrackPoint is that it feels super slow and super heavy on the fingertip compared with glide-touch trackpads.
>The problem with the Trackpoint is it feels superslow, superheavy on the fingertip compared
That is just wrong. You're only saying that you don't like the default sensitivity/speed. Increase it a bit and you get the complete opposite.
The Trackpoint is in the correct place for the correct finger I use to point. I live in the Linux/BSD World and don't expect good enough support for sensitivity/speed control configuration software from Lenovo at all. And when third parties publish solutions they work briefly and I lose track of them through upgrade cycles. I want a better World with better evolved Trackpoint. The Trackpad takes my fingers off the home row and the context switch in moving my eyes to the keyboard really breaks "flow" of concentration.
Since the keyboard is replaceable you could make your own and also sell to other like-minded people?
That's quite a bit more expensive than complaining on HN, or just buying a Lenovo
The HP EliteBooks (and the new Dev One Linux variant) also have trackpoints.
Just in case anyone's interested in making their own though, I've seen a few DIY trackpoint projects over the years (mostly on custom mechs, not laptop keyboards though). Here's a writeup on one: https://edryd.org/posts/tp-mk/
The problem with many of HP's laptops that have pointing sticks, including the HP Dev One, is that they lack a middle mouse button for scrolling and middle-clicking. All Lenovo TrackPoints have that middle mouse button, which lets you scroll with the TrackPoint when it's held down.
HP Dev One keyboard (no middle mouse button): https://media.graphassets.com/TdgqLuYTbeGM2oI8PWsV
ThinkPad Carbon X1 keyboard (with middle mouse button): https://www.lenovo.com/medias/lenovo-laptop-thinkpad-x1-carb...
A keyboard with a pointing stick and no middle mouse button is a waste. Hopefully, any pointing stick keyboard for the Framework Laptop will include this button.
That second link isn't working anymore. Here's another picture of the ThinkPad X1 Carbon keyboard, with three mouse buttons:
https://images.anandtech.com/doci/9264/Keyboard2.jpg
> The HP EliteBooks also have trackpoints.
Not anymore, they are gone with Gen 9 :(
> The HP EliteBooks (and the new Dev One Linux variant) also have trackpoints.
Are those supposed to be any good?
I have an EB Gen8 with one, but I find it horrendous. I'd like to know whether it's because I have no idea how to use it, or if my Linux setup is broken, or if the hardware is actually shitty.
I had a company provided 2019 dell at my last job and the trackpoint was barely comparable to that of a 2011 thinkpad. If that's the only trackpoint experience is that some people get I understand why they'd never touch one again.
No, they don't, and neither do the Dells; these do not have negative inertia because IBM patented that.
And yes, there are DIY TrackPoint keyboards, I have a TEX Shinobi.
The Ryzen 5800U and 6800U are even better than their predecessors in terms of MT performance and graphics.
If I’m working a job where I really need performance, I try my hardest to get a desktop. The last 20 years of hardware development has not replaced the workstation for even light computational workloads.
My biggest frustration with remote work so far is the corporate fixation with laptops. For some types of work, performance isn’t just a convenience, but weeks of lost productivity. If the difference between the 11th and 12th generation of Intel in your laptop really moves the needle for you, you’re using the wrong form factor to do your work.
Yes, AMD and ARM exist, but the fact that frame.work exists is a bigger win than any drawbacks of them using Intel.
In the first table, in the first row (11th Gen), the professional column is missing the core count. It currently says "(Up to 4.8 GHz, Cores)" and by https://ark.intel.com/content/www/us/en/ark/products/208664/... it should be "(Up to 4.8 GHz, 4 Cores)".
Thanks for catching that! I’ve forwarded it to the team.
The Apple M1 processor should be used as a benchmark, not other Intel processors.
AMD and high performance ARM mainboards would be nice.
I'm developing in Rust for $dayjob and I absolutely hated it until I got an M1 Mac. CPU speed (including single-core speed) absolutely made the difference. The compilation speed crossed that threshold between "a little pause" and getting distracted doing something else while waiting for a little incremental compile.
I'm also developing in Rust, with a 12th gen i7-1270P. It's an absolute beast; I'd say it's in the range of a 2x speedup vs my previous laptop (11th gen i7).
I have no idea about power consumption yet (the laptop is a couple of weeks old). On light load, the laptop easily crosses the 8h mark, but it certainly won't when grinding Rust compiles. The M1's power efficiency is in a different league.
I'm not sure why the Framework laptop is getting so much credit for being upgradable - the main board upgrade (https://www.youtube.com/watch?v=mog6T9Rd93I) seems to be an identical process as my Dell Precision 5530.
Legit question: does Dell offer a drop-in generational upgrade for your machine? I think the main appeals are:
1) It's simple to repair or upgrade. If Dell's design is also simple to work on, then that's great!
2) A culture of providing replacement and upgrade parts affordably to the end user. In addition to the I/O modules and replacement parts, so far framework has the upgradeable mainboard, an upgraded top cover with more rigidity, and an improved hinge kit.
I hope this doesn't mean that frame.work is planning to stick with Intel processors even for its next gen laptops? Many of us are eagerly waiting for the AMD models.
On a slightly different note, this is again a good example of how Intel continues to one-up AMD through its business and management practices, even when AMD has better products. Of course AMD can't imitate Intel in spending money and undercutting them (sometimes illegally), but they should certainly try to copy some of Intel's decent practices, such as supporting hardware manufacturers with reference designs. China has many Intel tablets because of this. It is easier to develop hardware with Intel processors because Intel offers more support (free and paid). That is something AMD needs to focus on and fix. They really need to work on their PR and sales tactics.
I would have been impressed a decade ago, but this is just an extremely marginal improvement, even if you can trust them, which, you know, it's Intel, so you can't.
> 11th Gen Intel® Core™ i7-1185G7 (with *16GB* of RAM) 224.49
> 12th Gen Intel® Core™ i7-1280P (with *64GB* of RAM) 110.77
> % Improvement 102.60%
Are they for real?
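For what it's worth, the 102.60% figure itself is just the quoted compile times plugged into the usual speed-up formula; the complaint about the differing RAM still stands:

```latex
% Improvement computed from the quoted wall-clock times (lower is better):
\frac{224.49 - 110.77}{110.77} \times 100\% \approx 102.6\%
```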
Although CS:GO is an FPS that can run on a potato, the GPU results are interesting. But I would love to see a Framework laptop with a Ryzen APU (I have a Lenovo with a 5700U and it is nothing short of amazing, even though it’s almost two years old by now). Provided they can do Thunderbolt, that is (I rely on that for a number of things).
So.... is the Framework laptop with a 12th Gen core noisy or not? Is it quiet most of the time? Do the fans become noisy during a compile or benchmark? Does it remain relatively cool? What are people's experiences with these laptops? My primary use would be C++ development with frequent large multi-core builds.
I've been drooling over the framework laptops for a while now, can't wait until they're available in Japan.
Very glad I ordered my Framework with the 1240P.
The performance benefits of the higher-end parts seem dubious.
Is it fair to compare 11th Gen U-series to 12th Gen P-series? Performance is higher, but this comes at the expense of much higher power draw. I think 11th Gen U-series vs 12th Gen U-series is a more interesting comparison.
Those figures are useless without power consumption.
Can't wait for raptor lake to come out with new core architecture, higher peak clocks and more E cores.
do intel have vulkan drivers yet?
XP11.11 (2018) is ancient; the current release is from late last year. But more importantly, XP12 is coming very soon and is going to be Vulkan/Metal only, and AFAIK even the Xe Intel GPUs are lacking drivers.
Cool, now compare it to an M2
For some people comparing to Apple is useless - Macbooks could be a thousand times faster and last for a year on battery, they still would be 100% useless to me because they don't run the software I need on a daily basis.
What software do you find irreplaceable?
The 12th gen i7 seems to fare pretty well against the M2 and gets higher scores.
https://www.cpu-monkey.com/en/cpu-apple_m2
Cool, now show me upgrading your M1 to an M2.
A YouTuber, Luke Miani, gave it a try recently. Of course it doesn't work, but it was interesting to see that mechanically everything fits and the chassis is of course practically identical, but it doesn't work because FU that's why: https://www.youtube.com/watch?v=q_jckhYfGBw
No surprises there. Apple also firmware locks the swappable storage modules on the recent Mac Studio: https://www.engadget.com/mac-studio-ssd-software-lock-211410... No repairs or upgrades for their customers.
I think 18 days ago you might have said something to the the tune of you’d never do business with Apple again. You changed your mind fast. https://news.ycombinator.com/item?id=32188366
This is ridiculous. Why do people behave this way?
I think you misread the question?
They asked how you upgrade an M1 to an M2. In this scenario there are no x86 chips.
Funny fanboy
Does it matter? I have a Threadripper 3970x and an M1 Macbook Air. For anything that can't use more than 4 cores, the M1 crushes it. The Threadripper also draws over 100W at idle.
Whatever AMD and Intel have right now are completely irrelevant, and I say that as one of the biggest haters of OS X there is.
I'm not trading in my workstation for a Mac Studio or whatever, though, because I'm guessing AMD's 5nm chips will probably perform even better than the M1/M2. Plus games. Games are nice.
For a site about "hacker" news, this comment is wild to me.
It seems we've truly reached the apex of disposable consumer culture, where even our most powerful technology is something we just throw in the trash and re-purchase when something fails or newer technology becomes available.
I can only hope that companies like Framework can reverse some of this trend. Because as it stands, our already unsustainable culture has gotten to the point where it seems folks no longer remember that anything else is possible, or why we might want to think differently.
I guess "stuff" isn't really the problem with the current society. If you write software for a living, the company that has a 50% faster hack/test/ship cycle is the one that wins the market. It is a shame that you have to throw away a power supply, keyboard, and screen every few years to get that faster cycle, and Framework helps there.
But at the same time, the chips that Framework uses waste valuable electricity and don't perform particularly well. That's just where Intel is at right now. That is likely to change, and Framework is interesting from the standpoint of "when Intel gets their act together, look at our balance of performance and sustainability". That's not where they're at right now, though. Not Frameworks' fault; nobody is selling mobile arm64 chips that have remotely competitive performance. It's the late 1970s again; hopefully the clones show up to kill "IBM" again.
Realistically, people are burning the rainforest to sell pictures of smoking monkeys and burn dead dinosaurs to drive to get coffee every morning. Recycling your laptop's screen every 5 years is not really making a dent into our current planet-killing habits. It feels good to help in a small way, but what feels good is not necessarily meaningful.
> But at the same time, the chips that Framework uses waste valuable electricity
I pretty well guarantee you the small amount of energy saved by operating an M1 is more than made up for by the cost of manufacturing and shipping a new laptop every few years as part of the Apple replacement cycle.
> Recycling your laptop's screen every 5 years
You might want to do a little research about what's involved in upgrading a framework mainboard. Far far more than just the screen is "recycled" when upgrading a machine like that.
And that's ignoring the ability to reuse or resell components outside of the Framework chassis. I fully expect to turn my old mainboard into a server when the time comes to eventually upgrade it.
> It feels good to help in a small way, but what feels good is not necessarily meaningful.
This is a classic thought-killing argument intended to invalidate any kind of individual action or intervention. "Well, what little thing I do has no meaningful impact in the big picture, so why bother."
The answer to that is simple: Government needs to pass laws mandating right to repair. It's long past time that companies like Apple be forced to do the right thing, because it's clear they're not going to do it themselves, particularly when consumers have apparently convinced themselves that their individual choices don't matter.
> Does it matter?
You are commenting on a thread discussing just that, so yeah it kinda matters.
From the Cinebench scores in the linked post vs some M2 benchmarks, the M2 (score of 1695) is comparable to the bottom-tier 12th gen Framework in the single-core benchmark (Framework scores: 1691, 1723, 1839). On the multi-core benchmark, the M2 is significantly worse: 8714 vs [9631, 10027, 11076].
The main advantage of the M2s is probably battery life.
Cinebench is a terrible benchmark and heavily favors CPUs that have many slow CPU cores.
Also, Cinebench is not optimized for ARM instructions.
If you are buying a computer for Cinema4D or gaming, Intel and AMD will be better. If you're buying a computer for just about everything else, then Apple is better.
If better benchmarks exist and you happen to have the scores handy, feel free to post the comparisons but I feel like my work here is done.
Performance wise, quite favorable. Battery and power consumption, not so much.
Performance wise, the M2 is still faster.
It isn't https://nanoreview.net/en/cpu-compare/intel-core-i7-12700h-v...
M2 is faster. Faster ST, GPU, video editing, AI and at significantly lower power.
It loses on MT but that's because we don't have the M2 Pro/Max yet. M2 is just the entry-level SoC being compared to Intel's very best.
Also, try running the benchmarks unplugged. Intel's performance will instantly drop by 25 - 50%. M2 will stay exactly the same.
I don't disagree with you that M2 is more efficient, but let's not move goal posts after saying "Performance wise, the M2 is still faster.".
I didn't move the goal post.
M2 is faster in ST, GPU, AI, and video editing. It's slower on MT. To me, the M2 is overall faster. Especially when unplugged which will cause Alder Lake to significantly drop in performance.
What is your definition of faster?
> Especially when unplugged which will cause Alder Lake to significantly drop in performance.
At least in a laptop, it's not just when you unplug the laptop that performance drops. You've also got to have enough volume in the laptop to shoehorn in a major cooling system if you're going to keep the P series processors from throttling.
For instance, Lenovo's thin and light ThinkPad X1 Yoga has two fans but still throttles:
>Unfortunately, the laptop got uncomfortably hot in its Best performance mode during testing, even with light workloads.
https://arstechnica.com/gadgets/2022/07/review-lenovos-think...
Or the Dell XPS 13 Plus:
>the XPS 13 Plus’ fan was really struggling here because, boy oh boy, did this thing get hot.
After a few hours of regular use (which, in my case, is a dozen or so Chrome tabs with Slack running over top), this laptop was boiling. I was getting uncomfortable keeping my hands on the palm rests and typing on the keyboard. Putting it on my lap was off the table.
https://www.theverge.com/23284276/dell-xps-13-plus-intel-202...
Intel's P series chips in a device without a massive heat sink and fans can't hit those high clocks that their performance benchmarks rely on.
Beats M1?
Title should reflect this is about some OEM and not Intel's offering per se.