Daring Fireball
Stephen Hackett:

Our 14-day national nightmare is over. As of Developer Beta 2, the Finder icon in macOS Tahoe has been updated to reflect 30 years of tradition:

I’m going to strongly disagree here. The Tahoe beta 2 Finder icon is slightly better, but seeing it this way makes it obvious that the problem with the Tahoe Finder icon isn’t whether it’s dark/light or light/dark from left to right. It’s that with this Tahoe design it’s not 50/50. It’s the appliqué — the right side (the face in profile) looks like something stuck on top of a blue face tile. That’s not the Finder logo.

The Finder logo is the Mac logo. The Macintosh is the platform that held Apple together when, by all rights, the company should have fallen apart. It’s a great logo, period, and the second-most-important logo Apple owns, after the Apple logo itself. Fucking around with it like this, making the right-side in-profile face a stick-on layer rather than a full half of the mark, is akin to Coca-Cola fucking around with the typeface for the word “Cola” in its logo. Like, what are you doing? Why are you screwing with a perfect mark?

There are an infinite number of ways Apple could do this while remaining true to the original logo. Here’s a take from Michael Flarup that glasses it up but keeps it true to itself:

Especially in the field of computers, no company can be a slave to tradition and history. But you ought to respect it. ★
From iyO’s home page:

The iyo one is a revolutionary new kind of computer without a screen. it can run apps just like your smartphone. The key difference is you talk to it through a natural language interface.

Like I wrote yesterday, I’d never heard of iyO before. But from the description above, you can obviously see how they’d feel like the new OpenAI/LoveFrom io name stomps on their trademark. (One minor curiosity: iyO itself seems unsure how to capitalize the letters in its own name: a single cropped screenshot of their own home page shows “iyO”, “IyO”, and “iyo”.)

iyO “graduated” from X (which is entirely separate from Elon Musk’s X), Google’s “moonshot factory”, in 2021. The description there:

iyO is on a mission to bring natural language computing to billions of people. The team has created the world’s first audio computer that you can talk to like a friend. While at X, the team developed their initial prototypes. Now an independent company, iyO is creating screenless, natural language computing with mixed audio reality.

Despite having “graduated” four years ago, iyO is still only taking pre-orders for the iyO One, their $100 ungainly-looking ear computer. ($100 seems too good to be true for what they’re promising.)

Finally, last April, iyO founder and CEO Jason Rugolo demonstrated prototypes in a 13-minute TED talk. Seems cool, but some of the features already exist with AirPods, and all of the features could exist with AirPods. I don’t see the future of dedicated audio computers — especially ones as ugly as these — when the entire feature set can be duplicated with smart earbuds paired to your phone. ★
Juli Clover, MacRumors: Apple today provided developers with the second betas of iOS 26 and iPadOS 26 for testing purposes, with the updates coming two weeks after Apple seeded the first betas following the WWDC keynote. MacOS, tvOS, WatchOS, and VisionOS too. All sorts of good stuff in these second betas — an option to have a real big boy menu bar in MacOS Tahoe, a much better-looking Control Center, and more. ★
Brooks Barnes, writing for The New York Times:

Pixar knew that Elio, an original space adventure, would most likely struggle in its first weekend at the box office. Animated movies based on original stories have become harder sells in theaters, even for the once-unstoppable Pixar. At a time when streaming services have proliferated and the broader economy is unsettled, families want assurance that spending the money for tickets will be worth it. But the turnout for Elio was worse — much worse — than even Pixar had expected. The film, which cost at least $250 million to make and market, collected an estimated $21 million from Thursday evening through Sunday at theaters in the United States and Canada, according to Comscore, which compiles box office data. It was Pixar’s worst opening-weekend result ever. The previous bottom was Elemental, which arrived to $30 million in 2023.

Harry McCracken:

I wasn’t aware this movie had come out, and still can’t tell you what it’s about. And I’ve been a Pixar fan since before they made movies. That seems like a problem.

I hadn’t heard of this movie until today either. Disney and Pixar have a marketing problem. One part of the problem is that Pixar has made some decidedly meh movies in recent years. “Pixar” used to stand for nothing less than excellence. Now it stands for “somewhere in the range of OK to great”. But another is that even when they make a good one — which Elio might be — they suck at getting the word out. ★
Hayden Field, reporting for The Verge:

OpenAI has scrubbed mentions of io, the hardware startup co-founded by famous Apple designer Jony Ive, from its website and social media channels. The sudden change closely follows the recent announcement of OpenAI’s nearly $6.5 billion acquisition and plans to create dedicated AI hardware. OpenAI tells The Verge the deal is still happening, but it scrubbed mentions due to a trademark lawsuit from Iyo, the hearing device startup spun out of Google’s moonshot factory.

If you visit the “Sam and Jony” page on OpenAI’s website — where the short film teasing io used to be — it now simply says:

This page is temporarily down due to a court order following a trademark complaint from iyO about our use of the name “io.” We don’t agree with the complaint and are reviewing our options.

Perhaps I’m not paying close enough attention, but this is the first I’ve heard of iyO. The two names certainly sound alike but they don’t look alike. Are homophones trademarkable? ★
Joe Rossignol at MacRumors:

Apple has marked its day-old The Parent Presentation video on YouTube as private, meaning that it is no longer available to watch. Apple has also moved The Parent Presentation to the bottom of its College Students page, effectively burying it. When we reported on the marketing campaign yesterday, the presentation was prominently featured at the top of the page. It is unclear why Apple is suddenly hiding the ad, or if it will return. Apple did not immediately respond to our request for comment. On social media, some people said that the ad was cringe or gross, so perhaps Apple pulled the video due to overly negative reception. To be clear, this is merely speculation, and there were others who found humor in the video.

The 7.5-minute video, which at the moment is still available to watch via re-uploads on YouTube and X, stars Martin Herlihy from SNL’s “Please Don’t Destroy” triumvirate. I wouldn’t describe it as “cringe”, but I also wouldn’t describe it as “funny”. (If Herlihy wrote this, it would suggest that his cohorts Ben Marshall and John Higgins are the funny ones in the trio.) It’s also not the least bit offensive, so it really is unclear why Apple pulled it. If it’s because it’s not funny, how did it not only get approved and produced, but posted for 24 hours? Is Apple’s new marketing strategy to just publish new ads and then wait to see how the world reacts before deciding if they’re any good or not?

One obvious problem with “The Parent Presentation” video is that the gist is that everyone involved is stupid: high school kids (the ostensible target audience?) are too stupid to know how to ask their parents for a MacBook for college, parents are too stupid to know they should buy their kids a good laptop, and even Herlihy’s lecturer is a doofus who himself doesn’t know how to deliver a presentation. I don’t know how this got past the concept stage. To top things off, the downloadable slide presentation — which Apple still has available in Keynote, PowerPoint, and Google Slides formats — is entirely typeset in Arial. I would take my son’s MacBook away from him if he came to me with a presentation set in Arial. ★
Joe Rossignol, writing for MacRumors: A bit of sad news for old iPods: Macs might be losing FireWire support. The first macOS Tahoe developer beta does not support the legacy FireWire 400 and FireWire 800 data-transfer standards, according to @NekoMichi on X, and a Reddit post. As a result, the first few iPod models and old external storage drives that rely on FireWire cannot be synced with or mounted on a Mac running the macOS Tahoe beta. Unlike on macOS Sequoia and earlier versions, the first macOS Tahoe beta does not include a FireWire section in the System Settings app. All good things must come to an end, and FireWire was a very good thing indeed. High-performance, reliable, easy to use. Apple, back in 2001, “Apple FireWire Wins 2001 Primetime Emmy Engineering Award”: Apple’s FireWire technology will be honored by the Academy of Television Arts & Sciences in an awards presentation held tonight at the academy’s Goldenson Theatre in Hollywood. Apple will receive a 2001 Primetime Emmy Engineering Award for FireWire’s material impact on the television industry. Apple invented FireWire in the mid-90s and shepherded it to become the established cross-platform industry standard IEEE 1394. FireWire is a high-speed serial input/output technology for connecting digital devices such as digital camcorders and cameras to desktop and portable computers. Widely adopted by digital peripheral companies such as Sony, Canon, JVC and Kodak, FireWire has become the established industry standard for both consumers and professionals. ★
Tom Nichols, writing for The Atlantic (gift link):

President Donald Trump has done what he swore he would not do: involve the United States in a war in the Middle East. His supporters will tie themselves in knots (as Vice President J. D. Vance did last week) trying to jam the square peg of Trump’s promises into the round hole of his actions. And many of them may avoid calling this “war” at all, even though that’s what Trump himself called it tonight. They will want to see it as a quick win against an obstinate regime that will eventually declare bygones and come to the table. But whether bombing Iran was a good idea or a bad idea — and it could turn out to be either, or both — it is war by any definition of the term, and something Trump had vowed he would avoid. [...]

Only one outcome is certain: Hypocrisy in the region and around the world will reach galactic levels as nations wring their hands and silently pray that the B-2s carrying the bunker-buster bombs did their job.

See also: Timothy Snyder, on Bluesky:

Five things to remember about war:

1. Many things reported with confidence in the first hours and days will turn out not to be true.
2. Whatever they say, the people who start wars are often thinking chiefly about domestic politics.
3. The rationale given for a war will change over time, such that actual success or failure in achieving a named objective is less relevant than one might think.
4. Wars are unpredictable.
5. Wars are easy to start and hard to stop. ★
Julian Chokkattu, writing for Wired: You can’t mount a cinema camera on a Formula One race car. These nimble vehicles are built to precise specs, and capturing racing footage from the driver’s point of view isn’t as simple as slapping a GoPro on and calling it a day. That’s the challenge Apple faced after Joseph Kosinski and Claudio Miranda, the director and cinematographer of the upcoming F1 Apple Original, wanted to use real POV racing footage in the film. If you’ve watched a Formula One race lately, you’ve probably seen clips that show an angle from just behind the cockpit, with the top or side of the driver’s helmet in the frame. Captured by onboard cameras embedded in the car, the resulting footage is designed for broadcast, at a lower resolution using specific color spaces and codecs. Converting it to match the look of the rest of the F1 film would be too challenging to be feasible. Instead, Apple’s engineering team replaced the broadcast module with a camera composed of iPhone parts. I think back to Phil Schiller, on stage at my WWDC show in 2015, saying that Apple viewed itself then not just as one of the leading camera companies in the world, but the leading camera company in the world. ★
Cynthia Littleton, in a long profile for Variety: When pressed about what Apple’s investments in movies and TV shows have meant for the company as a whole, Cook explains that Apple is at heart “a toolmaker,” delivering computers and other devices that enable creativity in users. (This vision for the company, and the “toolmaker” term specifically, was first articulated by Jobs in the early 1980s.) “We’re a toolmaker,” Cook says again. “We make tools for creative people to empower them to do things they couldn’t do before. So we were doing lots of business with Hollywood well before we were in the TV business. “We studied it for years before we decided to do [Apple TV+]. I know there’s a lot of different views out there about why we’re into it. We’re into it to tell great stories, and we want it to be a great business as well. That’s why we’re into it, just plain and simple.” [...] Media analysts and observers have wondered how the content side of Apple threads together with the hardware sales that fuel the core business. As Cook sees it, that’s not the point, although such connections are emerging organically in the course of doing business, as evidenced by “F1” and the camera tech. “I don’t have it in my mind that I’m going to sell more iPhones because of it,” Cook says. “I don’t think about that at all. I think about it as a business. And just like we leverage the best of Apple across iPhones and across our services, we try to leverage the best of Apple TV+.” Apple TV+ has been killing it with original shows. Maybe with F1 they can start bringing that magic to movies. ★
“This one a long time have I watched. All his life has he looked away — to the future, to the horizon. Never his mind on where he was. What he was doing. Adventure. Heh! Excitement. Heh! A Jedi craves not these things. You are reckless!” —Yoda, The Empire Strikes Back

My biggest takeaway from WWDC 2025 is that Apple seemingly took some lessons to heart from its unfulfilled promises of a year ago. This year’s WWDC wasn’t merely focused on what Apple is confident it can ship in the next 12 months, but on what they can ship this fall. I might be overlooking a minor exception or two, but every major feature announced in the WWDC 2025 keynote was demonstrable in product briefings and is currently available in the developer beta seeds. I was also told, explicitly, by Apple executives, that Apple plans to ship everything shown last week in the fall.

That’s as it should be, and a strong return to form for the company. It takes confidence to promise only what you know you can ship, and it takes execution to ship what you’ve promised. If there’s more coming in the early months of 2026, announce those features when they’re ready. It’s proven very effective for Apple to spread the debut of new features across the entire calendar year, with many major features not appearing until the .3, .4, or even .5 OS releases. I think it will prove just as effective marketing-wise to spread the announcement of more features throughout the year as well.

Year-Based Version Numbers

There’s no question that it’s a little weird for every one of Apple’s platforms to have jumped to version 26. I mean, VisionOS skipped from version 2 all the way to 26. Presumably, when Apple next unveils a new OS (HomeOS?), it’s going to start at version 26, 27, or 28. But I’m already getting used to this, and I think the underlying logic laid out by Craig Federighi at the outset of the keynote is true: with Apple now up to six developer platforms (Mac, iPhone, iPad, Vision, TV, Watch), it had gotten hard to keep track of which version numbers corresponded to the same year. That matters not just for the convenience of knowing, in years to come, when specific versions of each OS were released, but it also matters because none of these platforms exist in isolation. They’re all parts of a cohesive whole, a cross-device “Apple OS 26” experience, as it were.

One thing I haven’t seen commented on, though, is that switching to year-based version numbers establishes as de facto policy something that has now been true for quite a few years, but which Apple has never officially acknowledged: that each of these platforms will get a major version release annually. 20 years ago the update schedule for Mac OS X was rather erratic:

Mac OS X 10.7 Lion: July 2011
Mac OS X 10.6 Snow Leopard: August 2009
Mac OS X 10.5 Leopard: October 2007
Mac OS X 10.4 Tiger: April 2005
Mac OS X 10.3 Panther: October 2003

OS X 10.8 Mountain Lion (which began the odd four-year run where the Mac’s OS name didn’t contain “Mac”) arrived in July 2012, and thereafter a new major version has shipped in September, October, or November (MacOS 11 Big Sur, in 2020) every single year. This rigorous annual schedule is a hallmark of the Tim Cook era at Apple, and clearly reflects his personality (as the erratic/idiosyncratic schedule of the mid-2000s reflected Steve Jobs’s).

iPadOS Windowing

The pedant in me is mildly perturbed that the new windowing system unveiled for iPadOS 26 is largely being discussed under the term “multitasking”. It’s windowing.
One way to understand the difference is that the original Mac OS (a.k.a. System 1) had windowing — windowing that looked and worked a lot like this — but no multitasking. The very early Mac could run just one app at a time, but the running app could open multiple windows. But, whatever. It’s all good.

One thing I find interesting is that while split screen and Slide Over have been eliminated in the new system (praise be), Stage Manager is still a feature. Just plain windowing is as it should be: ad hoc. You make windows and move them around and resize them however you want. Stage Manager is fussier — it’s a more complex system for users who wish to organize their windows into something akin to projects or related tasks. So, effectively, Apple, three years ago, jumped straight to a more complex, more fiddly option — Stage Manager — and only now has added the simpler, more obvious, not fiddly at all option (windowing). It’s been a weird journey, but I think iPadOS has finally arrived at a place where showing more than one app or document at a time on-screen is what it should have been all along: easy and obvious.

Liquid Glass

Alan Dye, introducing Liquid Glass, around the 8m:20s mark in the keynote:

Software is the heart and soul of our products. It brings them to life, shapes their personality, and defines their purpose. At Apple, we’ve always believed it’s the deep integration of hardware and software that makes interacting with technology intuitive, beautiful, and a joy to use. iOS 7 introduced a simplified design built on distinct layers, smooth animations, and new colors. It redefined our design language for years to come. Now, with the powerful advances in our hardware, silicon, and graphics technologies, we have the opportunity to lay the foundation for the next chapter of our software.

Today we’re excited to announce our broadest design update ever. Our goal is a beautiful new design that brings joy and delight to every user experience, one that’s more personal, and puts greater focus on your content, all while still feeling instantly familiar. And for the first time, we’re introducing a universal design across our platforms. This unified design language creates a more harmonious experience as you move between products, while maintaining the qualities that make each unique.

Inspired by the physicality and richness of VisionOS, we challenged ourselves to make something purely digital feel natural and alive. From how it looks to how it feels as it dynamically responds to touch. To achieve this, we began by rethinking the fundamental elements that make up our software, and it starts with an entirely new expressive material we call Liquid Glass. With the optical qualities of glass and a fluidity that only Apple can achieve, it transforms depending on your content or even your context, and brings more clarity to navigation and controls. It beautifully refracts light and dynamically reacts to your movement with specular highlights. This material brings a new level of vitality to every aspect of your experience. From the smallest elements you interact with to larger ones, it responds in real time to your content and your input. Creating a more lively experience that we think you’ll find truly delightful.

Compare and contrast to Steve Jobs introducing Aqua at Macworld San Francisco in January 2000:

So this is the architecture, except there’s one more thing. The one more thing is, we have been secretly for the last 18 months designing a completely new user interface.
And that new user interface builds on Apple’s legacy and carries it into the next century. And we call that new user interface Aqua, because it’s liquid. One of the design goals was when you saw it, you wanted to lick it. [...]

When you design a new user interface, you have to start off humbly. You have to start off saying, what are the simplest elements in it? What does a button look like? And you spend months working on a button. That’s a button in Aqua. This is what radio buttons look like. Simple things. This is what checkboxes look like. This is what popup lists look like. Again, you’re starting to get the feel of this, a little different. This is what sliders can look like. Now, let me show you windows. This is what the top of windows look like. These three buttons look like a traffic signal, don’t they? Red means close the window. Yellow means minimize the window. And green means maximize the window. Pretty simple. And tremendous fit and finish in this operating system. When you roll over these things, you get those. You see them? And when you are no longer the key window, they go transparent. So a lot of fit and finish in this.

In addition to the fit and finish, we paid a lot of attention to dynamics. Not only how do things look, but how do they move, how do they behave. And our goal in this user interface was twofold. One, we wanted to give a much more powerful user interface to our pro customers. But two, at the very same time, we wanted to make this the dream user interface for somebody who’s never even touched a computer before. And that’s really hard to do. It’s like when we do films at Pixar. It’s really easy, it’s a lot easier, to make a film that appeals to five-year-olds and under. But it’s very difficult to make one film that five-year-olds love and that their parents also love. And that was the goal of this user interface. To make it span the range so that people turning on their iMac for the first time were enchanted with it, and it was super easy to use, and yet, our pro customers also felt, My God, this takes me to places I thought I could never get to. And that’s what we tried to do.

Re-watching Jobs’s introduction of Aqua for the umpteenth time, I still find it enthralling. I found Alan Dye’s introduction of Liquid Glass to be soporific, if not downright horseshitty. But the work itself, Liquid Glass as it launched last week, is very reminiscent of Aqua a quarter century (!) ago. It’s exciting, it’s fresh, it fundamentally looks and feels very cool in general — but in practice quite a few aspects of it feel a bit over-the-top and/or half-baked. Just like with Aqua, it will surely get dialed in. Legibility problems will be addressed.

Liquid Glass has been in the works for a long time, but what we see today has come together very quickly. For those using internal builds inside Apple, what Apple unveiled last week is effectively the third version of Liquid Glass. Just a few weeks prior to WWDC, a few sources told me that internal builds were such a complete mess that they wondered if it would come together in time for WWDC developer betas. But come together it has. I expect a lot of visual changes over the course of the summer, and significant evolutionary tweaks in the next few years. Across Apple’s own apps, there are a lot of places where things haven’t yet been glassed up at all. That’s how these things work.

As for why, it should be enough to justify Liquid Glass simply for the sake of looking cool.
I opened this piece with a quote from a great fictional philosopher. I’ll close it with a quote from a great real one: “The test of a work of art is, in the end, our affection for it, not our ability to explain why it is good.” —Stanley Kubrick
Peter Kafka: So in March, when Gruber announced that Something is Rotten in the State of Cupertino — focusing on Apple’s botched plans to imbue its ailing Siri service with state-of-the-art AI — lots of people paid attention. Including, apparently, folks at the very top of the Apple org chart. I talked to Gruber about the fallout from that post. Which is pretty interesting! But there’s a lot more going on in this conversation. It’s partly about the friction Apple has been generating lately — not just about its AI efforts, but the way it runs its App Store, and the way it interacts with developers — and why all of that does and doesn’t matter. And it’s also about the delightfully retro practice of running an ad-supported blog in 2025. That works very well for Gruber, but it seems like the new Grubers of the world are doing their work on YouTube or Substack. He’s got some thoughts about that, too. Good interview, I thought — I always enjoy talking to Kafka. No permalink for the episode on the web, so my main link for this post is to Overcast. Here’s a link to Apple Podcasts too. ★
Nicolas Lellouche, writing for the French-language site Numerama (block quote below is from Safari’s English translation) (via Joe Rossignol at MacRumors):

What is the problem with Europe? Apple does not explain it very clearly, but suggests that the European Union’s requests for opening create uncertainties. It is likely that the brand suspects Europe of forcing it to open macOS to devices other than the iPhone if this function were to happen. A mandatory iPhone Mirroring on Windows or an Android Mirroring on Mac may not be in his plans. The other probability is the question of gatekeepers, raised in 2024. Apple would fear that macOS will be on the list of monitored platforms if it can emulate iOS, one of the gatekeepers monitored by Europe.

The problem isn’t about MacOS getting flagged as another “gatekeeping” platform under the DMA. Whether or not Apple enables iPhone Mirroring on MacOS in the EU would have no bearing on whether the Mac is deemed a gatekeeper. The DMA defines a “gatekeeper” platform as “a core platform service that in the last financial year has at least 45 million monthly active end users established or located in the Union and at least 10,000 yearly active business users established in the Union”. I’m not sure how many Mac users there are in the EU, but I’m pretty sure the number is well under 45 million. (Estimates seem to peg the worldwide number of Mac users at just over 100 million.)

The problem is simply that the iPhone is a gatekeeping platform, and iPhone Mirroring obviously involves the iPhone. The EU’s recent demands regarding “interoperability requirements” flag just about every single feature that involves an iPhone communicating with another Apple device. AirDrop, AirPlay, AirPods pairing, Apple Watch connectivity — all of that has been deemed illegal gatekeeping. Clearly, iPhone Mirroring would fall under the same interpretation, thus, iPhone Mirroring isn’t going to be available in the EU. If the DMA had been in place 15 years ago, the EU wouldn’t have AirDrop or AirPlay and perhaps wouldn’t have Apple Watch or AirPods, either.

If Apple made iPhone Mirroring available in the EU now, my guess is the European Commission would add it to the interoperability requirements list, and demand that Apple support mirroring your iPhone to all other platforms, such as Windows and Android. They might also demand that Apple add support to iOS for third-party screen mirroring protocols.

Several weeks ago, Apple indicated that other new products may be blocked in Europe in the future. What about what’s new in iOS 26? Apple is not commenting at the moment, since it must verify the compatibility of its new functions with the European Union. Some new features, such as the Phone application on Mac to make calls with your iPhone, seem difficult to be compatible with the vision of Europe.

The new Phone app on MacOS is almost certainly not coming to the EU, unless the European Commission changes its stance on these interoperability requirements. ★
John Voorhees, writing at MacStories, regarding a new command-line transcription tool cleverly named Yap, written by his son Finn last week during WWDC:

On the way, Finn filled me in on a new class in Apple’s Speech framework called SpeechAnalyzer and its SpeechTranscriber module. Both the class and module are part of Apple’s OS betas that were released to developers last week at WWDC. My ears perked up immediately when he told me that he’d tested SpeechAnalyzer and SpeechTranscriber and was impressed with how fast and accurate they were. [...]

What stood out above all else was Yap’s speed. By harnessing SpeechAnalyzer and SpeechTranscriber on-device, the command line tool tore through the 7GB video file a full 55% faster than MacWhisper’s Large V3 Turbo model, with no noticeable difference in transcription quality. At first blush, the difference between 0:45 and 1:41 may seem insignificant, and it arguably is, but those are the results for just one 34-minute video. Extrapolate that to running Yap against the hours of Apple Developer videos released on YouTube with the help of yt-dlp, and suddenly, you’re talking about a significant amount of time. Like all automation, picking up a 55% speed gain one video or audio clip at a time, multiple times each week, adds up quickly.

Apple’s Foundation Models sure seem to be the sleeper hit from WWDC this year. This bodes very well for all sorts of use cases where transcription would be helpful, like third-party podcast players.
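For a sense of what this looks like in code, here's a minimal sketch of file-based transcription built around the SpeechAnalyzer and SpeechTranscriber classes Voorhees describes. To be clear about what's assumed: the preset name, the analyzeSequence(from:) call, and the shape of the results stream are my reading of Apple's WWDC 2025 materials, and the exact names and signatures may differ in the shipping betas. Treat it as a sketch, not gospel.

```swift
import AVFoundation
import Speech

// Sketch of on-device transcription with the new Speech framework classes.
// API names and signatures here are assumptions based on Apple's WWDC25
// materials; they may differ in the shipping betas.
func transcribe(fileAt url: URL) async throws -> String {
    // A transcriber is one analysis module; a SpeechAnalyzer can host
    // several modules running over the same audio.
    let transcriber = SpeechTranscriber(
        locale: Locale(identifier: "en_US"),
        preset: .offlineTranscription
    )
    let analyzer = SpeechAnalyzer(modules: [transcriber])

    // Feed the whole file through the analyzer. (The call returns the
    // last analyzed sample position; we don't need it here.)
    let audioFile = try AVAudioFile(forReading: url)
    _ = try await analyzer.analyzeSequence(from: audioFile)

    // Accumulate text as results stream in. Each result carries an
    // AttributedString; we flatten it to plain text.
    var transcript = ""
    for try await result in transcriber.results {
        transcript += String(result.text.characters)
    }
    return transcript
}
```

That whole-file path is presumably what makes a tool like Yap so fast: the model runs locally, so throughput is bounded by the Mac's silicon rather than by real-time audio playback or a network round trip. ★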
Bungie: Through every comment and real-time conversation on social media and Discord, your voice has been strong and clear. We’ve taken this to heart, and we know we need more time to craft Marathon into the game that truly reflects your passion. After much discussion within our Dev team, we’ve made the decision to delay the September 23rd release. The Alpha test created an opportunity for us to calibrate and focus the game on what will make it uniquely compelling — survival under pressure, mystery and lore around every corner, raid-like endgame challenges, and Bungie’s genre-defining FPS combat. We’re using this time to empower the team to create the intense, high-stakes experience that a title like Marathon is built around. This means deepening the relationship between the developers and the game’s most important voices: our players. Translation to plain English: The game as currently imagined stinks, so we’re going back to the drawing board. We can’t explain why we, the game’s developers, didn’t know that it stunk, and instead seemingly needed to wait for scathing alpha test feedback from players — but Occam’s Razor clearly suggests the problem is that decisions at Bungie are made by executives with no taste. ★
Apple executives were a little light on substantial interviews last week, but a good one dropped today — Craig Federighi talking to Federico Viticci on the vast Mac-style windowing overhaul in iPadOS 26:

“We don’t want to create a boat car or, you know, a spork”, Federighi begins. Seeing the confused look on my face, he continues: “I don’t know if you have those in Italy. Someone said, ‘If a spoon’s great, a fork’s great, then let’s combine them into a single utensil, right?’ It turns out it’s not a good spoon and it’s not a good fork. It’s a bad idea. And so we don’t want to build sporks”. [...]

By and large, one could argue that Apple has created one such convertible product with the iPad Pro, but Federighi strongly believes in the Mac and iPad each having their own reasons to exist. “The Mac lets the iPad be iPad”, Federighi notes, adding that Apple’s objective “has not been to have iPad completely displace those places where the Mac is the right tool for the job”. [...]

I don’t need to ask Federighi the perennial question of running macOS on the iPad, since he goes there on his own. “I don’t think the iPad should run macOS, but I think the iPad can be inspired by elements of the Mac”, Federighi tells me. “I think the Mac can be inspired by elements of iPad, and I think that that’s happened a great deal”.

I think Apple has tied itself into knots in the past decade trying to make the iPad more useful to more advanced users without making it resemble the Mac at a superficial level. But it’s been obvious all along that it should resemble the Mac at a superficial level. Apple solved windowing in 1984. Use that. ★
You may recall from my “Siri Is Super Dumb and Getting Dumber” piece back in January that the Dickinson Public Schools District in North Dakota had the rather unfortunate nickname the “Midgets”. Back in March, the school district announced they’d be retiring the nickname, after nearly a century. Last month they announced their new name: the Mavericks. I’m going to call this the best rebranding of the year. ★
Simon Willison, regarding the various rebuttals to “The Illusion of Thinking” research paper (which I linked to here) from Apple’s machine learning team:

I thought this paper got way more attention than it warranted — the title “The Illusion of Thinking” captured the attention of the “LLMs are over-hyped junk” crowd. I saw enough well-reasoned rebuttals that I didn’t feel it worth digging into. And now, notable LLM skeptic Gary Marcus has saved me some time by aggregating the best of those rebuttals together in one place! [...]

And therein lies my disagreement. I’m not interested in whether or not LLMs are the “road to AGI”. I continue to care only about whether they have useful applications today, once you’ve understood their limitations. [...]

They’re already useful to me today, whether or not they can reliably solve the Tower of Hanoi or River Crossing puzzles.

Count me in with Willison. I think the question of what constitutes “reasoning” is interesting, but when it comes to these systems, I’m mostly just interested in whether they’re useful or not, and if so, how. See also: Victor Martinez’s rebuttal to the most-cited rebuttal. ★
WhatsApp co-founder Jan Koum, back in 2012 (two years before Facebook acquired them for $19 billion, 13 years before this week’s introduction of ads into WhatsApp): Advertising isn’t just the disruption of aesthetics, the insults to your intelligence and the interruption of your train of thought. At every company that sells ads, a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packaged and shipped out... And at the end of the day the result of it all is a slightly different advertising banner in your browser or on your mobile screen. Remember, when advertising is involved you the user are the product. At WhatsApp, our engineers spend all their time fixing bugs, adding new features and ironing out all the little intricacies in our task of bringing rich, affordable, reliable messaging to every phone in the world. That’s our product and that’s our passion. Your data isn’t even in the picture. We are simply not interested in any of it. When people ask us why we charge for WhatsApp, we say “Have you considered the alternative?” ★
These screens make for a useful overview of what Apple thinks the highlight features are in each OS. ★
Aric Toler, a visual investigations reporter for The New York Times, on X back in April: For about a year, I worked with a retired British academic named Alasdair Spark to solve a mystery: where did the original photo from the end of The Shining come from, and where/when was it captured? Last week, we finally found the answer. ★
Nice piece in Fast Company by Zachary Petit: One critical moment came in February 2010, when J. Crew featured Field Notes in its catalog, alongside the retailer’s other “personal favorites from our design heroes.” There was a Timex watch, Ray-Bans, Sperry shoes — “and out of fucking nowhere, Field Notes,” Coudal says. “And when that happened, a lot changed for us.” Coudal says it gave the brand instant credibility — after all, if it was good enough for J. Crew, it was good enough for your store. In time, friends began sending him screenshots of Field Notes in TV shows; he and Draplin would see people jotting notes in them in bars and elsewhere; on the design web, they became an obsession. By 2014, there was even a subreddit dedicated to them titled “FieldNuts.” ★
Fred Lambert, writing for Electrek:

Bloomberg has just released an embarrassingly bad report about the self-driving space, in which it claimed Tesla has an advantage over Waymo by misrepresenting data. [...]

The report compares Tesla’s and Waymo’s self-driving efforts, going so far as to claim that “Tesla is closer to vehicle autonomy than peers.” Right off the bat this smells fishy, given that Waymo is actually operating self-driving taxis in several cities, and Tesla ... is not.

Steve Man, the Bloomberg Intelligence analyst behind the report, based his report on Tesla’s own misleading quarterly “Autopilot Safety Report.” The report is widely considered to be unserious for several main reasons:

- Tesla bundles all miles from its vehicles using Autopilot and FSD technology, which are considered level 2 ADAS systems that require driver attention at all times. Drivers consistently correct the systems to avoid accidents.
- Tesla Autopilot, which is standard on all Tesla vehicles, is primarily used on highways, where accidents occur at a significantly lower rate per mile compared to city driving.
- Tesla only counts events that deploy an airbag or a seat-belt pretensioner. Fender-benders, curb strikes, and many ADAS incidents never appear, keeping crash counts artificially low.
- Finally, Tesla’s handpicked data is compared to NHTSA’s much broader statistics that include all collision events, including minor fender benders.

Trusting Tesla’s own safety report is like saying, “Elon Musk says Tesla is ahead, so they must be ahead.” The highway/city point alone is worth a quick back-of-the-envelope illustration, below.
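To make that road-mix confound concrete, here's some toy arithmetic. Every rate and mileage split below is a made-up number for illustration only — none of these are Tesla's or NHTSA's figures. The point is just that a fleet whose assisted miles skew toward highways will post a much lower crash rate than a baseline mixing all driving, even with identical per-road safety.

```swift
// Toy illustration of the road-mix confound in per-mile crash comparisons.
// Every number here is invented for the example; none come from Tesla or NHTSA.
let highwayCrashesPerMillionMiles = 0.2  // highways: few crashes per mile
let cityCrashesPerMillionMiles = 1.0     // city streets: far more

// Suppose Autopilot miles are 90% highway, 10% city...
let autopilotRate = 0.9 * highwayCrashesPerMillionMiles +
                    0.1 * cityCrashesPerMillionMiles      // 0.28

// ...while the national baseline is 40% highway, 60% city.
let baselineRate = 0.4 * highwayCrashesPerMillionMiles +
                   0.6 * cityCrashesPerMillionMiles       // 0.68

// Same underlying safety on each road type, yet the skewed fleet
// looks about 2.4x safer purely because of where it drives.
print(autopilotRate, baselineRate, baselineRate / autopilotRate)
```

And that's before the airbag-only event counting, which shrinks the numerator on Tesla's side as well. ★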
Eli Tan and Mike Isaac, reporting for The New York Times:

On Monday, WhatsApp said it would start showing ads inside its app for the first time. The promotions will appear only in an area of the app called Updates, which is used by around 1.5 billion people a day. WhatsApp will collect some data on users to target the ads, such as location and the device’s default language, but it will not touch the contents of messages or whom users speak with. The company added that it had no plans to place ads in chats and personal messages.

(a) I’ve never once looked at the Updates tab in WhatsApp; (b) does anyone doubt they’re going to put ads in the other tabs sooner or later? ★
Todd Spangler, Variety:

Meanwhile, the Trump Mobile “47 Plan” is pricier than the unlimited plans from prepaid services operated by Verizon’s Visible, AT&T’s Cricket Wireless and T-Mobile’s Metro, which are each around $40 per month. The Trump T1 Phone, which runs Google’s Android operating system, will cost $499. It features a 6.8-inch touch-screen with a 120 Hz refresh rate. The smartphone also has a “fingerprint sensor and AI Face Unlock,” according to the company’s website. Reps for Trump Mobile didn’t respond to an inquiry about what company is manufacturing the Android phone.

The Wall Street Journal, “Trump’s Smartphone Can’t Be Made in America for $499 by August”:

A spokesman for the Trump Organization said in an email that “manufacturing for the new phone will be in Alabama, California and Florida.” Despite the language in the press release, Eric Trump indicated that the first wave of phones wouldn’t be built here. “You can build these phones in the United States,” the Trump son told podcaster Benny Johnson on Monday morning on The Benny Show after holding up a gilded device that looked just like an Apple iPhone. “Eventually, all the phones can be built in the United States of America. We have to bring manufacturing back here.”

The Journal goofs, bigly, by claiming that the T1 “shows some specs that would beat Apple’s biggest, priciest iPhone models”. The T1 specs are so idiotic that one of them claims “5000mAh long life camera”, conflating battery capacity with (I guess?) focal distance.

The Verge, “The Trump Mobile T1 Phone Looks Both Bad and Impossible”:

Where things get especially strange, though, is its supposed combination of Android 15, 5G, and a 3.5mm headphone jack. In many ways, these are opposing specs: Android 15 is generally only available on very recent devices, many cheap phones still don’t support 5G, and almost every phone maker has stopped including headphone jacks with their devices in the last few years. There are a few that have both, but modern phones with a headphone jack are few and far between.

And pretty much all made in China. I’ll give them credit for making them available exclusively in gold. That’s on brand. But I’m guessing the quality will be on par with Trump Watches, which is to say, “RUMP”-quality. ★
Recorded in front of a live audience at The California Theatre in San Jose Tuesday evening, special guests Joanna Stern and Nilay Patel join me to discuss Apple’s announcements at WWDC 2025.

3D video with spatial audio: Coming soon, exclusively in Sandwich Vision’s Theater on Vision Pro, available on the App Store. This year’s on-demand version of the show in Theater isn’t ready yet, but it looks really good: better than last year’s by a long shot, and significantly better than the bandwidth-constrained livestream Tuesday night. The livestream looked good; the on-demand version coming in a few days looks pretty amazing.

Sponsored by:

- iMazing: The world’s most trusted software to transfer and save your messages, music, files and data from your iPhone or iPad to your Mac or PC.
- DetailsPro: Design with SwiftUI anytime, anywhere — on iPhone, iPad, Mac, or Apple Vision Pro.
- Ooni: Next-gen pizza power. The Koda 2 Pro oven features smarter heat, more room, and easier control. Save 10% with code thetalkshow.

As ever, I implore you to watch on the biggest screen you can (real, or virtual). We once again shot and mastered the video in 4K, and it looks and sounds terrific. All credit and thanks for that go to my friends at Sandwich, who are nothing short of a joy to work with.
Amanda Silberling, writing at TechCrunch: When you ask the AI a question, you have the option of hitting a share button, which then directs you to a screen showing a preview of the post, which you can then publish. But some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world. When I woke up this morning, I did not expect to hear an audio recording of a man in a Southern accent asking, “Hey, Meta, why do some farts stink more than other farts?” Flatulence-related inquiries are the least of Meta’s problems. On the Meta AI app, I have seen people ask for help with tax evasion, if their family members would be arrested for their proximity to white-collar crimes, or how to write a character reference letter for an employee facing legal troubles, with that person’s first and last name included. Others, like security expert Rachel Tobac, found examples of people’s home addresses and sensitive court details, among other private information. Katie Notopoulos, writing at Business Insider (paywalled, alas): I found Meta AI’s Discover feed depressing in a particular way — not just because some of the questions themselves were depressing. What seemed particularly dark was that some of these people seemed unaware of what they were sharing. People’s real Instagram or Facebook handles are attached to their Meta AI posts. I was able to look up some of these people’s real-life profiles, although I felt icky doing so. I reached out to more than 20 people whose posts I’d come across in the feed to ask them about their experience; I heard back from one, who told me that he hadn’t intended to make his chat with the bot public. (He was asking for car repair advice.) ★
Kashmir Hill, reporting today for The New York Times:

Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

That’s the lede to Hill’s piece, and I don’t think it stands up one iota. Hill presents a lot of evidence that ChatGPT gave Torres answers that fed his paranoia and delusions. There’s none that ChatGPT caused them. But that’s the lede.

At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating ideas that weren’t true but sounded plausible. “This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.” Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.

Someone with prescriptions for sleeping pills, anti-anxiety meds, and ketamine doesn’t sound like someone who was completely stable and emotionally sound before encountering ChatGPT. And it’s Torres who brought up the “Am I living in a simulation?” delusion. I’m in no way defending the way ChatGPT answered his questions about a Matrix-like simulation he suspected he might be living in, or his questions about whether he could fly if he truly believed he could, etc. But the premise of this story is that ChatGPT turned a completely mentally healthy man into a dangerously disturbed mentally-ill one, and it seems rather obvious that the actual story is that it fed the delusions of an already unwell person. Some real Reefer Madness vibes to this. ★
Michael Tsai, “Apple’s Spin on AI and iPadOS Multitasking”:

I do want to call out that, in multiple interviews, they are kind of setting up strawmen to knock down. They keep saying that people say Apple is behind in AI because it doesn’t have its own chatbot. To me, Apple has been clear that it has a different strategy, and I think that strategy mostly makes sense. I have never heard someone wish for an Apple chatbot. The issue is that everyone can see that Apple seems behind in executing said strategy, both that features didn’t ship on time and that the ones that did ship don’t measure up to similar features from other companies.

Secondly, they seem to be trying to debunk John Gruber’s claim that Apple showed vaporware at the last WWDC. But Apple’s assertion that there was actual, working software doesn’t contradict anything Gruber wrote. He put it at level 0/4 because there wasn’t even a live demo, just a pre-packaged video. If it can’t be demoed to the media in a controlled setting, even calling it “demoware” would be charitable. Wikipedia says, “After Dyson’s article, the word ‘vaporware’ became popular among writers in the personal computer software industry as a way to describe products they believed took too long to be released after their first announcement.” Is that not exactly what happened here?

The whole “Siri, when is my mom’s flight landing?” segment of last year’s WWDC keynote definitely wasn’t demoware either. It was never demoed. Whether the feature was actually running, and actually capable of doing what they said it could, even just some of the time along a golden path, doesn’t matter. Even the keynote video didn’t show the actual feature working. It kept cutting away from the iPhone that was purportedly performing the feature back to presenter Kelsey Peterson at every single step.

Apple’s internal rule for keynote demos is that the entire feature has to be real, and capturable in a single take of video. I’ve spoken to people who’ve been in keynotes, and many more who’ve done WWDC session videos. They’ve got really strict rules about everything being real. That doesn’t mean they always show the feature in a single take in the final cut of the presentation, but it has to be possible, just like it would have to be in a live stage presentation. But that Siri demo in last year’s keynote is almost like a series of screenshots. We never see Peterson speak to Siri and then watch the results come in. There’s not one single shot in the whole demo that shows one action leading to the next. It’s all cut together in an unusual way for Apple keynote demos. Go see for yourself at the 1h:22m mark.

I spoke this week, off the record, to multiple trusted sources in Apple’s software engineering group, and none of them ever saw an internal build of iOS that had this feature before last year’s keynote. That doesn’t mean there wasn’t such a build. But none of my sources ever saw one, and they don’t believe there was one, because they’re in positions where they believe that if there had been such a build, their teams would have had access to it. Most rank-and-file engineers within Apple do not believe that feature existed in an even vaguely functional state a year ago, and the first any of them ever heard of it was when they watched the keynote with the rest of us on the first day of WWDC last year. I’m quite certain Apple’s executives believed this feature could be shipped at some point in the iOS 18 year.
It’d be crazy to announce the feature if they didn’t believe they could ship it, and Apple’s executives aren’t crazy. I’m also quite certain that eventually there was a functional implementation of the now-abandoned “v1” of the more personalized Siri, but it was unreliable with no path forward to make it reliable. (I think it was far worse than “not up to Apple’s high standards” — it was clearly unshippable.)

But, as Tsai notes, let’s just take Apple’s executives at their word and concede that there was such a build a year ago. It’s still vaporware at this point. Vaporware doesn’t mean “completely fabricated”. It just means “promised but hasn’t shipped”. Tsai links to this Mastodon post from Russell Ivanovic:

“This narrative that it was vaporware is nonsense”. Craig. Apple. My guy. You announced something that never shipped. You made ads for it. You tried to sell iPhones based on it. What’s the difference if you had it running internally or not. Still vaporware. Zero difference.

Also, Apple is sticking with the euphemism “in the coming year” for when we can expect to see these next-gen personalized Siri features. Gurman reported today that they’re shooting for next spring. I confirmed with Apple at WWDC that “in the coming year” means “in 2026”. I don’t know why they’re sticking with that euphemistic phrasing, which to many people’s ears makes it sound like the next 12 months, which might include this fall. Just say “next year” instead of “in the coming year”. It’s very obvious that this year’s WWDC keynote went back to an underpromise/overdeliver mindset. But “in the coming year” is raising some users’ hopes misleadingly.
Dan Moren, writing this week at Six Colors: But you’ve heard about all of that, I’m sure, so we’re not going to rehash it. Instead, let’s get personal: I’m picking out, in my opinion, the best and worst new features of each of Apple’s platforms. To be clear, these are my completely scientific and totally well-reasoned expert opinions on the features that were announced, not just some off-the-cuff reactions less than a day later. ★
MG Siegler: The underlying message that they’re trying to convey in all these interviews is clear: calm down, this isn’t a big deal, you guys are being a little crazy. And that, in turn, aims to undercut all the reporting about the turmoil within Apple — for years at this point — that has led to the situation with Siri. Sorry, the situation which they’re implying is not a situation. Though, I don’t know, normally when a company shakes up an entire team, that tends to suggest some sort of situation. That, of course, is never mentioned. Nor would you expect Apple — of all companies — to talk openly and candidly about internal challenges. But that just adds to this general wafting smell in the air. The smell of bullshit. ★
Fun episode of Tested with Adam Savage and Norman Chan. The first segment goes deep on what’s new in VisionOS 26. Apple is ignoring the jokes about the platform’s relative obscurity and has obviously been heads-down on building the platform out and up. VisionOS 26 is a huge year-over-year upgrade. Tons of exciting stuff, and so many little things are so much better. The second segment of the show features cohost Norm Chan going backstage at The Talk Show Live From WWDC on Tuesday night, to speak with Adam Lisagor about the production details of the live immersive broadcast in Theater. (The YouTube version of the show is in editing now — we’ll post it as soon as it’s ready. But the immersive version in Theater is available for purchase now.) ★
Jason Snell:

After last year, Apple could’ve been forgiven for wanting to soft-pedal Apple Intelligence this year and regroup. It didn’t do that, nor did it double down on last year. Instead, it’s chosen a middle ground — a bit safe and familiar but also a place where Apple can feel a bit more like itself. In the long run, it needs to get this right. In the short term, maybe it should focus on meeting its users where they are, rather than pretending to be something it’s not.

Agree with Snell’s take completely, I do. ★
Stephen Hackett has a list of the Intel Macs that MacOS 26 Tahoe supports, and the ones they’re dropping support for this year. Apple has gone through three CPU architecture transitions in the Mac’s history:

- 68K to PowerPC, starting in 1994
- PowerPC to Intel, starting in 2006
- Intel to Apple Silicon, starting in 2020

With the 68K–PowerPC transition, they supported 68K Macs through Mac OS 8.1, which was released in January 1998. With the PowerPC–Intel transition, they only supported PowerPC Macs for two Mac OS X versions, Mac OS X 10.4 Tiger (which initially shipped PowerPC-only in 2005) and 10.5 Leopard in October 2007. The next release, 10.6 Snow Leopard in August 2009, was Intel-only. (Mac OS X dropped to a roughly two-year big-release schedule during the initial years after the iPhone, when the company prioritized engineering resources on iOS. It’s easy to take for granted that today’s Apple has every single platform on an annual cadence.)

With next year’s version going Apple Silicon-only, they’ll have supported Intel Macs for five major MacOS releases after the debut of the first Apple Silicon Macs. I think that’s about the best anyone could have hoped for. ★
Tight 7-minute video at the WSJ: Apple’s AI rollout has been rocky, from Siri delays to underwhelming Apple Intelligence features. WSJ’s Joanna Stern sits down with software chief Craig Federighi and marketing head Greg Joswiak to talk about the future of AI at Apple — and what the heck happened to that smarter Siri. ★
Ben Thompson: To that end, while I understand why many people were underwhelmed by this WWDC, particularly in comparison to the AI extravaganza that was Google I/O, I think it was one of the more encouraging Apple keynotes in a long time. Apple is a company that went too far in too many areas, and needed to retreat. Focusing on things only Apple can do is a good thing; empowering developers and depending on partners is a good thing; giving even the appearance of thoughtful thinking with regards to the App Store (it’s a low bar!) is a good thing. Of course we want and are excited by tech companies promising the future; what is a prerequisite is delivering in the present, and it’s a sign of progress that Apple retreated to nothing more than that. ★
I’ve got iOS 26 installed on a spare phone already, and I like the new UI a lot. In addition to just plain looking cool, Apple has tackled a lot of longstanding minor irritants. For example, the iOS contextual menu for text selections — the one with Cut/Copy/Paste. For years now there have been a lot of other useful commands in there, including “Share…” at the very end. But to get to the extra commands, you had to tediously swipe, swipe, swipe. Now, with one tap you can expand the whole thing into a vertical menu. Elegant. There’s some stuff in MacOS 26 Tahoe I already don’t like, such as putting needless icons next to almost every single menu item. But overall my first impression of Liquid Glass on MacOS is good too. It’s fun, and lots of little details are nice — joyful and useful in an old-school Mac way. ★
Stephen Hackett, noting the biggest news of the day: Something jumped out at me in the macOS Tahoe segment of the WWDC keynote today: the Finder icon is reversed. […] The Big Sur Finder icon has been with us ever since, and I hope Apple reverses course here. I’m obviously joking about this being the biggest news of the day, but it really does feel just plain wrong to swap the dark/light sides. The Finder icon is more than an icon, it’s a logo, a brand. ★
With WWDC25 bringing the biggest design overhaul since iOS 7, you’ll want to prototype your new interfaces fast. DetailsPro lets you build real SwiftUI layouts directly on your iPhone — no Mac required, no code needed. Mock up your WWDC-inspired designs during coffee breaks. Export clean SwiftUI code straight to Xcode when you’re ready. While everyone else is still thinking, you’re already building. Free to use, with pro features if you need them. Perfect for the design renaissance. ★
Filipe Espósito, in a scoop for 9to5Mac all the way back in October: 9to5Mac has learned details about the new project from reliable sources familiar with the matter. The new app combines functionality from the App Store and Game Center in one place. The gaming app is not expected to replace Game Center. In fact, it will integrate with the user’s Game Center profile. According to our sources, the app will have multiple tabs, including a “Play Now” tab, a tab for the user’s games, friends, and more. In Play Now, users will find editorial content and game suggestions. The app will also show things like challenges, leaderboards, and achievements. Games from both the App Store and Apple Arcade will be featured in the new store. Even before Mark Gurman corroborated this report last week, I’ve had a spitball theory about what it might mean. Perhaps this is about more than having one app (Games) for finding and installing games, and another (App Store) for finding and installing apps. It could signal that Apple is poised to establish different policies for apps and games. Like, what if games still use the longstanding 70/30 commission split (with small business developers getting 85/15), but non-game apps get a new reduced rate? Say, 80/20 or even 85/15 right off the top, with small business developers and second-year subscriptions going to 90/10? Having separate store apps for apps and games would help establish the idea that games and apps are two entirely different markets. ★
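To put those hypothetical splits in dollar terms, here’s a quick sketch in Swift. (The games rates are Apple’s longstanding ones; the “hypothetical” app rates are just my spitballed numbers above, not anything Apple has announced.)

    import Foundation

    // Developer proceeds on a $9.99 sale under various splits.
    // The "hypothetical" rates are the spitballed ones above,
    // not announced Apple policy.
    let price = 9.99
    let splits: [(label: String, commission: Double)] = [
        ("Games, standard 70/30", 0.30),
        ("Games, small business 85/15", 0.15),
        ("Apps, hypothetical 80/20", 0.20),
        ("Apps, hypothetical 90/10", 0.10),
    ]
    for s in splits {
        let proceeds = price * (1 - s.commission)
        print("\(s.label): $\(String(format: "%.2f", proceeds)) to the developer")
    }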
Scharon Harding, writing at Ars Technica: “Just disconnect your TV from the Internet and use an Apple TV box.” That’s the common guidance you’ll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we’ve consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers. But how private are Apple TV boxes, really? Apple TVs don’t use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use — could those apps provide information about you to advertisers or Apple? In this article, we’ll delve into what makes the Apple TV’s privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever. tvOS is perhaps Apple’s least-talked-about platform. (It surely has orders of magnitude more users than VisionOS, but VisionOS gets talked about because it’s so audacious.) But it might be their platform that’s the furthest ahead of its competition. Not because tvOS is insanely great, but it’s at least pretty good, and every other streaming TV platform seems to be in a race to make real the future TV interface from Idiocracy. It’s not just that they’re bad interfaces with deplorable privacy, it’s that they’re outright against the user. ★
Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar, from Apple’s Machine Learning Research team: Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. [...] Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities. The full paper is quite readable, but today was my travel day and I haven’t had time to dig in. And it’s a PDF so I couldn’t read it on my phone. (Coincidence or not that this dropped on the eve of WWDC?) My basic understanding after a skim is that the paper shows, or at least strongly suggests, that LRMs don’t “reason” at all. They just use vastly more complex pattern-matching than LLMs. The result is that LRMs effectively overthink on simple problems, outperform LLMs on mid-complexity puzzles, and fail in the same exact way LLMs do on high-complexity tasks and puzzles. ★
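For context on what “exact computation” means here: the paper’s puzzles (Tower of Hanoi among them) have known explicit algorithms with tunable complexity. Here’s a minimal sketch of such an algorithm in Swift, purely for illustration; a solver like this scales to any number of disks, which is precisely what the paper finds LRMs fail to do.

    // Tower of Hanoi, the classic controllable-complexity puzzle:
    // solving n disks takes exactly 2^n - 1 moves, so difficulty
    // can be dialed up smoothly just by adding disks.
    func hanoi(_ n: Int, from: String, to: String, via: String, moves: inout [String]) {
        guard n > 0 else { return }
        hanoi(n - 1, from: from, to: via, via: to, moves: &moves)
        moves.append("disk \(n): \(from) -> \(to)")
        hanoi(n - 1, from: via, to: to, via: from, moves: &moves)
    }

    var moves: [String] = []
    hanoi(3, from: "A", to: "C", via: "B", moves: &moves)
    print(moves.count) // 7 moves, i.e. 2^3 - 1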
Mark Gurman, in his eve-of-WWDC Power On column at Bloomberg: The Liquid Glass interface is going to be the most exciting part of this year’s developer conference. It will also be a bit of a distraction from the reality facing Apple: The company is behind in artificial intelligence, and WWDC will do little to change that. Instead, Apple is making its successful operating system franchise more capable and sleek — even as others move on to more groundbreaking AI-centric interfaces. Perhaps the first major hint that Apple was moving toward fluidity in the UI was the Dynamic Island, which doesn’t merely expand and contract as it changes shape, but rather appears to flow, with a pleasant viscosity. The best analogy for Apple right now might be the car industry. Apple produces the best gas cars on the road (its operating systems) and is making them even more upscale. It has rolled out a hybrid (Apple Intelligence), but it’s struggling to make a true all-electric vehicle (unlike companies such as OpenAI and Alphabet Inc.’s Google). This is such a terrible analogy. If you buy an EV, you use it instead of your old gas-powered car. There’s nothing from OpenAI or Google that allows you to not use a conventional device — phone, tablet, or PC. The only way to use ChatGPT, or Gemini, or Google’s rather amazing Veo 3 video generation tool, is using a phone or computer running iOS, MacOS, Android, Windows, or Linux. Gurman’s analogy would only work if the way you got around in an EV was to put it in the back of a gas-powered flatbed truck. Gas-powered vehicles are probably going away. I sure hope they do. But cars and trucks aren’t going away. A better analogy is that AI is doing to today’s dominant OSes what web browsers did to Windows in the late 1990s. They’re adding new interactive layers atop the old. Windows didn’t go away. Microsoft still makes tons of money from Windows today. But Windows’s primacy as a platform went away. And: Microsoft pivoted quickly in the face of Netscape and the web’s threat, and created Internet Explorer, which squashed Netscape, and became, for at least a decade, the preeminent web browser. It was essential for Apple to create Safari/WebKit for Mac OS X to thrive. If Apple hadn’t succeeded with WebKit on Mac OS X they wouldn’t have had their own first-class web rendering engine to adapt for a 3.5-inch touchscreen in 2007. The iPhone without the real web wouldn’t have been the iPhone. And the only reason the original iPhone had the real web is that Apple owned and controlled Safari and WebKit. What Apple, I think, needs for iOS and MacOS is the AI equivalent of what Safari and WebKit were for the web two decades ago. The oft-cited Cook Doctrine says “we need to own and control the primary technologies behind the products we make.” 25 years ago it was obvious that web browsers and rendering engines were primary technologies. Apple certainly couldn’t afford back then to continue to be dependent upon Microsoft for the Mac version of IE, nor on open source cross-platform browsers like Firefox that would never feel native on the Mac (or, more importantly, on future Apple platforms). But Safari and WebKit were, if you think about it, late. They were announced at Macworld Expo in January 2003 (just five months after the debut of this website). Netscape’s blockbuster IPO was in August 1995, over seven years prior. The entire dot-com bubble and bust took place before Safari shipped. 
The Mac, and thus Apple, made do with non-Apple browsers in those intervening years — browsers that were all some mix of non-native clunky UI, slow, incompatible (with Windows IE), ugly (e.g. IE text rendering on Mac OS X), and often downright unstable. (And application crashes on classic Mac OS would often bring down the entire system.) The concern for Apple today is that they’re in trouble if it takes six or seven years for them to get to their Safari/WebKit moment for AI. Things are moving faster with AI today than they were with the web in the 1990s. At the peak of Netscape mania in 1995, there were many who believed Netscape would topple Microsoft. At the time Netscape founder Marc Andreessen proclaimed that Netscape would reduce Windows to “a poorly debugged set of device drivers.” That obviously didn’t happen. But perhaps not just a reason but the reason why that didn’t happen is that Microsoft quickly built and shipped a better browser than Netscape’s. They didn’t just build a browser into Windows, they built a better browser into Windows. And they made a better browser for the Mac too. If it had taken Microsoft until 2003 (when Apple debuted Safari) to ship IE, computing platform history may well have been very different. iOS today is the closest to what Windows was circa 1995. iOS doesn’t have Windows’s 95 percent market share, but the iPhone has some sort of monopoly profit share in mobile device sales. And iOS is plainly dominant. That’s why there’s all this Sturm und Drang surrounding Apple’s App Store commissions and iron-fisted control over all iOS software. After the announcement last year of OpenAI as a partner for “world knowledge” in Apple Intelligence — and, a year later, they’re still the only partner — Wayne Ma at The Information reported that Apple wasn’t paying a cent for this integration, and that the plan was for OpenAI to eventually begin paying Apple in a revenue sharing deal: Neither Apple nor OpenAI are paying each other to integrate ChatGPT into the iPhone, according to a person with knowledge of the deal. Instead, OpenAI hopes greater exposure on iPhones will help it sell a paid version of ChatGPT, which costs around $20 a month for individuals. Apple would take its 30% cut of these subscriptions as is customary for in-app purchases. Sometime in the future, Apple hopes to strike revenue-sharing agreements with AI partners in which it gets a cut of the revenue generated from integrating their chatbots with the iPhone, according to Bloomberg, which first reported details of the deal. That sounds a lot like the revenue sharing deal Apple has with Google for search in Safari — a deal (which is at some degree of risk from Google’s own antitrust problems) that now results in Google paying Apple over $20 billion per year for the traffic Safari sends to Google Search. In hindsight, we now know that web browsers, in and of themselves, don’t generate any money directly. Someone was going to give a good one away free and now almost all of them are free of charge. But that doesn’t mean it isn’t essential for a platform to own and control its own browser. Web search, it turns out, is where the money is on the World Wide Web. Not just some money but an almost unfathomable amount of money. Web search is not primary technology for Apple’s platforms. 
But because they own and control Safari and WebKit, and Safari and WebKit are very good (so that most of Apple’s customers use them), Apple is in a position to profit very handsomely from web search, even though it doesn’t even have a search engine to speak of. Apple’s net annual profit the last few years has been around $95 billion. If we assume Google’s $20B/year traffic acquisition revenue sharing payments to Apple are mostly profit, that means somewhere between 20 and 25 percent of all Apple’s profit comes from that deal. So are LLMs more like browsers (platforms need to own and control their own, but they won’t make money from them directly) or like web search (dominant platforms like Apple’s don’t need their own, but Apple can profit handsomely by charging for integration with their platforms)? I think the answer is somewhere in between. Browsers are essential to personal computing platforms because they run on-device. Web search isn’t essential to own and control because it runs in the cloud, but exists only to serve users running devices. LLMs run both locally and in the cloud. If it takes Apple as long to have its own competitive LLMs as it did to have its own competitive web browser, I suspect they’ll soon be paying to use the LLMs that are owned and controlled by others, not charging the others for the privilege of reaching Apple’s platform users. No simple analogy captures this dynamic. But the threat is palpable. I will say, though, “Liquid Glass” sounds cool.
Kyle Hughes, in a brief thread on Mastodon last week: At work I’m developing a new iOS app on a small team alongside a small Android team doing the same. We are getting lapped to an unfathomable degree because of how productive they are with Kotlin, Compose, and Cursor. They are able to support all the way back to Android 10 (2019) with the latest features; we are targeting iOS 16 (2022) and have to make huge sacrifices (e.g. Observable, parameter packs in generics on types). Swift 6 makes a mockery of LLMs. It is almost untenable. This wasn’t the case in the 2010s. The quality and speed of implementation of every iOS app I have ever worked on, in teams of every size, absolutely cooked Android. [...] There has never been a worse time in the history of computers to launch, and require, fundamental and sweeping changes to languages and frameworks. The problem isn’t necessarily inherent to the design of the Swift language, but that throughout Swift’s evolution Apple has introduced sweeping changes with each major new version. (Secondarily, that compared to other languages, a lower percentage of Swift code that’s written is open source, and thus available to LLMs for use in training corpuses.) Swift was introduced at WWDC 2014 (that one again) and last year Apple introduced Swift 6. That’s a lot of major version changes for a programming language in one decade. There were pros and cons to Apple’s approach over the last decade. But now there’s a new, and major, con: because Swift 6 only debuted last year, there’s no great corpus of Swift 6 code for LLMs to have trained on, and so they’re just not as good — from what I gather, not nearly as good — at generating Swift 6 code as they are at generating code in other languages, and for other programming frameworks like React. The new features in Swift 6 are for the better, but, in a group chat, my friend Daniel Jalkut described them to me as, “I think Swift 6 changed very little, but the little it changed has huge sweeping implications. Akin to the switch from MRR to ARC.” That’s a reference to the change in Objective-C memory management from manual retain/release (MRR) to automatic reference counting (ARC) back in 2011. Once ARC came out, no one wanted to be writing new code using manual retain/release (which was both tedious and a common source of memory-leak bugs). But if LLMs had been around in 2011/2012, they’d only have been able to generate MRR Objective-C code because that’s what all the existing code they’d been trained on used. I’m quite certain everyone at Apple who ought to be concerned about this is concerned about it. The question is, do they have solutions ready to be announced next week? This whole area — language, frameworks, and tooling in the LLM era — is top of mind for me heading into WWDC next week. ★
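To make Jalkut’s point concrete, here’s a minimal sketch of the kind of small-but-sweeping change Swift 6 brought, using its strict concurrency checking as the example. (The names here, Counter and Demo, are hypothetical, purely for illustration.) Swift 5-era code commonly shared a mutable class instance across threads; under Swift 6’s checking, non-Sendable state crossing task boundaries is a compile-time error, so the idiomatic shape of the same logic becomes an actor:

    // Swift 5 code often expressed this as a class with a mutable
    // var, shared across threads. Swift 6's strict concurrency
    // checking rejects that unless the type is Sendable; an actor
    // serializes access and is data-race-free by construction.
    actor Counter {
        private(set) var value = 0
        func increment() { value += 1 }
    }

    @main
    struct Demo {
        static func main() async {
            let counter = Counter()
            await withTaskGroup(of: Void.self) { group in
                for _ in 0..<100 {
                    group.addTask { await counter.increment() }
                }
            }
            print(await counter.value) // always 100
        }
    }

Trivial in isolation, but retrofitting that shape onto a big Swift 5 codebase is exactly the sort of sweeping implication Jalkut means, and it’s a style of code that barely exists in the public corpuses LLMs trained on.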
Thomas Ptacek: LLMs can write a large fraction of all the tedious code you’ll ever need to write. And most code on most projects is tedious. LLMs drastically reduce the number of things you’ll ever need to Google. They look things up themselves. Most importantly, they don’t get tired; they’re immune to inertia. Think of anything you wanted to build but didn’t. You tried to home in on some first steps. If you’d been in the limerent phase of a new programming language, you’d have started writing. But you weren’t, so you put it off, for a day, a year, or your whole career. I can feel my blood pressure rising thinking of all the bookkeeping and Googling and dependency drama of a new project. An LLM can be instructed to just figure all that shit out. Often, it will drop you precisely at that golden moment where shit almost works, and development means tweaking code and immediately seeing things work better. That dopamine hit is why I code. Ptacek says he mostly writes in Go and Python, and his essay doesn’t even mention Swift. But the whole essay is worth keeping in mind ahead of WWDC. There is no aspect of the AI revolution where Apple, right now today, is further behind than agentic LLM programming. (Swift Assist, announced and even demoed last year at WWDC, would have been a first step in this direction, but it never shipped, even in beta.) ★
I don’t use the web interface to Movable Type, my moribund-but-works-just-great CMS, very often. But I was using it today and noticed something odd. Next to the small-text metadata that says I’ve written 35,086 entries in total, it said I had one draft. One. I don’t use the drafts feature in Movable Type — my drafts are stored locally as text files in BBEdit or unpublished posts in MarsEdit. I didn’t recall ever saving a draft in Movable Type, but, I thought to myself, I probably did it from my phone — which is the one device where I do publish and edit posts through the MT web interface because (to my knowledge) there’s no equivalent of MarsEdit for iOS. The draft turned out to be a Linked List post pointing to Bob Lefsetz’s reaction to the then-new Beats acquisition by Apple for $3 billion, which was considered a lot of money for an acquisition at the time. The blockquote wasn’t fully Markdown-formatted yet — which is sort of tedious for me on the phone, but a single keyboard shortcut in either BBEdit or MarsEdit on my Mac. That’s probably why I left it as a draft. So, just now, I finished the formatting, and changed it from draft to published. Voilà — a post I wrote on 1 June 2014 that hadn’t been published until a few minutes ago. I suspect many of you will think Lefsetz’s 2014 remarks on Tim Cook ring more true today than they did then. Lending strong credence to my theory that this forgotten draft was created on my phone is that 1 June 2014 was the Sunday before WWDC 2014, when I’d have been travelling, and thus using my phone for posting. Funny coincidence that I happened to notice it today, on the cusp of WWDC 2025. ★
From his family, on Atkinson’s Facebook page: We regret to write that our beloved husband, father, and stepfather Bill Atkinson passed away on the night of Thursday, June 5th, 2025, due to pancreatic cancer. He was at home in Portola Valley in his bed, surrounded by family. We will miss him greatly, and he will be missed by many of you, too. He was a remarkable person, and the world will be forever different because he lived in it. He was fascinated by consciousness, and as he has passed on to a different level of consciousness, we wish him a journey as meaningful as the one it has been to have him in our lives. He is survived by his wife, two daughters, stepson, stepdaughter, two brothers, four sisters, and dog, Poppy. One of the great heroes in not just Apple history, but computer history. If you want to cheer yourself up, go to Andy Hertzfeld’s Folklore.org site and (re-)read all the entries about Atkinson. Here’s just one, with Steve Jobs inspiring Atkinson to invent the roundrect. What a man, what a mind. ★
A brief follow-up to my love letter to Apple’s discontinued MagSafe Battery Pack this week. I wrote: They’re the only Lightning devices left in my life and they’re so good I’m happy to still keep one Lightning cable in my travel bag to use them. Among its other unique bits of cleverness, Apple’s MagSafe Battery Pack supports another cool feature: when attached to your phone, you can plug the charging cable into the phone, and after the phone gets to 100 percent charge, the phone will recharge the connected battery pack. So, if you own a MagSafe Battery Pack, you can recharge it even if you don’t have a Lightning cable handy. Just attach it to your iPhone and plug your USB-C cable into the phone, not the battery pack. I’m not aware of any other battery packs that support this. That said, I still keep that one Lightning cable in my travel bag for the MagSafe Battery Pack because I want to be able to charge it whenever I want. Like, say, if I want to leave it behind, recharging, while I go elsewhere with my iPhone. Also, I like using the MagSafe Battery Pack as my bedside MagSafe charger, including in hotels. I like being able to check my phone from bed without worrying about a cable. In fact, I use one of my MagSafe Battery Packs as my bedside charger at home, not just while travelling. Such a great little device. Really hope they make a sequel. ★
WhatsApp, on their official blog back in April 2023: Last year, we introduced the ability for users globally to message seamlessly across all their devices, while maintaining the same level of privacy and security. Today, we’re improving our multi-device offering further by introducing the ability to use the same WhatsApp account on multiple phones. A feature highly requested by users, now you can link your phone as one of up to four additional devices, the same as when you link with WhatsApp on web browsers, tablets and desktops. Each linked phone connects to WhatsApp independently, ensuring that your personal messages, media, and calls are end-to-end encrypted, and if your primary device is inactive for a long period, we automatically log you out of all companion devices. When I wrote about WhatsApp finally shipping for iPad earlier this week, I mentioned that you couldn’t use a secondary phone as a linked device to your primary phone. That used to be true, but obviously, I missed that this changed two years ago. Glad to know it. I’ve already added my Android burner and my spare iPhone that I use for summer iOS betas. WhatsApp has a support document on linking devices that explains the somewhat hidden way you do this with a secondary phone. My thanks to several readers who pointed me to this. This makes it seem all the more spiteful, though, that Meta didn’t allow the iPhone version of WhatsApp to run on iPads (like they do with the still-iPhone-only Instagram app). I heard from a little birdie this week — second- or maybe even third-hand, so take it with a grain of salt — that Meta had this WhatsApp for iPad version ready to go for a while, and has been more or less sitting on an iPad version of Instagram, as negotiating chits with Apple. Negotiating for what, I don’t know. But if that’s true, perhaps some (but definitely not all) of the ice has thawed between the two companies. I don’t see it happening, but it sure would get a big audience response if Instagram for iPad gets some sort of announcement during the WWDC keynote, perhaps as part of an “iPadOS is now a fuller, more complete computing experience than ever” segment. One other oddity I encountered, when adding my Android phone as a linked device: by design, there is no way to sign out of WhatsApp on your primary iOS or Android device. If you are signed in to WhatsApp using another phone number, the only way to sign out on that device and then set it up as a linked device to your primary WhatsApp account is to delete WhatsApp from your phone and reinstall it. Weird. ★