Apple might break up with Intel. And stay single forever.
Apple released its newest line of 13-inch MacBook Pro laptops this week. The key update is the new Magic Keyboard, which feels almost the same as the one on my 2014 Mac, making the entire MacBook line free of the terrible butterfly keyboards. The upgrade also bumps the base model's SSD capacity to 256 GB and brings 10th-generation Intel processors, but only on the higher-end 13-inch MacBook Pros. Yes, there are actually two separate laptops bundled under a common name. The two lower-end models still use the 8th-gen chips from the last model, along with slower previous-generation memory, and have only two ports (including the one used while charging).
Truth be told, I was waiting to buy this one. My mid-2014 13-inch MacBook died recently, and since my main work machine was an iMac, I decided to use my iPad for portability. The other reason was that I watched a friend purchase a top-spec 15-inch MBP with that terrible butterfly keyboard, only to have two of its keys come off. Imagine the horror of purchasing a $3,200 laptop and experiencing this within two years!
Now my choice becomes interesting, because last month Apple also released the new MacBook Air with 10th-generation Intel chips and the updated Magic Keyboard. And the mid-tier Air and the lowest-end Pro come at the exact same price: $1,299.
$1,299 Air advantages - 10th-gen processor, faster memory, double the storage (512 GB), 10% better rated battery life, real function keys, no Touch Bar.
Base model Pro advantages - an older but beefier processor, a brighter display with better color accuracy (though both are the same Retina resolution), better-sounding speakers, and a Touch Bar for those who like it.
There is one problem though. Reviews of the Air point to a heating issue, which brings us to the concept of Thermal Design Power (TDP) in mobile devices. TDP was historically the maximum amount of heat that the processor could generate under the maximum possible load. Intel disagrees, and is notorious for understating its TDP numbers, sometimes by a factor of two. Datacenters and desktops have a much higher TDP budget. A lot of research goes into optimizing power and cooling for datacenters, with Google's subsidiary DeepMind claiming to have reduced the power consumption cost of Google's datacenters by 40%, a claim recently dismissed by Google insiders as overblown. Mobile devices do not have the space for multiple fans and heat sinks, leading to a tradeoff between a laptop's size and the maximum amount of computing power that can be put into it without frying the whole thing. The MacBook Air has traditionally used chips with rated TDPs of 10 W or below, while the Pros, with better cooling, can afford more powerful heat-dissipating chips of up to 30 W.
There are two major constraints in making powerful mobile devices: nobody likes low-battery alerts and dead devices, and nobody likes their device to run hot. Power efficiency has been a major concern since the start of the smartphone era. For a microprocessor, dynamic power consumption is proportional to the square of the core voltage and to the switching frequency. Advances in manufacturing processes have led to a significant decline in voltage, while the industry-wide shift from chasing frequency to adding cores and parallelism, along with better dynamic frequency scaling, has lowered sustained base clock frequencies. There is another major problem. Modern memory chips need to be refreshed periodically so that the capacitors storing the data do not leak charge and corrupt the information stored. This consumes energy even when the device is essentially doing nothing. The energy cost of moving data from memory to the processor and writing it back often exceeds the cost of the actual computation. This is primarily because we have made significant advances in processing speed while the underlying computer architecture has remained largely the same for the last 50 years or so.
There are unconfirmed reports that Apple is getting fed up with Intel and contemplating building its own laptop chips. While Apple designs its iPhone and iPad processors in-house under a license from Arm (more on this later; it used to be written ARM in caps, now people simply write Arm), it sources its Mac processors from Intel and thus has to rely on Intel's product roadmap in order to release a new product, because no one wants the same processor as last year's model. And Intel is struggling to come out with better-value products, while AMD has been zooming past with its Zen series of architectures. The underlying thesis is that Apple's line of processors is always way ahead of its Android counterparts supplied by Qualcomm, Samsung, et al., so much so that the A13 processors in the iPhone 11 line reach desktop-level performance in SPEC2006 and Geekbench 5 benchmarks with a tiny footprint. (Yes, all benchmarks are wrong, but some are useful.) The idea is that given a proper heat-dissipation system and bigger chips in a MacBook compared to the small iPhone, Apple can do wonders with its own chips.
This move from Intel to their own Arm-based processors is not that simple, though. Let us start from the beginning. A dude called William Shockley researches solid-state physics, co-creates the transistor (essentially a digital switch), gets a Nobel Prize, and in the 1950s moves from New Jersey to Palo Alto to be near his sick mother and to set up a company called Shockley Semiconductor (in the process creating 'Silicon Valley'). There is one hiccup though: Shockley is a terrible manager. This leads eight of his direct hires to leave the company and create a competitor called Fairchild Semiconductor; the octet is later called the 'Traitorous Eight'. Two of them, Gordon Moore and Robert Noyce, leave Fairchild to create the Intel Corporation in 1968. Eugene Kleiner later creates Kleiner Perkins, perhaps one of the most famous venture capital firms, having been an early investor in a lot of past and present tech giants. Another group of early Fairchild engineers leaves to create Advanced Micro Devices (AMD) in 1969. Intel and AMD are two of the largest microprocessor companies to have survived and flourished since then. (To add to this, Shockley's report on probable casualties in WWII influenced the US decision to drop the two atomic bombs on Japan to end the war sooner. In his later life he also became a proponent of eugenics.)
How do AMD and Intel even compete against each other? Why can't Intel make chips for Android devices? What does Arm provide that both Apple and Samsung make chips based on its designs? The answer lies in what is called the Instruction Set Architecture (ISA). When we were kids, we were told that a computer can only understand 1s and 0s. This understanding is the work of the processor. To achieve this, companies like Intel publish a specification stating that a certain sequence of 1s and 0s means something specific. (Though sometimes certain sequences of 1s and 0s make up hidden instructions not published in the big fat manual, and do interesting things.) Intel and AMD currently use the same ISA, called x86-64 (or AMD64; Intel calls it Intel 64), which is an extension of Intel's original x86 ISA. The story goes that Intel (with HP) tried to move beyond its homegrown x86 and failed spectacularly with Itanium, while AMD's 64-bit extension was well received, putting Intel in the embarrassing position of adopting an extension developed by AMD, which until then had been licensing x86 from Intel. There have been a lot of ISAs over the years, with some, like the IBM System/360 line, surviving more than 50 years. Mobile processors on both the Apple and Android side mostly use the Arm family of ISAs (more on this later), which is one of the reasons Intel cannot simply start making mobile chips. ISA changes are almost always backward compatible: things are added but rarely deleted, so as not to break older programs. To take advantage of a new feature on a particular new processor, say a new vector instruction, you need to recompile your code with a compiler that supports the new ISA features. (If you have some time, do look up Transmeta.) Apple's switch from Intel to its own Arm-based chips, if it happens, would not be the first time such a change happens.
Apple switched from the Motorola 68k architecture to PowerPC, and more recently, in 2006, from PowerPC to Intel. Apple had to provide software called Rosetta to let programs built for the PowerPC architecture run on the new Intel Macs. If Apple makes the transition again, from Intel to Arm, at least the reason would be consistent: disappointing progress in the development of new generations of chips.
Now to the Arm story. Who, and what, is Arm? Arm is a fabless semiconductor company primarily in the business of licensing its designs, ISA, and IP to whoever needs them. Unlike Intel, Arm does not fabricate its own chips. Its current customers include Apple, Qualcomm, and Samsung. It was founded in 1990 as Advanced RISC Machines Ltd. (hence the ARM) but is now simply referred to as Arm. The company has been a subsidiary of SoftBank since 2016, with Masayoshi Son, the guy you might know as one of the biggest gamblers in the history of the earth, as its chairman. Since the smartphone era, Arm ISAs like ARMv8 have been the most widely used architectures in the world. By the way, the RISC in the name above stands for Reduced Instruction Set Computer, an ISA classification with fewer, simpler instructions, in contrast to CISC (Complex Instruction Set Computer), where you might even have a single instruction for a whole matrix multiplication, whereas in RISC you have to compose it from the smaller but faster instructions provided. This is a design choice, with Intel adopting CISC (at least as it appears from the outside).
What if someone wants to create their own ISA? Sure. But you need to build your own OS or re-purpose one, create your own compiler so that you or others can write programs for your platform, ask an existing company (or create your own) to build processors for it, and then hope to find customers. Crazy, right? Not quite. Enter RISC-V (pronounced 'risk five'). Krste Asanović, a professor of computer science at UC Berkeley, along with colleagues including David Patterson, started a short three-month project in 2010 to create a new open instruction set. As with all time estimates, the three months turned into years of effort. What started as a somewhat academic project soon turned into a full-fledged ISA (or rather a set of ISAs). Thus the RISC-V Foundation was born, which now has the support of almost every major hardware and tech company. Some of them were initially unhappy, with Arm starting a smear campaign with a website dedicated to it, but then stopping. RISC-V allows anyone to build processors using the ISA. The processors can be closed and proprietary, with no royalties to pay. As more and more governments want to control the processor supply chain, especially for critical components, having a free, open ISA surely helps. Prof. Asanović has also co-founded a fabless semiconductor startup called SiFive to make chips based on the RISC-V ISA. SiFive is currently worth almost half a billion dollars.
If the ISA is the same, then what are Intel and AMD even competing on? This is what is called the microarchitecture. Intel and AMD, for example, implement almost the same x86-64 architecture but differ in the actual internal implementation. Two processors with the same ISA can have vastly different performance based on the microarchitecture. A processor is a very complex device. In recent times, calling it just a processor would be inaccurate: what we have is rather a system-on-a-chip with specialized processing units, an integrated graphics processing unit (GPU), and the processing cores. Traditionally, a processor had a single core and a few levels of small, fast on-chip memory for caching frequently used data, to reduce the high penalty of always going to main memory. The race at that time was to shrink the transistor, thereby increasing the number of transistors on the chip, and to raise the clock frequency so the processor could do more work in the same time. Remember that with an increase in transistors and clock frequency, we effectively increase the power requirement and heat dissipation of the chip. A phenomenon called Dennard scaling described why this worked: with every generation of processors, if the transistor density doubles (in roughly 18 months, per Moore's law, now dead), the circuit becomes about 40% faster, with double the transistors, while the power drawn remains the same. Unfortunately, this did not account for the increase in power density from ever-denser transistors, and Dennard scaling broke down around 2006-07: the 40% improvement in performance was no longer possible without burning the chip up. We hit a 'power wall' of sorts, and this started the shift from higher-frequency single cores to a multi-core approach.
Apart from core count, how the cores are connected, their placement, and the cache hierarchy, there have been several improvements in modern microarchitectures. Smart people figured out that a lot of the hardware in a chip sat unused while a single instruction used just one part of the chip at a time. This gave rise to pipelining: with some extra hardware support, once an instruction gets past a certain stage of execution, another instruction can be scheduled to use the now-idle earlier stage. There is one problem though. Programming languages have a construct called conditional statements (if-then-else), where, based on whether a certain condition evaluates to true or false, we execute one instruction or another. This is a challenge in pipelined processors, since until we know the result of the condition, we cannot proceed. Modern processors solve this by not waiting: they either predict that a certain branch will be taken, based on various algorithms, or simply schedule next an instruction that does not depend on the outcome of the condition. In case of a wrong prediction, the processor reverts and executes the correct instructions, paying some penalty. Sophisticated branch predictors can be right 99% of the time. Interestingly, the branch predictor on AMD's Ryzen uses a simple neural network model. Processors also speculate that a certain instruction will be needed in the future and execute it beforehand, at the risk of additional overhead and a penalty when they get it wrong.
Most modern processors also employ something called out-of-order execution where, as the name suggests, a program is not executed in the sequence it was written; rather, independent instructions are scheduled and executed as hardware becomes available, to keep utilization up, and the results are re-ordered at the end to give the user the illusion of in-order execution. If you have heard the words Meltdown and Spectre in the past few years: these were a suite of hardware security vulnerabilities that affected almost all modern processors, rooted in the way they handle speculation and prediction.
Back to our laptop problem. There were a few quieter releases this week. Microsoft launched the Surface Go 2. Lenovo launched ThinkPads with the new Ryzen 4000 series processors, which beat the MacBooks and almost anything else in raw power. MacBooks are not perfect. They are not the most powerful and do not have the best battery life in 2020, yet they still beat the likes of the XPS (build issues), ThinkPads (screen, speakers), and Surface (meh, and no macOS) in some area or the other.
And yet again, after looking at all available options, I will be buying a Mac.
On COVID contact tracing apps
The debate on COVID contact-tracing apps is a double-edged sword. Argue for them and you are labeled as someone who does not understand the future implications of this privacy disaster; argue against them and you are labeled as someone who does not care about people's lives. Yet most experts point out that using smartphones for contact tracing would not only be a privacy disaster but also an inefficient solution to the problem. Privacy experts such as Bruce Schneier note that the approach produces a lot of false positives (Bluetooth picks up your next-door neighbor even when you are separated by walls that a weak signal gets through) and false negatives (in countries like India, smartphone penetration is still below half the population). Even in developed countries with more than 80% smartphone penetration, getting people to adopt the apps has been difficult, with Singapore reporting less than 20% of the population using its app. Countries like India have made it a criminal offense for certain sections of the population not to have the app installed.
Which brings us to the other question: how secure are these apps, really? You should be very careful with any app that uses both location data and Bluetooth as a proximity sensor and transmits the information to a central location. The approach adopted by Apple and Google has been to provide an API (essentially an interface) that third-party developers can use to build an app; they are most probably not going to build an app themselves. The NHS in the UK has open-sourced its app code on GitHub for the world to audit, and the Singapore government also had plans to open-source its COVID app. India's app possibly has security vulnerabilities, according to a partial audit done by a French security professional; the government has denied any such issues and seems to have no plans to make the source code available.
And then there is China. The Chinese government is using as much data as it can to track people's movements during the pandemic, even tracking financial transactions at shops and supermarkets. The government has already started color-coding citizens based on their risk level and sharing that information with the police, all without explicit consent.
China wants a new Internet
As you might already know, China is an internet island. Anything against the government is met with the Great Firewall, a set of technical and legal measures to block anti-government content. There is no Facebook, Google, or Twitter; instead, China has a homegrown alternative for each of those services. One of the most prominent is WeChat, a super-app where you can do basically anything. Chinese tech companies are required by law to share data with the government, making China one of the first legal surveillance states. Facial recognition has become commodity tech now. Jaywalkers are tracked on traffic cameras and fines are automatically deducted from their WeChat accounts. The Chinese police have caught drug smugglers, murderers, and gang members through facial recognition of their activities in public places. China recently went a step further, using the same technology to track the activities of the Uyghur Muslim minority. Snooping is a valid use case that a lot of startups are working on. Unfortunately, things do not end here; China's ambitions grow far bigger. The government realizes that many Chinese people outside China use WeChat to communicate with family and friends back home. A report published this week by Citizen Lab, an academic research lab based in Canada, revealed that WeChat conversations entirely between non-Chinese users are being used as training data for the government's content-blocking algorithms.
The internet today is mostly self-governed by a set of regulatory bodies, with much more involvement from US-based commercial and non-profit entities than from possibly any other country. China wants to change this. One of the worst things people have discovered in recent times is that during civil unrest, a government can simply cut off the internet for long periods of time. In the last decade, this has happened in Hong Kong, China, the Middle East, Africa, and India. Huawei, along with the Chinese government, is leading efforts to push what they are terming the 'New IP' (IP stands for Internet Protocol, one of the key components of how the internet works seamlessly with so many different services on top of it). Though the details are unclear and the proposal looks like outright garbage without supporting material, it might be that China is just looking to get a feel for how things would play out if it actually came up with something significant.
Other Important Stuff
COVID-related layoffs continued this week, with Airbnb laying off 25% of its staff with an above-average severance package. Uber laid off 3,700 people, and so did Lyft. Zoom acquired Keybase, quite possibly not for Keybase itself but for the team behind it, to make Zoom more secure. Tim Bray, a notable figure in the software world, resigned from Amazon over the way it handled whistleblowers. Things were already bad for Amazon after an internal memo leaked from a meeting attended by Jeff Bezos, showing the company apparently planning a smear campaign against a fired ex-employee who had protested for better warehouse safety. Popcorn Time, the software that lets people stream pirated movies and series from torrents, Netflix-style, had a setback when its GitHub repository was taken down by a DMCA request; Popcorn Time has filed a counterclaim to get the repository reinstated. The repository was fairly popular, with over 3.5k stars. Facebook's SDK, which countless mobile apps including Spotify, TikTok, Tinder, and SoundCloud use for tracking you across apps and websites, among other things, had a bug that crashed many of the iOS apps using it. Dating apps are probably not the brightest in terms of security, with someone finding vulnerabilities enabling attacks such as logging in on your behalf and viewing your private videos and photos. I would not be surprised if there are similar issues in the more popular ones.
The content and coverage of topics should get better over the coming weeks. Looking forward to your feedback. See you next weekend!