> Hands up who knows which company designed the iPhone 6s’ 64-bit chip
Apple did. It's been widely acknowledged that the processor inside the iPhone is a custom design which implements the ARMv8 ISA. See: https://en.wikipedia.org/wiki/Apple_A9
> Furber and his co-designer, Sophie Wilson, had found research from the Berkeley campus of the University of California into a new type of processor: one that simplified the set of instructions it would follow, in order to enable a sleeker, more efficient design. This style of processing was called “reduced instruction set computing”, or Risc, and the Berkeley Risc designs had been put together by just two people, David Patterson and Carlo Sequin.
ARM were hardly the first people to bring a RISC CPU to market: MIPS, SPARC, and POWER are RISC architectures which pre-dated ARM.
> By 2007, Intel had abandoned the excesses of the company’s Pentium line
Pentium is a brand name, which still exists today. What the author is referring to is Intel's "NetBurst" architecture.
> powering the servers at companies like PayPal.
Citation needed. AppliedMicro's CEO said PayPal "has deployed and validated" services based on their X-Gene CPU, but readily admits to having shipped only 10,000 units, which is just peanuts to datacentre guys. Source: http://www.theregister.co.uk/2015/04/29/applied_micro_q4_201...
Overall, I'm happy for ARM's continued success, and I hope that they and the rest of the semiconductor industry will continue to innovate and bring the performance per watt up.
That being said, this Guardian article is crap. It's factually incorrect in places, missing sources for claims, and generally doesn't do anything to actually explain why people supposedly don't know who ARM is.
tl;dr - Some guys in Cambridge, who are fab-less, made a CPU from a RISC design. They license the designs to partners who can add whatever they want around the CPU (or redesign the CPU as long as they conform to the instruction set, as Apple does). The reason they're "not well known" is because they don't sell CPUs themselves, companies like Apple, Qualcomm, MediaTek, etc do.
> ARM were hardly the first people to bring a RISC CPU to market: MIPS, SPARC, and POWER are RISC architectures which pre-dated ARM.
Minor point: ARM-1-based devices were kinda, sorta, brought to market around '85-'86. That's neck-and-neck with MIPS I. Real ARM-2 based machines in '87 were three years ahead of the RS/6000.
Of course, lots of people were developing RISC architectures in the early 80s, in parallel.
Well, from memory Intel sold off the general-purpose ARM CPU division to Marvell back in 2006 or so, though it kept the ARM chips that were specialised for storage or networking, IIRC. So technically Intel has always made ARM chips in one form or another, just not any that slot into a mobile phone.
Based on what another commenter has said, it seems Intel bought the ARM license from DEC, and given that they sold off only the consumer ARM chips, keeping the task-specific ones, it makes sense in a way. It was so they wouldn't have any internal competition or divided focus away from their low-power x86 chips, the Atoms.
It was neck and neck, in my opinion. In the UK, ARM ruled, but in the US, the MIPS Magnum series were well received and widely used. MIPS had a chance to be an ARM, but it was squandered.
Just guessing, but it could have something to do with ARM not doing actual hardware. Instead they license either chip designs or the ISA to anyone willing to pay.
In a way it is reminiscent of the Z80 clones that were all over the place in the 80s.
Early ARM designs have group load/store instructions. This goes against the RISC philosophy. You cannot call ARM a RISC because of that; it is a CISC with a simplified decoder.
I designed chips for a living, despite being a software engineer, including, but not limited to, CPUs. In the design process I had to account for optimizing compilers and possible high-performance and low-energy versions.
So I am very sensitive to misuse of the "RISC" trademark.
An ideal RISC design is very easy to develop. It can be easily tailored to various application domains, including high-performance (OoO) and low-energy. RISC instruction semantics should be simple, with all implementation details hidden. In my opinion, there are only three RISC designs: DEC Alpha, IBM Power and RISC-V. Others contain various design issues that expose their inner cogs and have to be accounted for in designs different from the original one.
(In my MIPS implementation experience, I implemented 33% of the MIPS instructions in the first month and spent another month accounting for the branch delay slot.)
ARM contains three deviations from ideal RISC: a user-visible PC (a read of the PC returns the current instruction's address + 8!), multi-cycle group load/store instructions, and condition codes. All three "features" make it difficult to design something with different performance characteristics than the original ARM.
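To make the first two deviations concrete, here's a minimal sketch of my own (GCC inline assembly; it assumes a classic 32-bit ARM target in ARM state and won't build anywhere else):

    #include <stdint.h>

    /* On 32-bit ARM the program counter is just r15, readable like any
       other register. Thanks to the original 3-stage pipeline, the value
       you read back is the current instruction's address + 8. */
    uint32_t read_pc(void) {
        uint32_t pc;
        __asm__ volatile ("mov %0, pc" : "=r"(pc));
        return pc;
    }

    /* A single LDM ("load multiple") fills several registers from memory
       in one instruction -- inherently multi-cycle, which is exactly the
       group load/store deviation described above. */
    void copy_four(uint32_t *dst, const uint32_t *src) {
        __asm__ volatile ("ldmia %1, {r4-r7}\n\t"
                          "stmia %0, {r4-r7}"
                          : : "r"(dst), "r"(src)
                          : "r4", "r5", "r6", "r7", "memory");
    }

The PC offset bakes the original pipeline depth into the ISA, and a different implementation has to carry that +8 around forever; LDM/STM have to be handled as multi-cycle or cracked into smaller operations.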
> Apple did. It's been widely acknowledged that the processor inside the iPhone is a custom design which implements the ARMv8 ISA
Citation needed. All there is is Anand's mention of a "custom designed implementation of the ARM ISA" on his webpage, without any proof, which was then copied and taken over to Wikipedia.
One non-verified source cannot be trusted, sorry.
Of course the chip is a custom design (as mentioned on Wikipedia), because ARM generally does not provide the peripherals and you can bolt on whatever you like.
Apple may be a big company, but they definitely lack the skill set to do such large chips themselves. The wild rumor is that they have been trying to ditch BRCM for Wi-Fi chips for ages, something that would already have happened if they had the resources to design one themselves. Not to mention the huge chunk of chip in the new 12" MacBook, rumored to come from the Anobit acquisition, which, if you have some knowledge of design, is at least 4 generations behind.
That is of course speculation, much like Anand's claim of a "custom ISA implementation" from the A8 onward, but at least the nm process can be seen to be quite old.
> One non-verified source cannot be trusted, sorry.
I don't work for Apple, and if I did I would value my job over satisfying your need for a citation. But anyway, since I cannot prove or disprove that Apple designed their own chip (which they did), I will point out that Nvidia, a much smaller company, has managed to do just what I claimed. See: https://en.wikipedia.org/wiki/Project_Denver
> Apple may be a big company, but they definitely lack the skill set to do such large chips themselves.
Really? I think their purchase of a processor design company in 2008 speaks to the fact that they do have this talent in house: https://en.wikipedia.org/wiki/P.A._Semi
PA Semi designed their own Power Architecture CPU from scratch. Don't tell me these people (who work for Apple now) lack the capabilities and resources to design a processor which implements the ARM ISA.
Either provide a citation that proves that Apple hasn't designed their own core implementing the ARM ISA, or stop demanding one for my claim, which has a lot more evidence behind it.
The fact that they have dual-sourced CPUs from TSMC and Samsung, which are on different process nodes, thus requiring a different layout and mask, tells me that Apple's hardware engineers absolutely know what they're doing, and can design the internals of a chip.
I honestly don't know why I put so much time into refuting this. Many other people on the internet think that Apple is designing their own core inside the iPhone; how else could a dual-core chip be smoking the living shit out of Samsung's and Qualcomm's quad- and octo-core processors?
There is also an interesting story of the way they were propelled to success, pre-smartphone. I think it went something like this: Apple needed a chip for the Newton, and that begat ARM as an Acorn spin-off. Then Nokia picked it up for GSM phones that needed a proper CPU. Nokia would have been enough to take ARM to a billion units per year before the iPhone. Apple had strict power requirements, and Nokia provided continued focus on power management. Is that about right?
It wasn't clear if Annapurna was still going to make chips available to other companies. Their premise certainly is very interesting, and it would be a bummer if they became Amazon-specific.
Another related British company which has become very successful without anyone noticing is Imagination Technologies (formerly Videologic).
They had a successful graphics card business during the 90s, but with the introduction of 3D graphics accelerators their product (PowerVR), whilst semi-successful, began to struggle against 3dfx and Nvidia's TNT and GeForce ranges of graphics cards.
Eventually they pulled out of the standalone graphics market and pottered around producing speakers and radios. They had a few successes, like winning the Sega Dreamcast bid from 3dfx, but I believe it wasn't until phones needed powerful yet low-power graphics chips that PowerVR's very efficient rendering technology pushed Imagination back to the fore. Their graphics chips nowadays can be found powering most smartphones and tablets.
Very interesting company and a very interesting past.
PowerVR was an interesting product. Much like the competing 3dfx cards, it was a "pass-through" design, in that it only did the 3D part and you needed a video card alongside it for everything else.
The difference between PowerVR and 3dfx was that while the latter used a short VGA cable that went from the video card to the 3dfx card, and then you plugged the screen into the 3dfx, the PowerVR passed everything around on the PCI bus (very similar to what Nvidia Optimus is doing on laptops).
But then Nvidia came to market with the TNT series that did everything on a single board, and they could render in 32-bit colour while 3dfx was limited to 16-bit, and ATI got involved where before they had only done 2D stuff.
BTW, 3dfx introduced SLI to the gaming world. This while still using the physical pass through VGA cable. Made for quite the cable salad.
I wouldn't call Imagination Technologies a successful company: their IP ships in billions of SoCs, but their yearly revenue is below 200M and they operate at break-even or a loss.
Imagination seems to be a case study of a company whose IP is a cornerstone of many consumer devices but which somehow never managed to capitalize on it.
ARM is a very good example of how to spin off your IP and make it successful even when your actual business (Acorn) is going bust.
The article is interesting and I think that the title and the closing sentence aren't doing it justice.
It is the default for any company that produces non-end-user electronics to stay in the background. Most people don't know who made their RAM chips, or the capacitors on their motherboards, or even who built their motherboard. How many people even on HN know the brand of their laptop's HDD or RAM?
Intel is the exception, in that they recognized early the value of a brand name and did everything they could to make theirs known. It is hard for newer generations to understand, but at the dawn of the PC era no one cared much about the CPU brand. Intel gave discounts to manufacturers who used the “intel inside” sticker along with many other promotional actions in order for their product to become important in the eyes of the consumer.
I'm a fan of ARM for its power/performance characteristics, and for the market competition it's bringing to Intel (who needs it).
One concern I have is that ARM devices are, on the whole, more locked down than Intel ones. Look at Microsoft and its ARM tablet: locked down, Win 8 App Store only. Android is largely locked down, unless you jailbreak, or get a development/standalone PC board or system.
The more prominent ARM devices do not offer the flexibility and openness your average Intel PC does.
Legacy software is a big part of what I see as keeping that door open. This latest round of Intel PCs is notable for the lack of documentation and/or options to configure them to boot other things. [1] Not that it can't be done, but it's painfully obvious the only reason it can be done is that the vendor left the option there.
They don't have to, and I get the feeling nobody really wants to. They just feel it makes sense right now for legacy software reasons. At some point, that equation will change...
[1] - Case in point, a recent HP consumer-grade laptop. Got it, and it had Windows 8 on it, which at the time didn't make any sense for the use case it was purchased for. There was no meaningful documentation on how to enter the BIOS to set "LEGACY" mode. I literally poked around on the keys until I found it. Scary. The idea of "they didn't even have to provide that" hit home right then.
With Intel skipping a 'tick' and fabs like Samsung and TSMC catching up in process, ARM-based CPUs are competitive in performance with the low end of Intel's desktop lines up to Core i5 - and use less power.
There seems to be a window for ARM to make inroads into servers, in addition to low-end laptops like Chromebooks.
The trend of mobile devices replacing desktops for more and more tasks (> 50% of emails read on mobile, only 10% of people desktop-only) bodes poorly for Wintel.
One could foresee an era where one can add a keyboard and big screen to a mobile device like an iPad Pro, and most folks no longer use desktops at all. Phones migrate up to replace desktops, PCs migrate up to the cloud.
A lot of the "performance" of most of these SoCs actually comes from the GPU/DSP portion of the SoC rather than the CPU proper.
So far it seems that Intel's bet was correct: it will take them just as much time to reduce x86 (Skylake/Core M, Bay Trail/Cherry Trail/Willow Trail) power consumption to ARM SoC levels as it will take ARM SoCs to come close to x86 performance, and Intel will still win in the end.
Keep in mind that in 2007-2008 Intel was very seriously considering licensing ARM and other technologies to compete in the ultra portable and mobile markets and we are all better off that it didn't do so.
Intel now has offerings which are better than or comparable to ARM-based SoCs, without fragmenting the ARM ecosystem even further, and it has another avenue of technologies and intellectual property to keep the competition going: an alternative to ARM/RISC, with both the ability to streamline a transition and a fallback option, which is always needed.
An all-ARM ecosystem is just as bad as a monopolized x86 one (or even worse, since unlike x86 it doesn't guarantee compatibility).
We should also be quite thankful to ARM and its users, as they have been the driving force behind much of Intel's work lately; AMD has not really offered competition since the old Athlon 64 days.
The new Core m7 offers 2 cores at a boost clock of 3.1GHz plus a 1GHz GPU, and beats every ARM SoC out there in terms of performance, especially in places where it actually counts, while having a TDP of 4.5W. That is comparable to high-end SoCs like Apple's A9X/A8X, and considerably lower than Nvidia's high-end offering, although that one blows everything out of the water when it comes to graphical performance, as it has desktop GPU cores in it.
Ars isn't loading for me. Yes, the A9X is a monster chip, especially as far as graphics and synthetic benchmarks go, but try real-world applications (quite hard to near impossible on Apple devices due to the closed ecosystem), including web server performance, video compression, throttling and many more aspects, and you'll see that's not the case.
It's very competitive with Core M devices for sure, but sorry, I'm not sold on it being competitive against Core i devices on any level besides maybe graphical performance so far.
As for Intel making Apple's SoCs: and? Intel is selling modems to Apple, and it wants to fab SoCs for them as well.
Since Apple is fabless and Intel has the most advanced semiconductor production line right now, having nailed 14nm and pushing for 10nm next year for a 2017 launch, I don't see this as indicative of anything. Intel tested the water with ARM almost 8 years ago and decided to continue with x86, and we've seen quite interesting things coming from them: Core M, Xeon D, and even things like Intel Edison, which showed that x86 can be scaled to pretty much every use at this point while still being very interesting and competitive in terms of price, performance, and power consumption.
We've been hearing this gloom and doom about "Wintel" for almost 2 decades now: Linux is going to beat them; no wait, Apple and PowerPC; no, ARM. Sigh, it's getting old.
Intel isn't going anywhere soon; it has enough resources to push through, and its technology, and where it can take it, does not so far seem to be obsolete on any scale.
Well, Linux did beat them in two important segments, server and mobile (if you count Android as Linux). They still have enough market share for a comeback though.
> Keep in mind that in 2007-2008 Intel was very seriously considering licensing ARM and other technologies to compete in the ultra portable and mobile markets and we are all better off that it didn't do so.
Actually, Intel sold a range of ARM chips from 1998-2006. It gained the DEC StrongARM design team when Compaq bought DEC and Compaq didn't want it. Intel then sold the XScale range of ARM chips, before deciding that x86-compatibility was the right way to go. That decision led to the Atom range.
Intel sold its ARM operation to Marvell for $600 million.
Intel still has an architecture license so it can design ARM chips if it wants.
You're only looking at 32-bit ARM CPUs. 64-bit ARM is quite literally a different beast, and (for example) the 48-core Cavium that I have access to at work[1] is easily comparable to an Intel Xeon.
The server-focussed SoCs shipping currently (none of which are listed in those benchmark links) can have much better performance than Atom. There's a broad spectrum of ARM cores for different markets, and different optimisations that can be applied in SoC design to target a part at different markets.
The OP was talking about mobile, not server, SoCs, but fine...
The server SoCs also have a power consumption several times greater.
So if that's what you want, let's look at the Xeon D: performance-wise they are very competitive with Intel's discrete CPUs, and beat the AppliedMicro X-Gene 2.4 quite harshly...
http://www.anandtech.com/show/9185/intel-xeon-d-review-perfo...
The X-Gene 1 was a development platform designed to work out the software bugs (and it's also obsolete now, since it was first produced on 40nm in 2013). Obviously if you compare the latest Intel SoC with some 2+ year old dev kit, the latest Intel looks great! They should compare Xeon with Cavium ThunderX, X-Gene 2/3, AMD Seattle, Qualcomm's latest server SoC or the HiSilicon D0x.
Well, the Xeon D-1540 sits just below the E5 in most benchmarks. Unfortunately I can't conjure benchmarks out of thin air, but you don't seem to be able to do so either (and by all accounts Cavium isn't shipping anything yet; all I could find are press releases saying someone will be building servers based on their SoC, but nothing else).
I wasn't comparing anything; AnandTech was. They compared an available Intel Xeon D SoC against a commercially available HP micro-server that was launched 6 months prior, so either HP chose to use an "obsolete development platform that was designed to work out software bugs" or that was what was available to them at the time.
Again, I wasn't saying that ARM CPUs are bad; I was saying that anyone claiming they can currently beat Intel on either price/performance or performance/watt across the board is delusional.
Delusional?? 12 billion ARM processors were sold in 2014. Intel sold 400 million. I wouldn't bet against those volumes if ARM partners become serious about the server market. It's quite clear that 64 bit ARM processors are good enough to use for some servers, with some very high end offerings coming along soon as I've posted about in other comments. Disruptive innovations happen from below.
We (Red Hat) have one for testing, but I can only say limited things about it[1]. So I guess call Cavium and have a very solid business case for it :-) It seems they may be available for sale, but I have no idea how much they cost or what their availability is.
It's not just the latest Intel SoC, it's a Xeon on a bleeding-edge process node with cutthroat pricing, both of which are rather uncommon for Xeons. It's clear that ARM has got Intel spooked in the low-end server arena.
Then how come nobody in the Android ecosystem is able to build a successful device with Intel? Even though Qualcomm has had a very bad ARM chip for the last 2 years and Apple is beating their pants off, nobody, from Sony to LG to Samsung, is willing to go with Intel.
I think it's just the extra power options ARM has. Scaling ARM clock speed just means writing to the CPU's PLL register. Power usage is pretty much linear with frequency, and being able to go from a 2GHz quad-core down to a single core at 3MHz via the OS is excellent. x86 now has dynamic core and frequency scaling, but it's not OS-driven; the CPU does it on its own. Understandable, since you can't just add such features to the ever-compatible x86. So x86 can't be forced to scale down by the OS, which is helpful on a phone (screen off = user won't notice apps running slow).
Nothing inherent to the power/performance of the CPU itself; it's just that ARM can and does get extra power-saving features tacked on as required.
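For the OS-driven part, on Linux this is just the cpufreq sysfs interface. A minimal sketch, assuming the standard sysfs paths and that the "userspace" governor is enabled for the core in question (frequencies are in kHz; error handling kept to a bare minimum):

    #include <stdio.h>

    /* Ask the kernel to pin a given core to a given frequency. Only
       works when that core's cpufreq governor is set to "userspace". */
    int set_cpu_khz(int cpu, long khz) {
        char path[128];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_setspeed",
                 cpu);
        FILE *f = fopen(path, "w");
        if (!f) return -1;   /* no such CPU, or wrong governor */
        fprintf(f, "%ld\n", khz);
        fclose(f);
        return 0;
    }

Whether the hardware honours the full range is up to the SoC; the point is simply that the decision sits in the OS rather than in the CPU's own power-management logic.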
PayPal using ARM servers? I didn't know that.
I would actually love someone to explain to me why an ARM server is an attractive option. When Intel launched the recent Xeon D, I thought there wasn't really any incentive left to go with an ARM CPU server.
As with most things I think the success of ARM has less to do with its chips or architecture and more to do with its business model and the competition.
For decades the combined power of Intel's volume and Windows' ubiquity kept a huge amount of resources dedicated to that platform. SPARC, M68K, NS32, VAX, PA-RISC, even Itanium were crushed under the unrelenting focus by third parties on building tools, software, and systems around x86 and later AMD64 architecture chips.
What is fascinating is that Intel got into that position by being open: there were no fewer than 12 licensees for its 8086 design, and people had supplanted "expensive, proprietary lock-in" type architectures with more open and cheaper chips. It was with the emergence of the PC market, and the great Chip Recession of 1984, that Intel decided if it was going to stay a chip maker, it had to be the best source of its dominant computer chips. I was at Intel at the time, and it shifted from partnering to competing with the same people who had licensed its chips, with the intent of "reclaiming" the market for CPU chips for itself.
You have to realize that at the time the bottom had fallen out of the market, and things like EPROMs and DRAM (both of which Intel made) were being sold on the grey market at below-market costs as stocks from bankrupt computer companies made it into the wild. Further, competitors like Oki Semiconductor were making better versions of the same chips (lower power, faster clock rates). Intel still had a manufacturing advantage, but it could not survive if it couldn't make the margins on its chips hold. It dumped all of its unproductive lines, wrapped patents and licenses around all of its core chips, and then embarked on a long-term strategy to kill anyone who wouldn't buy their chips from Intel at the prices that Intel demanded.
We can see they were remarkably successful at that, and a series of CEOs have presided over a manufacturing powerhouse that was funded by an unassailable capture not only of software developers but of system OEMs as well. They fended off a number of anti-trust lawsuits, and delicately wove their way between former partners like Compaq who were now lying on the ground, mortally wounded.
ARM was playing in the embedded space, dominated by the 8051 (an Intel chip), where Intel played the licensing card (just like ARM), licensing its architecture to others who would make their own versions of the chips. As a licensing play it ensured those partners would never move "up market" into the desktop space and threaten the cash cow that was x86.
The relentless pace of putting more transistors into less space drove an interesting problem for ARM. When you get a process shrink you can do one of two things: you can cut your costs (more dies per wafer), or you can keep your costs about the same and increase features (more transistors per die). And the truth is you always did a bit of both. But the challenge with chips is that their macro-scale parts (the pin pads, for example) really couldn't shrink. So you became "pad limited": the fraction of die area dedicated to the pads (which you connect external wires to), as opposed to transistors, could not be allowed to grow past the point where most of your wafer was "pad". If it did, your costs flipped, and your expensive manufacturing process was producing wafers of mostly pads, not utilizing its capabilities. At the Microprocessor Forum in 2001 the keynote suggested that spending anything more than 10% of your silicon budget on pads was too much: 90+% of your die had to be functional logic or the shrink just didn't make sense.
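A toy model makes the pad-limit arithmetic obvious. The numbers below are purely illustrative (mine, not from the talk): a fixed-width pad ring around a logic core whose edge shrinks ~0.7x per process node:

    #include <stdio.h>

    int main(void) {
        double pad_ring_mm = 0.5;   /* pad ring width: roughly fixed, since
                                       bond pads don't shrink with the process */
        double core_mm = 8.0;       /* logic core edge length at the old node */
        for (int node = 0; node < 4; node++) {
            double die_mm   = core_mm + 2 * pad_ring_mm;
            double pad_frac = 1.0 - (core_mm * core_mm) / (die_mm * die_mm);
            printf("core %4.2f mm/side -> pads are %4.1f%% of the die\n",
                   core_mm, 100 * pad_frac);
            core_mm *= 0.7;         /* ~0.7x linear shrink per process node */
        }
        return 0;
    }

Run it and the pad share climbs from about 21% to about 46% in three shrinks: shrink the logic without adding anything, and the 10% budget mentioned above is blown almost immediately.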
The effect of that was that chips ARM designed really had to do more stuff or they were not going to be cost effective on any silicon process with small feature sizes. And the simplest choice is to add more "big processor" features or additional peripherals.
So we had an explosion of "system on chip" products with all sorts of peripherals that continues to this day. And the process feature size keeps getting smaller, and the stuff added keeps growing. The ARM core was so small it could accommodate more peripherals on the same die; that made it cost effective, and that made it a good choice for phones, which needed long battery life but low cost. The age of phones put everything except the radios on chips (radios, being like modems in that they differ for every country, were not cost effective to add to the chip until software-defined radio (SDR) became a thing). And the success as a phone platform pushed the need for tools, and the need for tools got more of the computer ecosystem focussed on building things for the ARM instruction set.
At that point step two became inevitable. Phones got better and better and more computer-like; they needed more and more of the things that "desktop" type computers need. You have a supplier (ARM) which is not trying to protect an entrenched business, basically doing all it can to widen its markets. And a company like Apple, which wasn't trying to protect its desktop/laptop market share, pushing the architecture as far as it can. More tools, more focus, more investment from others to support it, and like a fire that starts as a glowing ember near a convenient source of tinder, the blaze grows until the effects of the fire are creating its own wind and allowing it to grow bigger and stronger. Even after Intel woke up to the fact that the forest around their x86 architecture was on fire, I don't think they had enough time to put it out.
So here we are with ARM chips which are comparable in software support and feature set to Intel's low-end desktop CPUs. But without the Intel "tax", which is the extra margin Intel could demand as the only player, and immune to Intel's ability to attack via patents or license shenanigans. Intel is in full-on defense: paying tablet vendors like Lenovo to use its chips instead of ARM's, supporting the cost of building out its own IoT infrastructure with Galileo, and doing all it can to keep ARM out of its castle, the data center. Like DEC and its VAX line, or Sun and its SPARC line, they are doomed.
Looking at the performance of the iPad Pro, it is pretty clear you can build a Chromebook or a laptop that would meet the needs of the mass market with an ARM architecture machine. And because ARM licensees can add features anywhere in the architecture, including places like the frontside bus[1], which is tightly controlled space in x86 land, you will be able to provide features faster than x86 OEMs can convince Intel they need them. And that will change things in a pretty profound (and I think positive) way. Not the least of which might be having the opportunity to buy a laptop that isn't pre-backdoored by the chip manufacturer with its SMM.
[1] Literally if you buy a bus analyzer (a sophisticated logic analyzer) from Agilent or Tektronix and hook it to the Intel frontside bus, it won't display the signals until you enter the NDA # you got from Intel! That is pretty tightly controlled.
The success of ARM's strategy is based on them being an impartial supplier of designs that facilitate low power consumption and some degree of interoperability at the assembler level.
If anyone buys it, they kill the goose that lays the golden egg. All those corporations run their own vertically integrated stack to some extent -- if you were Apple, would you be willing to trust an ARM design licensed from MS or Google? (Or Intel, when the whole point of Apple's CPU strategy going back 22-25 years is to be mostly CPU-independent after getting burned twice in a row by MC680x0 and then PowerPC)? If you were Samsung, would you license an ARM design from Apple-owned ARM? And so on.
If you're apple, and you're licensing ARM designs, can you risk Google or Microsoft, or Oracle buying ARM? Can you risk them changing the licensing terms, or stop licensing altogether?
How come none of them want to be the first to buy, securing their rights to ARM, and instead enjoy the risk of a competitor buying ARM?
Given ARM's dominance of the mobile processor industry, any large tech company buying them would invite the kind of close scrutiny from anti-trust authorities that none of the major tech companies are particularly keen to attract.
Since ARM is happy to license their designs to all & sundry, what would be the benefit to any individual company of buying them out? Little to none: Even if you want to invest in your own CPUs and add your own special sauce (like Apple) then you can do that much more cheaply than the cost of buying out the company by simply buying the appropriate licenses. You can’t buy them and lock out your competitors because the anti-trust regulators would have a fit. I’m sure every large tech company has looked at the pros & cons and decided that the status quo is by far the best option for them & that the same arguments apply to their competitors.
Intel would attract antitrust actions, and would also rather continue to fight than admit failure by buying in tech.
Apple have historically changed processors a lot, and their hardware side is notoriously a cloud of outsourced factories and not owned by Apple; something of a virtual organisation.
Google have shown little interest in chip design or indeed hardware; remember, Android is a Java VM running on a free operating system on various OEM hardware.
Apple did have a very significant stake in ARM - 20% from memory. It has been argued that the investment saved Apple, as they were able to sell the shares for a significant return at a time when Apple was struggling financially.
Probably no one thought it was worth it, right up until everyone (particularly shareholders) thought that if they held out for X years it would be a company right up there with IBM, Intel, Apple, MS, Google at the top of the tech food chain.
I think "without anyone really noticing" should really mean "without anyone in the general public really noticing". Competitors and people working in the chip industry have probably been well aware of the ARM architecture and its rise for a long time. In addition, some factors that are responsible for the huge success of this architecture have probably been historical accidents as well, in the sense that a different architecture might have been capable of taking ARMs place, but the adoption of ARM by some large tech companies and the explosion of the smartphone market made things move really fast in favor of that platform.
In general I think that the press and the general public only becomes aware of these things long after they have reached a dominant market position and are employed almost universally. It's probably safe to say that even now many companies that are almost unknown to the general public are working on technologies and products that will change entire markets in the future.
> I think "without anyone really noticing" should really mean "without anyone in the general public really noticing".
I think that's implied when a mainstream news outlet writes about it, and not some tech magazine.
> In general I think that the press and the general public only becomes aware of these things long after they have reached a dominant market position and are employed almost universally.
which seems evident, here. ARM becomes dominant -> Guardian writes about it -> the general public learns about it.
Intel has attempted brand awareness for years with its "Intel Inside" campaign: stickers, ads, leaflets... They do strong co-marketing campaigns with PC OEMs to make sure their brand is pushed at end customers. In the most unscientific statistic ever, my mother knows what Intel is, but has never heard of ARM. The bad news for Intel is that nobody ever says "does this device have Intel? If not I'm reluctant to buy", which is possibly what they were aiming at.
It does matter to them, though, because if the computing market turns towards ARM tablets (tablets seem to be what Microsoft is pushing), and they get one without realizing it, they won't be able to run any of the stuff they used to be able to on Windows. Windows has a long history of backwards compatibility, and if ARM becomes dominant, that compatibility will end up being pretty much pointless. CPU architecture isn't a huge difference when the architectures are compatible, but this time it is a big deal.
Still, it's not something the GenPop will know or care about. From their POV, it's a product issue - this tablet does or does not run some software. It's easy to paper over 99% of those differences by... writing new software. I mean, people don't complain that iPad doesn't run vanilla MS Office.
Which boils down to the old "consumers view computers as black boxes" argument. The question is whether we should care about that or not, and whether the pain of not being able to run GenPop's favorite software grows too big. Being locked into the respective device's app store hides that pain to a degree. And whenever it becomes noticeable, GenPop would rather consider switching to a different vendor than consider the tech in question. As long as people put band-aids around the architecture, I'm not sure why any consumer would care about the underlying hardware in question.
That's because the computer as a mix of hardware and software is something fairly new. The home/personal computer didn't really happen until the 80s. Before then every device had a defined purpose, and if there was any software it lived as "firmware" inside the hardware.
Damn it, I still recall when people got hot and bothered about getting updates for their Nokia Series 40 phones over the air. Before then you only got an update if the phone was obviously broken, and to do so you brought it to the service desk of a nearby store or some such.
Personally I like using my Windows laptop-tablet hybrid for old games from GoG, and I'm sure a fair amount of other people run old programs that aren't going to be updated for new architectures on these.
> In general I think that the press and the general public only becomes aware of these things long after they have reached a dominant market position and are employed almost universally.
The exception is when it affects the journalists directly, as with how the media followed Apple closely even when they were a bit player in the computer world. This is because media production had, for better or worse, de facto standardized on Apple.
Thus when Apple branched out into the consumer electronics world, they wrote about it, even though Apple was hardly the first, and their offering was very much wedded to the Mac.
I recommend watching "Micro Men", a drama/documentary depicting the early days of Acorn and the battle with Sinclair for producing the BBC's official microcomputer.