The International Solid-State Circuits Conference kicks off this weekend with a series of tutorials starting Saturday. It was previously suggested that coverage of a top-tier conference like ISSCC is most useful once its technology areas are distilled down to a few of the most currently topical:
- System-on-chip (SoC)
- Machine Learning / Artificial Intelligence (big data)
- Wireless Communications
- Wireline Communications
Of these, you could easily argue that SoC has pride of place at ISSCC. It is one of those much-anticipated landmark conferences, an annual pilgrimage early each year for many engineers, especially circuit designers and chip architects. The trip is just a little shorter this year, even for San Francisco Bay Area residents: a quick jaunt from the breakfast table to the home office.
You might object that reducing the conference to just a few topic areas puts the wrong emphasis on such a hard-core technical event, where a great many papers present very specific circuit designs in diverse technical fields. But those detailed circuit-block designs are critical to one or more of the top-level fields under consideration and are readily categorized under the umbrellas above.
Furthermore, anyone mining this conference in order to present it to a wider audience is unlikely to have anything meaningful to say about a paper like A 365fsrms-Jitter and −63dBc-Fractional Spur 5.3GHz-Ring-DCO-Based Fractional-N DPLL Using a DTC Second/Third-Order Nonlinearity Cancelation and a Probability-Density-Shaping ΔΣM. It goes without saying that this and similar papers are significant achievements, well worth reading for anyone closely connected to the field, but personally, I won't be able to extract anything useful from it and would be utterly unable to convey it to readers. You will have to head to the message boards to discuss that one.
As for SoC, there are three invited papers at ISSCC 2021, one from each of Microsoft, Baidu, and Nvidia. These were captured in Highlighted Chip Releases: Modern SoC Designs in the advanced program, a chance for the ISSCC committee to recognize current products deemed to be outstanding.
Gaming was not on my list, but it is first on ISSCC’s. The next Xbox chip is interesting, but game consoles are not generating the interest they once did (although they are in short supply these days).
A couple of other chips fit the bill laid out earlier since they are both designed for the datacenter. (In other words, these are hitting another top-level category.)
Baidu made the organizers’ list for its Kunlun AI processor. As I understand from Marvel’s Iron Fist (that’s Marvel, not Marvell, another SoC company), Kunlun is a mythical place, as AI was not that long ago.
AI is just getting started in earnest. Let’s hope the Kunlun’s magic is all the good kind. According to the press kit, the Kunlun server is already deployed for Baidu web-search inference tasks. Personally, I like it better when the machines listen to my instructions rather than the other way around. I want my searches to reflect what I intended, just as I want my iPhone to keep what I actually type. I insist that I do better than autocorrect does.
I might be left behind in this opinion, though, as inference is a growing market. According to the McKinsey report cited in the ISSCC press kit, inference in the data center is expected to expand to a $10 billion business by 2025.
Rounding out the three highlighted chip releases is the Nvidia A100 datacenter GPU, based on the Ampere architecture. The A100 likely takes the prize for the largest SoC of the conference at 826 mm². Nvidia reports a 1521 times improvement over its previous datacenter GPU for “a range of scientific applications” and up to 2.5 times on inference tasks such as speech recognition.
Nvidia’s chip is the most significant of the three. We are talking about SoC, after all, and it is the monster of the category. That gets my vote, but performance increases of three orders of magnitude or more also deserve honorable mention.
The other angle to these topic areas is to address important industry trends regardless of the level of activity at the conference. In other words, let’s also identify areas that might be lacking in the lineup.
The ISSCC did not cover automotive as a separate topic. There was a smattering of submissions addressing technical challenges in power conversion as applied to automotive, but those are not addressed here.
I found only one paper specifically addressing driver assistance or the evolution toward autonomous vehicles. In one of the earlier sessions, Renesas will present its autonomous-driving chip. It appears in the processors session and also qualifies as an SoC. The device achieves 60.4 TOPS at 13.8 TOPS/W, air-cooled, in a 12nm process, meeting the requirements for Level-3 autonomous driving (that is, conditional automation).
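As a quick sanity check on those figures (a back-of-the-envelope sketch of my own, not a calculation from the paper), the reported throughput and efficiency together imply the chip’s rough power draw:

```python
# Back-of-the-envelope check on the Renesas numbers quoted above.
# Throughput divided by efficiency gives the implied power draw,
# which should be low enough to square with the air-cooled claim.
throughput_tops = 60.4        # reported throughput, TOPS
efficiency_tops_per_w = 13.8  # reported efficiency, TOPS/W

implied_power_w = throughput_tops / efficiency_tops_per_w
print(f"Implied power: {implied_power_w:.1f} W")  # roughly 4.4 W
```

A few watts is well within passive-cooling territory, which is consistent with the air-cooled claim.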
This paper is interesting because Renesas approaches autonomous driving from the point of view of a company with an automotive-electronics pedigree, adapting computing innovations to its historical perspective on the automotive business. On the other side are the high-performance chip companies leveraging their computing expertise to enter the automotive market. Even car makers want a piece of the car-chip business. It should be fun to see how it all plays out.
Automotive technology, AI, SoC and a host of other topics come together at ISSCC for a worldwide audience of virtual attendees. Logistically, we need only worry about whether we can receive the streaming data without interruption. Come to think of it, there will be quite a few papers at ISSCC to address those issues too. Something in the communications sessions this year might improve future virtual events before any return to normal.