Edge Executive Insight – Richard Terrill, VP Strategy & Business Development, Analog Inference – Rising Star of the Year FINALIST

In the lead-up to Edge Computing World, we’re taking some time to speak to key executives from the leading companies. Today we’re talking with Richard Terrill, VP Strategy & Business Development, Analog Inference.


Tell us a bit about yourself – what led you to get involved in the edge computing market and Analog Inference?

Prior to joining Analog Inference I was at Blaize, where we pioneered many of the early best practices around AI and Edge Computing, developing the engagement model to take AI out of the data center and to the Edge. But all digital solutions face fundamental challenges getting the right ratio of work/energy/cost, and it was clear to me that high-volume Edge Computing deployments were stalled, awaiting the arrival of Edge-capable hardware that can deliver on the long-made promises of AI and Edge Computing. That’s what led me to Analog Inference.

I believe that digital load-store/classic von Neumann computing solutions, even with superb implementation and execution, will struggle to meet the econometrics necessary to properly unleash the full potential of the Edge. Analog compute-in-memory is an elegant solution to all the existing impediments, and a natural fit for the task. I want to deliver on the promises our industry has made, and Analog Inference represents the best solution.

What is it you & your company are uniquely bringing to the edge market?

We use true analog compute-in-memory to deliver the best results, whether measured as work/energy, work/cost, or work/energy/cost. The advantage arising from our innovations is multiple orders of magnitude over the rest of the field.

What we uniquely bring to the Edge market is the desired (and elusive) blend of low latency, high compute capacity, low energy, and low cost. These all come as part of the solution suite; it’s not a case of trading them off against each other. This arises because we’ve taken an entirely new approach to the arithmetic engine for the Edge.
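
To make “analog compute-in-memory” concrete, the sketch below models the general principle rather than Analog Inference’s specific design: weights are stored as cell conductances, inputs arrive as voltages, Ohm’s law performs each multiply, and Kirchhoff’s current law sums the products on a shared bitline. The `noise_sigma` parameter is a hypothetical stand-in for analog non-idealities such as drift and thermal noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_dot(conductances, voltages, noise_sigma=0.01):
    """Dot product computed 'in the physics': per-cell current I = G * V
    (Ohm's law), summed on a shared bitline (Kirchhoff's current law),
    plus Gaussian noise standing in for analog non-idealities."""
    currents = conductances * voltages  # each cell performs one multiply
    bitline_current = currents.sum()    # the wire performs the accumulate
    return bitline_current + rng.normal(0.0, noise_sigma)

weights = rng.uniform(-1.0, 1.0, size=256)     # one column of a weight matrix
activations = rng.uniform(0.0, 1.0, size=256)  # input activations as voltages

print(analog_dot(weights, activations))  # noisy analog result
print(weights @ activations)             # exact digital reference
```

The comparison at the end shows the analog result tracking the exact dot product to within the injected noise, which is why the drift and noise mitigation work described under our milestones matters so much.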

Tell us more about the company – what are your advantages compared to others on the market, who is involved, and what are your major milestones so far?

Our advantages against the field

  • Absolute best energy efficiency (100 TOPS/W) – 10-100x better than competitors
  • Massive compute density, with 200 TOPS in our first chip at 2-4W power levels (see the quick arithmetic check after this list)
  • Supports multi-stream high-definition video AI analytics at real-time latencies and edge power levels
  • Lowest system cost by eliminating the overhead of digital systems (SRAM, DRAM, etc.)
  • Against other analog compute-in-memory technologies – we operate our flash cells in the deep subthreshold regime. By manipulating the leakage characteristics of the transistors we can attain the lowest possible energy per unit of work of any technology (a textbook sketch of subthreshold operation follows this list).
  • We can deliver all this using a mature, low-cost 40nm process. This means our standard costs and development costs are far lower than our competitors.
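
For context on the numbers above, here is a quick back-of-the-envelope check; it is a minimal sketch using only the figures claimed in this interview, not any measurement of ours.

```python
# Consistency check of the efficiency claims above. The numbers are taken
# from the interview (200 TOPS at 2-4W, 100 TOPS/W best case).
peak_tops = 200        # claimed compute capacity of the first chip
for watts in (2, 4):   # claimed operating power range
    print(f"{peak_tops} TOPS at {watts} W -> {peak_tops / watts:.0f} TOPS/W")
# 200 TOPS at 2 W yields the 100 TOPS/W headline figure; at 4 W it is 50 TOPS/W.
```

And on the deep-subthreshold point, the sketch below shows the standard textbook model of a transistor operating below threshold; the constants are hypothetical round numbers for illustration, not Analog Inference’s actual cell parameters.

```python
import math

# Minimal textbook subthreshold MOSFET model (illustrative only). Below
# threshold, drain current depends exponentially on gate voltage, so cells
# can compute with nanoamp-scale "leakage" currents, which is what pushes
# energy per operation down.
I0 = 1e-9      # hypothetical off-current prefactor, in amps
n = 1.5        # hypothetical subthreshold slope factor
VT = 0.02585   # thermal voltage kT/q at room temperature, in volts

def subthreshold_current(v_gs):
    """Drain current for a gate-source voltage below threshold."""
    return I0 * math.exp(v_gs / (n * VT))

for v_gs in (0.0, 0.1, 0.2):
    print(f"Vgs = {v_gs:.1f} V -> Id = {subthreshold_current(v_gs):.2e} A")
```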

Our team

  • Our founder Vishal Sarin pioneered 4-bit/cell flash storage at Micron, and the core lessons there are directly applicable to our use of embedded flash for analog compute-in-memory fabrics.
  • Our hardware staff comprises world-class experts who understand the fundamental challenges associated with compute-in-memory and storage cell technology. Their innovations are successfully mitigating the reliability and accuracy issues sometimes associated with analog CIM.
  • We have a stellar software team that has built multiple design tools for novel programmable architectures
  • Our business team has deep experience in the chip and AI industries, including Nervana, Intel, Blaize, Wave Computing, Micron and the like.
  • Firms investing include Khosla Ventures, TDK and Cambium
  • Individual investors include Andy Bechtolsheim, Atiq Raza and Ajit Medhekar 

Milestones

  • We have completed three test chips to improve our core cell technology on key reliability metrics such as cycling & retention, and to ensure reliable productization.
  • We have developed core IP around noise mitigation methods, real-time cell drift monitoring/control, temperature compensation schemes, and other related technology enablement innovations.
  • Our software was ready for customer demonstration (alpha) a year before the chips and boards will be available – a key achievement that demonstrates focus and results, and moves customers toward the POC and evaluation stages.
  • Our first chip tapeout will be in October 2022, with engineering samples in Q2-2023
  • We are already starting to take customer networks into our internal evaluation process (alpha tools), well in advance of hardware availability. This is a testament to the promise of our underlying innovations and the capabilities of our planned products.

How do you see the edge market developing over the next few years?

The Edge silicon market is $50B as estimated by multiple analyst firms (we use Omdia as a primary source, supported by others). This is a massive opportunity and has been forecast for some time. The critical next step is for industry to supply the platforms (hardware and software) that match the econometrics demanded by the edge. At the simplest level this means work/energy.

First, when? To this point in time the industry has seen proof-of-concept deployments that show what is possible, but we have not seen broad deployments and adoption. I am convinced that there will be a network effect for AI, and once ‘sufficient’ AI is widely available at the edge I believe we’ll see a remarkable acceleration of innovation and value creation – a classic inflection point. This will all happen when the right class of edge-compatible hardware is available.

Second, where? AI goes everywhere. We challenge anyone to pick a venue, industry, discipline, vocation, profession, or geography to which AI cannot apply. We assert that we can always find valuable applications for Edge AI. Why? Because AI is all about automating human acumen, wisdom, and experience. And all human endeavors profit from human experience. AI is simply automating it.

Third, who? All segments, but in the order of maximum commercial value and improvement: safety & security, retail, manufacturing, education, mobility. I like to say that we don’t need to do demand creation; the demand is already there. It’s simply a matter of bringing together the necessary components that meet the econometrics demanded, and support deployment at massive scale.

Lastly, how fast? I believe we will see an accelerated tech adoption trend, with a huge lever effect of “keeping up with the other guys”. The first firms to adopt AI at scale and thus generate real commercial value (e.g., retail stores that reduce abandoned carts, schools that identify learning deficits in hours not weeks, airlines that board passengers 5% faster) will reap huge rewards. This applies to top-line and bottom-line business factors and spans the whole enterprise. Once the first mover moves, everyone must follow or perish. I believe that adoption acceleration will outstrip even our most optimistic projections. The key is delivering the hardware that meets the budgets. That’s why I joined Analog Inference, to deliver on that promise.

What are the main trends you see about integrating edge computing into different verticals?

Point #1 – AI adoption requires a coalition. Most AI projects require multiple vendors to fully meet expectations and deliver the potential value: AI accelerator vendors for the key hardware (like Analog Inference), server vendors to supply the host computer, independent AI software vendors (ISVs) that license their application-specific code to run on the hardware, and systems integrators to pull it all together, install it, and operate it.

Point #2 – Privacy is bigger than we realize. The trends toward regulatory oversight of data locality, liability for disclosure of personal information, and customer resistance to losing control of their data are driving compute out of the cloud and to the edge. Edge computing hardware that can handle high-end AI tasks with low latency, and thus decouple from the cloud and data center, will have substantially higher value as a result. This will only increase in importance over time.

Point #3 – programmability broadens adoption. When firms adopt AI, they must undergo some transformation of methods and systems, and they want to amortize that adoption effort as broadly as possible. Fixed-function or narrowly focused specialist hardware will naturally be limited in this regard. Fully general programmable hardware suffers from over-generality, being big, expensive, and slow. The key is the “right amount” of programmability to cover a range of current and next-generation workloads, without falling for the siren song of being ‘all things to all users’.

Point #4 – software importance cannot be overstated. With programmability comes the need to author software, and it is essential that the tools provided for that purpose are efficient, stable, and appealing. Done right, the compiler software may be the only way most customers ever need to “touch” the hardware. Every Edge AI hardware startup claims to have a good handle on software. Few actually do.