Edge Executive Interview – Carl Moberg, Avassa
In the lead-up to Edge Computing World, we’re taking some time to speak to key executives from the leading companies supporting the show. Today we’re talking to Carl Moberg, CTO and Co-Founder at Avassa.
Tell us a bit about yourself – what led you to get involved in the edge computing market and Avassa?
CM: I started my working life in network operations, back when you could be both the postmaster managing sendmail farms and the person responsible for the BGP peering setups at the same time. We ran a fairly large ISP at the time with a limited team, so we naturally built automation into our ways of working from the get-go. While we had server and application automation pretty well solved, we struggled with automating the network. So we started using the same tools for networking as we did for servers, and that approach really resonated with me. This observation put me on an automation and orchestration trajectory that took me to a couple of startups writing software for networking vendors and service providers to make networks programmable, and I eventually ended up at Cisco.
At Cisco I closely observed how enterprises were building their automation and orchestration software around an application-centric worldview, i.e. one where the abstractions are all built around the needs of applications, as opposed to the historically infrastructure-centric focus.
We also saw a rapidly rising need for running applications in distributed edge clouds. And there’s a myriad of reasons for that, including, but not limited to the need for autonomy, resiliency, privacy, regulations, predictable latency, and the explosion of data created at the edge.
Avassa was started as a response to the question: what would an awesome automation and orchestration system for applications running across distributed edge clouds look like?
What is it you & your company are uniquely bringing to the edge market?
CM: The team brings lots of hands-on experience in building software systems for managing distributed resources of various kinds. Our engineering roots are in distributed systems, compilers and language design. This makes us somewhat uniquely experienced in taking on hard tasks in emerging problem spaces where a distributed and model-driven approach fits.
And we take on what we believe to be fundamental challenges around what efficient abstractions look like for simple, scalable and secure lifecycle management and observability of containerized applications running in many (hundreds, thousands or more) locations where each location has limited compute capacity.
Tell us more about the company – what are your advantages compared to others on the market, who is involved, and what are your major milestones so far?
CM: The fundamental advantage of Avassa is that we offer a comprehensive solution that provides a critical set of the features needed for simple and secure management of distributed edge clouds. We do this in a way that allows teams to reuse existing best practices and tools that they already have in place for their on-prem or public cloud operations.
This is in contrast with other solutions that try to reuse abstractions from the public cloud that are ill-fitted for distributed environments, and that do so piecemeal, leaving users with massive integration and security challenges.
Our product is ready and available for trials and production use. It is delivered as-a-service and as an on-prem solution.
We are working with a wide variety of users, ranging from retail through healthcare to mobile operators. We are also engaging with various kinds of technology and solution partners to make sure that users taking their first steps towards deploying a distributed edge cloud have the support they need.
How do you see the edge market developing over the next few years?
CM: The sense we get from conversations with users is that operating distributed edge clouds is hard. This is, from our point of view, a result of trying to apply the relatively complex tools designed for the public cloud to the distributed domain. Tools built to manage large applications running in very few locations are vastly different from what is needed to run small applications in many locations.
As the first wave of experimentation using public cloud and infrastructure-centric tools winds down, we see users come out with an experience-based idea of what more appropriate solutions look like.
So I believe users will start forming stronger opinions about what great looks like in the edge market over the coming years. And it will be a mix of existing technologies, like containerized applications, with new insights, like what observability should look like in a distributed system.
What are the main trends you see about integrating edge computing into different verticals?
CM: The main trend we see is how general ideas around best practices are eroding the vertical-specific legacy where each vertical had to invent its own protocols, form factors and tooling. It’s fair to say that some general technologies around packaging (container images), deployment (container runtimes), and monitoring and observability (OpenTelemetry) have proven themselves universally applicable and will provide a common substrate across verticals.
This will in turn significantly lower the barrier to entry for application development in slow-moving verticals, which will lead to smaller and faster-moving application companies gaining the upper hand against incumbents with incentives to keep things “the old way”. Nothing really new under the sun, but it will happen at the edge.