News & Interviews

Edge Executive Insight – Dr. Maurits Kaptein, CEO and Co-founder, Scailable – Innovator of the Year FINALIST

In the lead-up to Edge Computing World, we’re taking some time to speak to key executives from the leading companies. Today we’re talking with Dr. Maurits Kaptein, CEO and co-founder of Scailable.


Tell us a bit about yourself – what led you to get involved in the edge computing market and Scailable?

After more than a decade of developing AI and statistical learning methods in academia, Robin van Emden and I started Scailable because we were struck by how inefficiently AI/ML models and pipelines are often deployed. This inefficiency leads to high costs and high energy consumption, and in many cases drives developers of AI/ML models to move data to the cloud, with inherent privacy and security concerns. Yes, training AI models requires a lot of data and a lot of computing power (GPUs, etc.). When using trained models (inference), however, the situation is quite different: models can often be highly optimized, and low-level implementations tailored to specific edge hardware make it possible to run fairly complex models on fairly “small” devices (i.e., those with relatively little CPU). This saves energy and costs, and ensures privacy.

We set out to make highly efficient deployment of trained AI/ML models as simple as possible: essentially, we want to enable data scientists (those who are able to train models) to efficiently deploy AI/ML pipelines to selected hardware without any additional engineering on the edge device. We currently have a patented process for transforming any AI/ML pipeline into a highly efficient process on a selected target device, securely decoupled from the surrounding processes on that device.

What is it you & your company are uniquely bringing to the edge market?

We bring fully modular, highly optimized, no-code edge AI/ML deployment. Any AI/ML pipeline that can be represented as a computational graph can be uploaded to our platform, and from there users can deploy the pipeline to any device running the Scailable AI manager.

On the device, the user only has to configure the input sensors and the output protocol; we ensure the pipeline is optimized for the target device. We work closely with hardware manufacturers so that our on-device runtimes are fully optimized for our selected targets, and through our platform we enable the massive, controlled, and secure deployment of new pipelines to thousands of devices. Users and resellers of our platform maintain their own libraries of AI/ML pipelines (and we provide a number to get started), which means that solutions that used to be expensive and time-consuming to build (such as privacy-preserving visitor tracking, ANPR, or product quality inspection) can now simply be configured within minutes.
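Scailable’s own tooling is not documented in this interview, but the general idea of treating an ML pipeline as a computational graph that can be optimized before it ever reaches the device can be sketched in a few lines. Everything below (the `Node` class, the `fold_constants` pass) is an illustrative toy, not Scailable’s actual API or process:

```python
# Toy computational graph with one classic deployment-time optimization:
# constant folding. A real edge toolchain would apply many such passes
# (quantization, operator fusion, target-specific kernels) to the graph.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op            # "const", "input", "add", or "mul"
        self.inputs = list(inputs)
        self.value = value      # payload: constant value, or input name

def fold_constants(node):
    """Replace operations whose inputs are all constants with a single const node."""
    node.inputs = [fold_constants(i) for i in node.inputs]
    if node.op in ("add", "mul") and all(i.op == "const" for i in node.inputs):
        a, b = (i.value for i in node.inputs)
        return Node("const", value=a + b if node.op == "add" else a * b)
    return node

def evaluate(node, feeds):
    """Run the graph on the device, given a dict of input values."""
    if node.op == "const":
        return node.value
    if node.op == "input":
        return feeds[node.value]
    a, b = (evaluate(i, feeds) for i in node.inputs)
    return a + b if node.op == "add" else a * b

# Pipeline: y = x * (2 + 3). The (2 + 3) subgraph is folded offline,
# so the deployed graph does less work per inference.
x = Node("input", value="x")
graph = Node("mul", [x, Node("add", [Node("const", value=2),
                                     Node("const", value=3)])])
optimized = fold_constants(graph)
```

The point of the sketch is the separation of concerns the interview describes: the data scientist supplies the graph, and the deployment layer rewrites it for the target before it runs.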

Tell us more about the company: what are your advantages compared to others on the market, who is involved, and what are your major milestones so far?

We are backed by (venture) capital from Volta Ventures, BOM, and Rabobank. Founded in 2020, we are currently an eight-person team. Major milestones include over 1M in funding, the release of our AI manager pre-installed on Advantech routers, the signing of partnership agreements with value-added resellers in the Netherlands and beyond, and the use of our technology to power over 12 different edge AI solutions (often created by partners) that are currently deployed in the field at scale.

How do you see the edge market developing over the next few years?

We strongly believe in a powerful ecosystem to provide solutions for end customers. While many startups and scale-ups currently try to carry individual edge solutions from start to end (hardware, software, staging, maintenance, etc.), this model is untenable for larger-scale deployments. The edge AI ecosystem will split up into hardware providers, value-added resellers, system integrators and machine builders, AI/ML model builders (and modeling platforms), and AI/ML model management and deployment. We provide a platform for the latter, which we think will be the go-to edge AI deployment method in the years to come.

What are the main trends you see about integrating edge computing into different verticals?

As in the previous answer, we expect specialization across the whole chain, with different parties in the ecosystem taking up their unique positions. In our view, developing an AI/ML model is a very different activity from deploying and maintaining that model on a selected piece of hardware. Optimizing a model, tailoring it to the target hardware, and managing it while keeping hardware constraints in check is a specialized skill, and one that we excel at.