Intelligence is the ability to adapt to change
Our differences
Zero Dataset (no initial dataset needed)
By building models on-the-fly, directly on the device, from real live captured data, DavinSy escapes the main vicious circle of any machine-learning project: you need data to build a model, yet you need a working product in the field to collect that data.
The problem common to all AI projects is the lack of quality data. It leads to abandoned projects, data acquired in dubious ways, expensive purchases of predefined datasets, or the construction of heavy, costly, and not always realistic simulators to compensate for data that is scarce or unrepresentative of the target application's real environment.
With DavinSy, the dataset is built from real live data captured while your application operates. This dataset is used to validate and qualify your initial virtual model settings. Once this is done, you deploy DavinSy in the field, where it keeps collecting real quality data and builds the exact model needed to solve the very specific problem of each of your users.
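As a rough mental model, the workflow looks like the sketch below: the device starts with no dataset, ingests live samples as the application runs, and refines its model incrementally. The class and method names, and the nearest-centroid learner standing in for DavinSy's DALE engine, are purely illustrative assumptions, not the actual API.

```python
# Illustrative only: a minimal on-device "collect while you operate" loop.
# The OnDeviceLearner class and its nearest-centroid model are hypothetical
# stand-ins for on-device learning, not DavinSy's actual API or DALE engine.
import numpy as np

class OnDeviceLearner:
    """Builds its dataset and model incrementally from live captured samples."""

    def __init__(self, num_features: int):
        self.num_features = num_features
        self.sums = {}    # per-label running sum of feature vectors
        self.counts = {}  # per-label sample counts

    def observe(self, features: np.ndarray, label: str) -> None:
        """Ingest one live sample (e.g. confirmed by the user or the application)."""
        self.sums.setdefault(label, np.zeros(self.num_features))
        self.sums[label] += features
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, features: np.ndarray) -> str | None:
        """Classify with the current model; returns None before any data is seen."""
        if not self.counts:
            return None
        centroids = {lbl: self.sums[lbl] / self.counts[lbl] for lbl in self.counts}
        return min(centroids, key=lambda lbl: np.linalg.norm(features - centroids[lbl]))

# Usage: the device starts with zero data and improves as real samples arrive.
learner = OnDeviceLearner(num_features=3)
learner.observe(np.array([0.9, 0.1, 0.0]), label="wake_word")
learner.observe(np.array([0.1, 0.8, 0.1]), label="noise")
print(learner.predict(np.array([0.85, 0.2, 0.05])))  # -> "wake_word"
```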
Zero code (Low code)
DavinSy is fully autonomous and automatic. It builds models for you and thus spares you the slow, iterative process of designing the right neural network for your problem.
Data-science experts are rare, and they are not product oriented. Add the expertise needed to design an embedded device, and finding such a resource becomes nearly impossible.
DavinSy does not require any AI or embedded-development expertise to be used. It handles all the hassle for you. The only expertise needed is your business expertise, to tune DavinSy for your use case.
Through its graphical interface, DavinSy Maestro guides you through every production phase: design, prototyping, MVP, full-blown product, and even maintenance, thanks to its Over-The-Air features.
Deterministic and explainable
DavinSy and its Deeplomath Augmented Learning Engine (DALE) are deterministic and do not incorporate stochastic elements. A common grievance about deep learning is that it behaves as a black box, making it very hard to understand why a decision was taken. DavinSy's divide-and-conquer philosophy addresses exactly this.
DavinSy's design encourages you to split large problems into small ones. The intermediate results produced by those small models let you understand how the final decision is reached. For instance, if your voice command is rejected, you can tell whether it is because DavinSy failed to separate voice from noise, because it did not recognize your voice, or because it did not understand what you said (see the sketch below). As a result, a modular AI is much easier to correct than a monolithic do-it-all network.
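The sketch below illustrates the idea with a toy pipeline: each stage reports its own verdict, so a rejection always comes with the stage that caused it. The stage names follow the voice-command example above; the checks themselves are hypothetical placeholders, not DavinSy internals.

```python
# Illustrative only: a modular pipeline whose intermediate verdicts explain a rejection.
from dataclasses import dataclass
from typing import Callable

@dataclass
class StageResult:
    stage: str
    passed: bool
    detail: str

def run_pipeline(sample: dict,
                 stages: list[tuple[str, Callable[[dict], tuple[bool, str]]]]) -> list[StageResult]:
    """Run stages in order, stopping at the first failure so the reason is explicit."""
    results = []
    for name, check in stages:
        passed, detail = check(sample)
        results.append(StageResult(name, passed, detail))
        if not passed:
            break
    return results

# Dummy checks standing in for three small models.
stages = [
    ("voice_vs_noise", lambda s: (s["snr_db"] > 10, f"SNR = {s['snr_db']} dB")),
    ("speaker_id",     lambda s: (s["speaker"] == "owner", f"speaker = {s['speaker']}")),
    ("command_intent", lambda s: (s["intent"] is not None, f"intent = {s['intent']}")),
]

sample = {"snr_db": 14, "speaker": "guest", "intent": None}
for r in run_pipeline(sample, stages):
    print(f"{r.stage}: {'ok' if r.passed else 'REJECTED'} ({r.detail})")
# Here the trace shows the command was rejected at speaker_id, not at noise separation.
```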
ENERGY FRUGALITY
Through its frugality and autonomy, DavinSy reduces your energy bill.
Power consumption is one of the main hurdles preventing IoT from going mainstream. Whether it is to reduce the carbon footprint, to spare users from recharging batteries every day, or to avoid being tied to the grid, power frugality is crucial for IoT.
Most of the consumption budget goes to communications. DavinSy is lean because it is autonomous and works offline: there is no need to maintain a link to the cloud for inference, nor even during training.
Furthermore, the training cost of the Deeplomath Augmented Learning Engine (DALE) is equivalent to one iteration of backpropagation in standard deep learning workflows. So DavinSy is not only faster than the competition, it also extends battery life. On top of this, the created model brings a superior level of function-connectivity and is spiking by nature.
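To make the scale of that claim concrete, here is a purely illustrative back-of-the-envelope comparison. Every figure in it is an assumption chosen for the arithmetic, not a measured DavinSy or DALE number.

```python
# Purely illustrative arithmetic: how "training cost ~ one backpropagation pass"
# translates into an on-device energy ratio. All figures are hypothetical
# assumptions for the sake of the comparison.
energy_per_backprop_pass_mj = 5.0  # assumed energy of one backprop pass over a batch
epochs = 50                        # assumed standard training schedule
batches_per_epoch = 200

standard_training_mj = energy_per_backprop_pass_mj * epochs * batches_per_epoch
single_pass_training_mj = energy_per_backprop_pass_mj * 1  # one-pass equivalent

print(f"standard: {standard_training_mj:.0f} mJ, single-pass: {single_pass_training_mj:.0f} mJ")
print(f"ratio: {standard_training_mj / single_pass_training_mj:.0f}x")
```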
SCALABLE, COST-EFFICIENT SYSTEM
DavinSy acts as an abstraction layer on top of your hardware, enabling deployment at large scale. It reduces time to market, the need for rare expertise, the bill of materials, and maintenance costs, and therefore the price of your product.
AIoT is, by nature, cost driven. If you want to deploy thousands of products, they must be as cheap as possible.
DavinSy has several attributes that make your product cheaper. First, because it is low code, it reduces the need to hire expensive AI experts. Then, by streamlining the design process and getting data directly from the field, it shortens design time and saves the cost of data acquisition. Furthermore, because it is frugal and does not need any hardware accelerator, it lowers the bill of materials.
Finally, even maintenance is cheaper. Imagine you deployed a model the standard way and discover that it does not work in a situation you did not anticipate (specific environments, accents, lingos…). Imagine the support costs this would generate, as well as the cost of going through the complete cycle again: collecting data, generating a new model, and redeploying it in the field.
This cannot happen with DavinSy, because DavinSy always adapts to reality.