By Dan O’Shea, Sales Manager, MatrixSpace
When I mention a product that networks multiple sensors together to trigger actions, what comes to mind? Likely smart thermostats, smart bulbs, or smart plugs. You might file these products under the larger umbrella of the Internet of Things (IoT). However, classifying the MatrixSpace Radar system solely as part of the IoT would be a limiting perspective.
This is because IoT devices themselves have limitations. The array of smart devices that have become commonplace in our homes aren’t truly “smart” in the conventional sense. Most consist of interconnected sensors that activate actions primarily through their link to cloud-based services. For instance, your smart thermostat adjusts the temperature based on readings from remote sensors, or your smart lights switch on at sunset through an app connected to the time of day. These devices effectively function as Boolean operators, initiating actions according to predefined conditions. Automating household tasks is extremely useful, but these sensors act merely as triggers for a sequence of If-This-Then-That events.
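That trigger pattern is simple enough to sketch in a few lines. The device names, thresholds, and return values below are illustrative only, not any real product's API; the point is that each sensor reading is reduced to a single Boolean condition with a predefined action.

```python
# A minimal sketch of the If-This-Then-That pattern: the sensor value is
# collapsed to a Boolean condition, and the condition fires a fixed action.
# Names and thresholds are invented for illustration.

def smart_thermostat(temp_reading: float, setpoint: float = 20.0) -> str:
    """If the remote sensor reads below the setpoint, trigger the heater."""
    return "heater_on" if temp_reading < setpoint else "heater_off"

def smart_light(minutes_after_sunset: int) -> str:
    """If the clock says the sun has set, trigger the lights."""
    return "lights_on" if minutes_after_sunset >= 0 else "lights_off"

# No interpretation of the environment happens here, only triggering.
print(smart_thermostat(18.5))   # heater_on
print(smart_light(-30))         # lights_off
```

Useful, but the device never knows *what* it sensed, only whether a threshold was crossed.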
The real transformation occurs when we imbue sensors with processing to interpret information, often referred to as edge compute capabilities. Suddenly, these devices become genuinely intelligent. By integrating processing power, embedded AI, and machine learning functions, these sensors not only gather data from their surroundings but also interpret it. Think of a home security camera, for example, that can distinguish between a dog and a human.
But even with edge computing, there are limitations. Each sensor still operates in isolation, reporting its own interpreted data alongside every other sensor in the network. In scenarios such as airspace surveillance or perimeter security, this can lead to a single object being perceived as multiple potential tracks in different locations, a consequence of each sensor’s known error margins and maximum detection range.
Enter MatrixSpace’s concept of AI Collaborative Sensing, which takes this a step further. By enabling individual sensor nodes to communicate within the same network, a cohesive, accurate representation of a tracked object’s movement through an environment is achieved. Various airborne and ground-based sensors exchange information with each other to refine the estimate of the object’s position. As these nodes transmit data through the network infrastructure, additional layers of processing power can be applied at each step, amplifying the capabilities of each sensor beyond what it could achieve independently. The outcome is sensors that can identify people, vehicles, and aircraft at significant distances, with remarkable precision, under diverse lighting and weather conditions. All of them talk to each other, creating a more cohesive sense of the environment for the end user. This is what our CEO, Greg Waters, means when he speaks of “digitizing the outdoors”: creating a digital analogue of the outside world that humans and machines can ingest, process, and react to in real time.
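One common, textbook way to combine position reports from multiple nodes is an inverse-variance weighted average, so that sensors with tighter error margins count for more. The sketch below uses that generic technique purely to illustrate the idea of collaborative refinement; it is not MatrixSpace's proprietary fusion algorithm, and the node positions and variances are made up.

```python
# A generic sketch of multi-sensor fusion: each node reports a 2D
# position estimate plus a variance (its error margin), and the network
# fuses them with an inverse-variance weighted average. This is a
# standard textbook technique, shown only to illustrate the concept.

def fuse_positions(reports):
    """reports: list of ((x, y), variance) tuples from individual nodes."""
    wx = wy = wsum = 0.0
    for (x, y), var in reports:
        w = 1.0 / var          # precision weight: tighter margin, more say
        wx += w * x
        wy += w * y
        wsum += w
    return (wx / wsum, wy / wsum)

# Three nodes observe the same aircraft at slightly different positions.
reports = [((100.0, 205.0), 4.0),    # nearby ground node, tight margin
           ((104.0, 199.0), 16.0),   # distant node, looser margin
           ((101.0, 202.0), 4.0)]    # second nearby node
fused = fuse_positions(reports)
print(fused)   # a single refined track near (101, 203)
```

Instead of three divergent tracks, the operator sees one object with a position estimate better than any single node could produce.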
The applicability of such a toolkit to tasks like airspace management, UAS detection, and site security is evident, and its broader contribution to digitization and automation reaches well beyond those use cases. As we look ahead, we can’t help but imagine the unique and exciting ways a system like this can advance technology and automation.