Martyn’s Law (also known as the ‘Protect Duty’) could forever change the landscape of event security if the proposed legislation is passed. Some would argue it already has.
In 2017, just as concertgoers were leaving the Manchester Arena, a terrorist detonated an improvised explosive device in a suicide attack, killing 22 people and injuring more than 250. The mother of one of the victims, Martyn Hett, has campaigned tirelessly for tighter security and for a duty of care to be placed upon venues to protect their patrons. As a result, Martyn’s Law (the ‘Protect Duty’) has been proposed as UK legislation to protect the public from terrorism. At the same time, other global trends have underlined the need for action on this front. The Global Terrorism Index 2020, for instance, reported a steep increase in far-right attacks in North America, Western Europe, and Oceania, citing a 250% rise since 2014, with a 709% increase in deaths over the same period. But how do we implement the measures proposed by Martyn’s Law without intruding on our lives through mass surveillance?
Traditionally, cameras and CCTV have been the go-to solution for monitoring. However, maintaining a comprehensive view of locations with complex layouts, or of venues that host large crowds and gatherings, is a challenging and labour-intensive task for operatives. Camera outputs are designed to be interpreted by people, which requires significant human resources and is liable to inconsistent accuracy in complex environments where getting things wrong can have a catastrophic impact.
Fortunately, technology is evolving. AI-based perception strategies are being developed alongside advancements in 3D data capture technologies, including lidar, radar and time-of-flight (ToF) cameras, that are capable of transforming surveillance with enhanced layers of autonomy and intelligence. As a result, smart, automated systems will be able to work alongside the security workforce to provide an always-on, comprehensive view of the environment, delivering highly accurate insights and actionable data. And, with the right approach, this can be achieved without undue impact on our rights as private citizens.
Closing the gap
While much of this innovation isn’t new, it has been held back from at-scale adoption by the gaps that remain between the data that’s captured and the machine’s ability to process it into actionable insight.
In security, for example, this gap is most apparent when it comes to addressing occlusion: in other words, recognising objects that move in and out of view of the sensors scanning a space. For security systems to provide the high levels of accuracy required in high-traffic environments, such as concert venues, it’s crucial that they are able to detect all individuals and track their behaviour as they interact with a space and those within it. This, of course, is possible using multiple sensor modes. However, without the right perception platform to interpret the data being captured, there is a significant risk of missing crucial events because, for instance, the machine misinterprets a partially concealed individual as an inanimate object.
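To picture what coping with occlusion involves, here is a minimal sketch in Python of a tracker that “coasts” a target through a brief occlusion by extrapolating its recent motion, rather than dropping the track the moment the sensors lose sight of it. The class names, thresholds, and the constant-velocity model are illustrative assumptions for this article, not a description of any particular product’s implementation.

```python
import math

class Track:
    """A single tracked object: position, velocity, and time since last seen."""
    def __init__(self, track_id, x, y):
        self.track_id = track_id
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0
        self.frames_unseen = 0

    def predicted(self, dt):
        """Where we expect the object next, under a constant-velocity model."""
        return (self.x + self.vx * dt, self.y + self.vy * dt)

MAX_UNSEEN = 15   # frames to coast through occlusion before dropping a track
GATE = 1.0        # metres: max detection-to-track distance for a match

def step(tracks, detections, dt=0.1):
    """One tracking cycle: match detections to predicted positions, coast the rest.

    A track with no matching detection is treated as occluded and moved to its
    predicted position rather than deleted, so a person passing behind a
    pillar keeps the same identity when they reappear.
    """
    unmatched = list(detections)
    for t in tracks:
        px, py = t.predicted(dt)
        best = min(unmatched, key=lambda d: math.dist((px, py), d), default=None)
        if best is not None and math.dist((px, py), best) < GATE:
            t.vx, t.vy = (best[0] - t.x) / dt, (best[1] - t.y) / dt
            t.x, t.y = best
            t.frames_unseen = 0
            unmatched.remove(best)
        else:
            t.x, t.y = px, py        # coast through the occlusion
            t.frames_unseen += 1
    live = [t for t in tracks if t.frames_unseen <= MAX_UNSEEN]
    return live, unmatched           # leftover detections seed new tracks
```

A production system replaces the straight-line prediction with a learned motion model, but the principle is the same: prediction is what keeps an identity alive while the sensors cannot see the person.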
This gap is narrowing. Thanks to the first wave of sensor innovators, the shift from video interpreted by people to 3D point clouds interpreted by machines means we can now capture much richer information and datasets that can precisely detect and classify objects and behaviours, without capturing biometric or identifiable personal data. But to fully close the gap, we need perception strategies and approaches that can adapt to the ever-changing nature of real-world environments.
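As a rough illustration of why point clouds lend themselves to privacy-preserving analysis, the sketch below groups raw 3D points into clusters and labels each one by its physical extent alone. The clustering method and the size thresholds are simplified assumptions made for this example; real systems use far more sophisticated deep-learning classifiers, but the point stands: no face, skin tone, or other biometric attribute is ever captured.

```python
import math
from collections import deque

def cluster(points, eps=0.5):
    """Group 3D points whose neighbours lie within `eps` metres (naive O(n^2))."""
    unvisited, clusters = set(range(len(points))), []
    while unvisited:
        seed = unvisited.pop()
        group, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            near = {j for j in unvisited if math.dist(points[i], points[j]) < eps}
            unvisited -= near
            group.extend(near)
            queue.extend(near)
        clusters.append([points[i] for i in group])
    return clusters

def label(cluster_pts):
    """Classify a cluster from its bounding-box height alone.

    Thresholds are illustrative: the input is coarse physical shape,
    never imagery or identity.
    """
    zs = [p[2] for p in cluster_pts]
    height = max(zs) - min(zs)
    if 1.4 <= height <= 2.1:
        return "person"
    if height < 0.5:
        return "small object"
    return "structure"
```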
Until now, this has been a lengthy and costly process, requiring those implementing or developing solutions to start from scratch on software, algorithms and training data every time the context or sensor mode changes. But, by combining proven 3D sensor technologies like lidar with a deep-learning-first approach, this needn’t be the case.
That’s why we are developing an adaptive edge processing platform for lidar that’s capable of understanding the past and present behaviour of people and objects within a given area. Through deep learning, it can predict the near-future behaviour of each object with a degree of certainty, accurately and consistently generating real-time data and tracking the movement of people in the secured environment at scale.
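The prediction step can be pictured with the hypothetical sketch below: it extrapolates a track’s short position history a couple of seconds ahead and flags any predicted point that enters a restricted zone. The function names, the straight-line motion model, and the zone geometry are all assumptions for illustration; a deployed system would use a learned model, though the shape of the output is the same.

```python
def predict_path(history, horizon_s=2.0, step_s=0.2):
    """Extrapolate a track's recent (t, x, y) history a short way ahead.

    `history` is a list of (time, x, y) samples, oldest first. This uses the
    last two samples and a constant-velocity assumption.
    """
    (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    steps = int(horizon_s / step_s)
    return [(t1 + k * step_s, x1 + vx * k * step_s, y1 + vy * k * step_s)
            for k in range(1, steps + 1)]

def breaches(path, zone):
    """Return predicted points inside an axis-aligned restricted zone."""
    (xmin, ymin), (xmax, ymax) = zone
    return [(t, x, y) for t, x, y in path if xmin <= x <= xmax and ymin <= y <= ymax]

# Example: a person walking at 1 m/s toward a stage-front exclusion zone.
hist = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0)]
alerts = breaches(predict_path(hist), ((1.5, -1.0), (3.0, 1.0)))
print(alerts)  # predicted entries into the zone within the next ~2 seconds
```

It is this ability to raise an alert before a boundary is crossed, rather than after, that distinguishes prediction from mere recording.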
This approach has value beyond security. Facilities teams, for example, can extract a wealth of information beyond the primary security function to support other priorities such as cleaning (tracking facility usage so that schedules can be adjusted), while retailers can optimise advertising and display efforts by identifying areas of high footfall. Likewise, health and safety teams can gather much deeper insights into the way spaces are used, enhancing the processes and measures that protect their users.
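As a simple illustration of these secondary uses, anonymous track positions can be binned into a coarse grid to show where footfall concentrates. The function name and grid size below are illustrative; the point is that the derived output carries no identity.

```python
from collections import Counter

def footfall_heatmap(positions, cell_m=2.0):
    """Bin anonymous (x, y) track samples into a grid of cell counts.

    The result is enough to drive cleaning schedules or display placement,
    with no identity attached to any sample.
    """
    heat = Counter()
    for x, y in positions:
        heat[(int(x // cell_m), int(y // cell_m))] += 1
    return heat

# Illustrative samples clustered near an entrance, in metres.
samples = [(0.5, 1.0), (1.2, 0.8), (3.9, 3.5), (0.7, 1.4), (3.8, 3.6)]
print(footfall_heatmap(samples).most_common(2))
# -> [((0, 0), 3), ((1, 1), 2)]
```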
Perception, not surveillance
As I’ve explained, perception is reaching new levels of sophistication through deep learning. By continually training against a near-limitless range of scenarios, our approach can provide consistently accurate and rich data that users can trust.
This will ultimately change the way we manage environments at a time when liability comes with ever increasing consequences.
For venue providers, Martyn’s Law will leave no option but to rethink their approach to security and safety. But with new, smarter, more accurate tools at their disposal that enable them to predict and protect rather than simply react, both the human and the commercial risks can be addressed. Meanwhile, the public can take comfort in knowing that measures to keep them safe needn’t mean sacrificing their privacy.
This article first appeared on SourceSecurity.com.