What is mission-critical AI, and how is it shaping our future?
Join Motorola Solutions executive vice president and chief technology officer Mahesh Saptharishi as he and AI experts explore the science, the challenges and the incredible potential of AI when it matters most.
MOTOROLA, MOTO, MOTOROLA SOLUTIONS and the Stylized M Logo are trademarks or registered trademarks of Motorola
Trademark Holdings, LLC and are used under license. All other trademarks are the property of their respective owners.
© 2025 Motorola Solutions, Inc. All rights reserved.
Preference Centre
Privacy Statement
The Jaegar
A fully integrated, modular, multi-sensor, high-powered PTZ platform. The Jaegar supports uncooled (LWIR) and cooled (MWIR, including HD) thermal cameras, along with very long-range, low-light HD visible cameras, and provides a through shaft for mounting radar or other sensors on top of the Jaegar positioner.
LWIR (uncooled) configuration
Zoom thermal lens up to: 20-300mm
Accuracy: up to 0.0001°
Pan and tilt speeds: 0.001° to 200° per second
Tilt range: -90° to +90°
Pan range: 360° continuous
Thermal human detection up to: 9.3km*
Thermal vehicle detection up to: 28.7km*
HD visible lens: 15.2mm to 500mm
Additional sensors available: HD visible camera, SWIR camera, white light illuminator, infrared illuminator, laser illuminator, signal light gun, GPS compass, laser range finder, acoustic hailer
* Ranges are based upon Johnson’s Criteria. (Human at 1.8m x 0.5m, Detection at 2 pixels, Recognition at 8 pixels and Identification at 13 pixels. 50% probability subject to environmental conditions)
** Ranges are based upon 50% probability. Detailed NVIPM calculation notes available upon request
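The detection ranges above are specified using Johnson's Criteria, which relates range to the number of pixels a target subtends on the detector. As a rough illustration only (not the vendor's NVIPM calculation), the criterion can be inverted to estimate range from focal length and pixel pitch; the 300 mm focal length and 12 µm pitch below are assumed, illustrative values, not published Jaegar parameters.

```python
# Illustrative Johnson's Criteria estimate. Pixels subtended across a target
# of width w (m) at range R (m) by a lens of focal length f (m) on a detector
# with pixel pitch p (m):  n = f * w / (R * p)
# Inverting for the range at which a criterion of n pixels is met:
#     R = f * w / (n * p)

def johnson_range_m(focal_length_m, target_width_m, pixels_required, pixel_pitch_m):
    """Range (m) at which the target spans the required number of pixels."""
    return focal_length_m * target_width_m / (pixels_required * pixel_pitch_m)

# Assumed values: 300 mm lens, 0.5 m-wide human target, 12 micron pitch
# (typical for uncooled LWIR; the actual sensor pitch is not stated here).
detection_m = johnson_range_m(0.300, 0.5, 2, 12e-6)        # 2 pixels: detection
recognition_m = johnson_range_m(0.300, 0.5, 8, 12e-6)      # 8 pixels: recognition
identification_m = johnson_range_m(0.300, 0.5, 13, 12e-6)  # 13 pixels: identification
print(detection_m, recognition_m, identification_m)
```

Published figures will differ because they fold in atmospheric conditions, detector sensitivity and a 50% probability model, which this simple geometric sketch ignores.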
MWIR (cooled) configuration
Zoom thermal lens up to: 100-1200mm
Accuracy: up to 0.0001°
Pan and tilt speeds: 0.001° to 200° per second
Tilt range: -90° to +90°
Pan range: 360° continuous
Thermal human detection up to: TBC**
Thermal vehicle detection up to: TBC**
HD visible lens: 16.7mm to 1000mm (to 2000mm with (x2) extender)
Additional sensors available: HD visible camera, SWIR camera, white light illuminator, infrared illuminator, laser illuminator, signal light gun, GPS compass, laser range finder, acoustic hailer
the AI podcast by Motorola Solutions
Listen now
Host:
Mahesh Saptharishi
Featured guest:
Patrick Huston
CEO of AI Edutainment
Follow us:
AI at Motorola Solutions
Safety & Security Ecosystem
Motorola Solutions Products
Mahesh Saptharishi
This podcast
About Mahesh
Episodes & guest bios
Executive vice president and chief technology officer,
Motorola Solutions, Inc.
Mahesh Saptharishi is executive vice president and chief technology officer. He is responsible for the company’s public safety software and video security & access control solutions. He also leads the chief technology office.
Saptharishi joined Motorola Solutions in 2018 through the acquisition of Avigilon, a video security solutions company, where he served as senior vice president and chief technology officer. Prior to Avigilon, he founded VideoIQ, a video analytics company that was acquired by Avigilon, as well as Broad Reach Security, which was later acquired by GE.
Saptharishi earned a doctoral degree in artificial intelligence from Carnegie Mellon University.
Featured guest:
Patrick Huston
Brigadier General (ret.)
About the podcast
About Mahesh
Episode 1: Bold, Responsible AI with Brigadier General (ret.) Patrick Huston
Guest bio:
In the debut episode of Mahesh the Geek, Mahesh is joined by Brigadier General (ret.) Patrick Huston, who shares his unique journey from military service to becoming a legal expert in AI and cybersecurity. Mahesh and Patrick delve into topics such as human-machine teaming, regulatory challenges, AI's impact on intellectual property law and fair use, the need for AI risk management standards, cyber risks, open-source LLM security and licensing concerns, AI's transformation of work through automation, and the potential for escalating errors with AI agents. General Huston's insights are grounded in his message for those considering leveraging AI: be bold, be responsible and be flexible.

General Huston is an engineer, a soldier, a helicopter pilot, a lawyer, a technologist and a corporate board member. He's a renowned strategist, speaker and author on AI, cybersecurity and quantum computing. He is a Certified Director with the National Association of Corporate Directors and serves on the FBI's Scientific Working Group on AI, the American Bar Association's (ABA) AI Task Force and the Association of Corporate Counsel's (ACC) Cybersecurity Board.
May 14, 2025. Episode time: 44:30 min
Add Patrick on LinkedIn
Mahesh Saptharishi is the chief technology officer at Motorola Solutions, and a person obsessively interested in all things tech.
A geek is one who is knowledgeable and obsessed, and this podcast is Mahesh’s attempt to seek knowledge. Mahesh the Geek will delve into the core of mission-critical AI – technology that safeguards lives, communities and essential daily services. In this series, we'll be exploring the science, the challenges and the incredible potential of AI when it matters most. During each episode, Mahesh will talk to the experts and ask them one crucial question: what is mission-critical AI, and how is it shaping our future?
Add Mahesh on LinkedIn
Subscribe to Mahesh the Geek
In this episode, Mahesh and leading anthropologist Professor Martin Holbraad of University College London’s Ethnographic Insights Lab unravel the transformative power of anthropology. Far more than just studying cultures, anthropology – through its deep ethnographic research – unveils a powerful truth: to build truly effective AI, you must first understand the very fabric of human knowledge. This is more than just studying cultures; it’s about redefining what it means to design for a world where "AI changes the meaning of reports themselves" and the line between "where the machine ends and the person starts" blurs. This first part of a two-episode series plunges into the fundamental questions of human memory and how anthropological thinking can conquer the complex challenges of AI, as seen in Motorola Solutions and UCL’s groundbreaking work on police incident reporting. Discover why understanding "what kind of world you're building for" isn't just an essential first step, it’s the only step to crafting tools that truly serve humanity.
Martin Holbraad is Professor of Social Anthropology at University College London (UCL). He has conducted anthropological research in Cuba since the late 1990s, on the intersection of politics and ritual practices, producing works including Truth in Motion: The Recursive Anthropology of Cuban Divination (Chicago, 2012) and Shapes in Revolution: The Political Morphology of Cuban Life (Cambridge, 2025). He has made significant contributions to anthropological theory, including in his co-authored volume The Ontological Turn: An Anthropological Exposition (Cambridge, 2016). He is Director of the Ethnographic Insights Lab (EI-Lab), which he founded at UCL in 2020 as a consultancy dedicated to helping organizations better understand their customers and users, as well as themselves. EI-Lab’s tagline is “the problem is you don’t know what the problem is”.
Add Martin on LinkedIn
Guest bio:
Episode 3 Part 1: Anthropology's Key to Robust AI with Professor Martin Holbraad
August 11, 2025. Episode time: 38:49 min
In the second episode of Mahesh the Geek, Mahesh is joined by Dr. Read Hayes, executive director of the Loss Prevention Research Council (LPRC), as they explore the evolution of loss prevention, emphasizing the importance of prevention over response in public safety. They discuss the integration of technology, such as AI and body-worn cameras, in enhancing crime detection and prevention. The dialogue also highlights the significance of collaboration between retailers and law enforcement, the challenges of data sharing and the behavioral cues that can indicate potential criminal activity. Mahesh and Dr. Hayes also discuss insights into future trends in crime prevention and the role of technology in shaping these developments.
Read Hayes, PhD is a Research Scientist and Criminologist at the University of Florida, and Director of the LPRC. The LPRC includes 100 major retail corporations, multiple law enforcement agencies, trade associations and more than 170 protective solution/tech partners working together year-round in the field, in VR and in simulation labs with scientists and practitioners to increase people and place safety by reducing theft, fraud and violence. Dr. Hayes has authored four books and more than 320 journal and trade articles. Learn more about the LPRC and its extensive research: https://lpresearch.org/research/
Add Read on LinkedIn
Guest bio:
Episode 2: Safer retail experiences through AI with Dr. Read Hayes
June 30, 2025. Episode time: 37:03 min
Podcast appearances
Show notes
Motorola Solutions EVP & CTO Mahesh Saptharishi joins Bloomberg Intelligence tech analyst Woo Jin Ho on the Tech Disruptors podcast to talk about how the company is deploying AI. He shares how Motorola Solutions is building AI into its hardware and software to unlock new capabilities—while putting guardrails in place to ensure responsible use. Its AI for public safety, Assist, is designed to boost productivity and bring automation, situational awareness, and real-time insights to first responders—where every second matters.
How Motorola Solutions Is Building Smarter Public Safety With AI: Tech Disruptors
Bloomberg Intelligence - Tech Disruptors
June 3, 2025
Ethnographic Research/Fieldwork: This is the methodological approach highlighted as crucial for uncovering ontological multiplicity. By conducting in-depth fieldwork with actual users, researchers can understand how tools are truly conceived and used in their everyday lives, often revealing perspectives that differ significantly from designers' initial assumptions.
Actor Network Theory: Latour’s actor-network theory states that human and non-human actors form shifting networks of relationships that define situations and determine outcomes. It is a constructivist approach arguing that society, organizations, ideas and other key elements are shaped by the interactions between actors in diverse networks rather than having inherent fixed structures or meanings.
Camera conformity: When officers review body-worn camera footage before writing reports, they may unconsciously adjust their accounts to match what’s on video, omitting details they personally recall but aren’t visible in the footage.
Memory contamination: Exposure to AI-generated or external content can introduce errors into an officer’s memory, causing them to unintentionally overwrite or alter their own recollections with inaccurate information.
Cognitive offloading: Relying on AI to generate reports or recall details can reduce the need for officers to actively use their own memory, potentially weakening recall when they most need it.
Incident Reporting Tools and Police Officers: A prime example illustrating ontological multiplicity involves police incident reports. While developers might assume a report is solely a "record as faithful as possible of the officer's subjective recall," ethnographic research revealed that for officers, it is also a "performance of [their] professional competence," designed to convince a jury, promotions panel, or complaints panel of their effectiveness. This demonstrates how one "thing" (an incident report) can be ontologically multiple, serving different purposes simultaneously.
Episode 3 Part 1: Anthropology's Key to Robust AI with Professor Martin Holbraad
August 11, 2025
AI and human-centered design
In this episode of the User Friendly podcast, Mahesh Saptharishi, Executive Vice President and Chief Technology Officer, Motorola Solutions, and Nitin Mittal, Global AI Business Leader, Deloitte Consulting LLP, join host Hanish Patel to explore the future of AI and the new role of human-centered design. They discuss the opportunities and risks presented by generative AI, why establishing guardrails is a business imperative, and the impact of AI on the future of work, trust, and human-machine interaction.
Deloitte - User Friendly
May 25, 2023
Loss Prevention Research Council (LPRC): The Loss Prevention Research Council is an active community of researchers, retailers, solution partners, manufacturers, law enforcement professionals, and others who believe research and collaboration will lead to a safer world for shoppers and businesses.
Public and Private Collaboration/Partnerships: The critical need for law enforcement and private enterprises (like retailers) to work together to address crime, especially in real-time information sharing.
Real-time Crime Integrations/Pre-crime Interventions: The goal of achieving immediate data exchange and proactive measures before and during a crime event, contrasting with traditional forensic, after-the-fact investigations.
The "Affect, Connect, Detect" Model: This core framework leverages the scientific method to understand and counter criminal activity.
Affect: This involves understanding the "initiation and progression" of a crime, similar to a medical pathology, and figuring out how to impact that progression to make it harder, riskier, or less rewarding for offenders.
Detect: The goal is earlier detection of criminal intent or activity. This is achieved by arraying sensors (digital, aural, visual, textual) to pick up indicators before, during and after a crime, such as online bragging or coordinating activities.
Connect: This emphasizes information sharing and collaboration. It involves three levels: Connect1 (smart and connected places, enhancing a place manager's awareness), Connect2 (smart connected enterprises, sharing information between stores, e.g., "hot lists"), and Connect3 (smart connected communities, partnering with law enforcement and other organizations beyond the enterprise).
Episode 2: Safer Retail Experiences through AI with Dr. Read Hayes
June 30, 2025
Human and Machine Teaming / Human Augmentation: This concept involves combining the respective strengths of humans and machines. Humans excel at leadership, common sense, empathy and humor. Machines outperform humans in tasks like ingesting mass data, rapid data computations, or handling repetitive tasks where human attention wanes. The goal is not choosing between humans or machines, but leveraging the best of both worlds.
Key AI Principles: Fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability. These principles are generally universal, but their application and implementation vary significantly by country or region, as seen with Europe's stricter data privacy rules versus the United States' patchwork of state and local laws.
General Huston’s Advice for Adopting AI Applications:
Be bold: Leverage AI to remain competitive.
Be responsible: Understand and actively mitigate risks; AI is not a magic solution.
Be flexible: Be ready to pivot, adapt, and fine-tune your approach as some things will work well and others won't.
Episode 1: Bold, Responsible AI with Brigadier General (ret.) Patrick Huston
May 14, 2025
In this episode, Mahesh and leading anthropologist Professor Martin Holbraad of University College London’s Ethnographic Insights Lab unravel the transformative power of anthropology. Far more than just studying cultures, anthropology – through its deep ethnographic research – unveils a powerful...
Guest bio:
Episode 3 Part 1: Anthropology's Key to Robust AI with Professor Martin Holbraad
August 11, 2025. Episode time: 38:49 min
Bloomberg Intelligence - Tech Disrupters
Motorola Solutions EVP & CTO Mahesh Saptharishi joins Bloomberg Intelligence tech analyst Woo Jin Ho on the Tech Disruptors podcast to talk about how the company is deploying AI. He shares how Motorola Solutions is building AI into its hardware and software to unlock new capabilities—while putting guardrails in place to ensure responsible use. Its AI for public safety, Assist, is designed to boost productivity and bring automation, situational awareness, and real-time insights to first responders—where every second matters.
How Motorola Solutions Is Building Smarter Public Safety With AI: Tech Disruptors
June 3, 2025
Bloomberg Intelligence - Tech Disrupters
Motorola Solutions EVP & CTO Mahesh Saptharishi joins Bloomberg Intelligence tech analyst Woo Jin Ho on the Tech Disruptors podcast to talk about how the company is deploying AI. He shares how Motorola Solutions is building AI into its hardware and software to unlock new capabilities—while putting guardrails in place to ensure responsible use. Its AI for public safety, Assist, is designed to boost productivity and bring automation, situational awareness, and real-time insights to first responders—where every second matters.
How Motorola Solutions Is Building Smarter Public Safety With AI: Tech Disruptors
June 3, 2025
Episode 3, Part 1:
Ethnographic Research/Fieldwork: This is the methodological approach highlighted as crucial for uncovering ontological multiplicity. By conducting in-depth fieldwork with actual users, researchers can understand how tools are truly conceived and used in their everyday lives, often revealing perspectives that differ significantly from designers' initial assumptions.
Actor Network Theory: Latour’s actor-network theory states that human and non-human actors form shifting networks of relationships that define situations and determine outcomes.
It is a constructivist approach arguing that society, organizations, ideas and other key elements are shaped by the interactions between actors in diverse networks rather than having inherent fixed structures or meanings.
Camera conformity: When officers review body-worn camera footage before writing reports, they may unconsciously adjust their accounts to match what’s on video, omitting details they personally recall but aren’t visible in the footage.
Memory contamination: Exposure to AI-generated or external content can introduce errors into an officer’s memory, causing them to unintentionally overwrite or alter their own recollections with inaccurate information.
Cognitive offloading: Relying on AI to generate reports or recall details can reduce the need for officers to actively use their own memory, potentially weakening recall when they most need it.
Incident Reporting Tools and Police Officers: A prime example illustrating ontological multiplicity involves police incident reports. While developers might assume a report is solely a "record as faithful as possible of the officer's subjective recall," ethnographic research revealed that for officers, it is also a "performance of [their] professional competence," designed to convince a jury, promotions panel, or complaints panel of their effectiveness. This demonstrates how one "thing" (an incident report) can be ontologically multiple, serving different purposes simultaneously.
Anthropology’s Key to Robust AI with Professor Martin Holbraad
August 11, 2025
Episode 2:
Loss Prevention Research Council (LPRC): The Loss Prevention Research Council is an active community of researchers, retailers, solution partners, manufacturers, law enforcement professionals, and others who believe research and collaboration will lead to a safer world for shoppers and businesses.Public and Private Collaboration/Partnerships: The critical need for law enforcement and private enterprises (like retailers) to work together to address crime, especially in real-time information sharing.
Real-time Crime Integrations/Pre-crime Interventions: The goal of achieving immediate data exchange and proactive measures before and during a crime event, contrasting with traditional forensic, after-the-fact investigations.
The "Affect, Connect, Detect" Model: This core framework leverages the scientific method to understand and counter criminal activity.
Affect: This involves understanding the “initiation and progression” of a crime, similar to a medical pathology, and figuring out how to impact that progression to make it harder, riskier, or less rewarding for offenders.
Connect: This emphasizes information sharing and collaboration. It involves three levels: Connect1 (smart and connected places, enhancing a place manager's awareness), Connect2 (smart connected enterprises, sharing information between stores, e.g., "hot lists") and Connect3 (smart connected communities, partnering with law enforcement and other organizations beyond the enterprise).
Detect: The goal is earlier detection of criminal intent or activity. This is achieved by arraying sensors (digital, aural, visual, textual) to pick up indicators before, during and after a crime, such as online bragging or coordinating activities.
Safer Retail Experiences through AI with Dr. Read Hayes
June 30, 2025. Episode time: 37:03 min
In the second episode of Mahesh the Geek, Mahesh is joined by Dr. Read Hayes, executive director of the Loss Prevention Research Council (LPRC), as they explore the evolution of loss prevention, emphasizing the importance of prevention over response in public safety. They discuss the integration of technology, such as AI and body-worn cameras, in enhancing crime detection and prevention. The dialogue also highlights the significance of collaboration between retailers and law enforcement, the challenges of data sharing and the behavioral cues that can indicate potential criminal activity. Mahesh and Dr. Hayes also discuss insights into future trends in crime prevention and the role of technology in shaping these developments.
Read Hayes, PhD is a Research Scientist and Criminologist at the University of Florida, and Director of the LPRC. The LPRC includes 100 major retail corporations, multiple law enforcement agencies, trade associations and more than 170 protective solution/tech partners working together year-round in the field, in VR and in simulation labs with scientists and practitioners to increase people and place safety by reducing theft, fraud and violence. Dr. Hayes has authored four books and more than 320 journal and trade articles. Learn more about the LPRC and its extensive research:
https://lpresearch.org/research/
Add Read on LinkedIn
“A police report isn't merely a record; it can be a memory aid, a legal artifact, a shield, or even a performance.”
Join us as Mahesh welcomes back Professor Martin Holbraad, a leading anthropologist and director of the Ethnographic Insights Lab at University College London. This episode delves into the idea that anthropology offers more than cultural interpretation; it provides a radically different way of thinking about systems. Using frameworks like actor-network theory, you’re invited to rethink agency, not as something humans possess, but as something co-produced in the relationships between tools, practices, people and policies. This fundamentally changes how AI is understood, not as a "ghost in the machine," but as an active participant in dynamic, shifting networks where meaning, power and responsibility are constantly negotiated.
As Mahesh and Martin discuss, a police report isn't merely a record; it can be a memory aid, a legal artifact, a shield, or even a performance. These overlapping realities that exist within the same system are never neutral; they are shaped by power, pressure and purpose.
For those who seek to be on the cutting edge of innovation, anthropology reminds us that imagination is a method. Every system encodes assumptions about the world, and at Motorola Solutions, we believe those assumptions can always be questioned, rethought and reimagined to better serve the mission-critical needs of public safety professionals around the world.
Add Martin on LinkedIn
Episode 3 Part 2: Designing for Trust: Power, Policing and AI with Professor Martin Holbraad
August 26, 2025. Episode time: 1:02:19 min
In this episode, Mahesh and leading anthropologist Professor Martin Holbraad of University College London’s Ethnographic Insights Lab unravel the transformative power of anthropology. Far more than just studying cultures, anthropology – through its deep ethnographic research – unveils a powerful truth: to build truly effective AI, you must first understand the very fabric of human knowledge. This is more than just studying cultures; it’s about redefining what it means to design for a world where "AI changes the meaning of reports themselves" and the line between "where the machine ends and the person starts" blurs. This first part of a two-episode series plunges into the fundamental questions of human memory and how anthropological thinking can conquer the complex challenges of AI, as seen in Motorola Solutions and UCL’s groundbreaking work on police incident reporting. Discover why understanding "what kind of world you're building for" isn't just an essential first step, it’s the only step to crafting tools that truly serve humanity.
Martin Holbraad is Professor of Social Anthropology at University College London (UCL). He has conducted anthropological research in Cuba since the late 1990s, on the intersection of politics and ritual practices, producing works including Truth in Motion: The Recursive Anthropology of Cuban Divination (Chicago, 2012) and Shapes in Revolution: The Political Morphology of Cuban Life (Cambridge, 2025). He has made significant contributions to anthropological theory, including in his co-authored volume The Ontological Turn: An Anthropological Exposition (Cambridge, 2016). He is Director of the Ethnographic Insights Lab (EI-Lab), which he founded at UCL in 2020 as a consultancy dedicated to helping organizations better understand their customers and users, as well as themselves. EI-Lab’s tagline is “the problem is you don’t know what the problem is”.
Add Martin on LinkedIn
Episode 3 Part 1: Anthropology's Key to Robust AI with Professor Martin Holbraad
August 11, 2025. Episode time: 38:49 min
Episode 3, Part 1:
Ethnographic Research/Fieldwork: This is the methodological approach highlighted as crucial for uncovering ontological multiplicity. By conducting in-depth fieldwork with actual users, researchers can understand how tools are truly conceived and used in their everyday lives, often revealing perspectives that differ significantly from designers' initial assumptions.
Actor Network Theory: Latour’s actor-network theory states that human and non-human actors form shifting networks of relationships that define situations and determine outcomes. It is a constructivist approach arguing that society, organizations, ideas and other key elements are shaped by the interactions between actors in diverse networks rather than having inherent fixed structures or meanings.
Camera conformity: When officers review body-worn camera footage before writing reports, they may unconsciously adjust their accounts to match what’s on video, omitting details they personally recall but aren’t visible in the footage.
Memory contamination: Exposure to AI-generated or external content can introduce errors into an officer’s memory, causing them to unintentionally overwrite or alter their own recollections with inaccurate information.
Cognitive offloading: Relying on AI to generate reports or recall details can reduce the need for officers to actively use their own memory, potentially weakening recall when they most need it.
Incident Reporting Tools and Police Officers: A prime example illustrating ontological multiplicity involves police incident reports. While developers might assume a report is solely a "record as faithful as possible of the officer's subjective recall," ethnographic research revealed that for officers, it is also a "performance of [their] professional competence," designed to convince a jury, promotions panel, or complaints panel of their effectiveness. This demonstrates how one "thing" (an incident report) can be ontologically multiple, serving different purposes simultaneously.
Anthropology’s Key to Robust AI with Professor Martin Holbraad
August 11, 2025
"Will AGI (Artificial General Intelligence) be achieved faster by humans working with machines? I actually think that is true. And it's almost a mistake to think of machines as operating only by themselves."
In this episode, Mahesh speaks with Professor James Landay, a leading expert in human-computer interaction and co-founder of the Human-Centered AI Institute at Stanford. This episode explores the complexities of designing AI systems for high-stakes environments like public safety and enterprise security, a core focus for Motorola Solutions.
Professor Landay introduces the principles of human-centered AI, emphasizing its power to augment human capabilities rather than replace them. Discover how “human-AI collaboration can lead to superintelligence faster than AI acting alone.” The conversation also delves into the crucial shift from user-centered design to “community-centered and society-centered design,” acknowledging AI's broader impact beyond the immediate user.
Finally, Professor Landay shares invaluable advice for today's developers, underscoring the responsibility of “managing AI's benefits and harms through better design ethics and intentional collaboration,” particularly relevant for those building solutions for the safety and security of communities.
Add James on LinkedIn
Episode 4: Empathy, AI and the Future of Design with Professor James Landay
October 21, 2025. Episode time: 0:52:39 min
Professor Landay advocates for the discipline of Human-Computer Interaction (HCI) as one that sits at the intersection of art and science. Seen this way, the “art” of creative thinking and mobilizing the imagination to think through complex human problems meets the evaluative rigor of scientific inquiry.
User-Centered Design (UCD): The traditional design approach that focuses on the direct user. Although still needed, there is a growing imperative within the field to go beyond this individuating focus and think more deeply about the ripple effects of design beyond a single user into their broader communal and social realities.
Community-Centered Design: A shift from UCD, necessary because AI applications often impact a broader community beyond the direct user. This approach involves broadening the design lens to engage the community—whoever is impacted by the system—in the design process, including interviewing, observing and testing.
Society-Centered Design: The highest level of design consideration for systems that become ubiquitous and have societal level impacts. Achieving this level often requires involving disciplines from the social sciences, humanities and the arts in AI development teams.
Human Centered AI (HCAI): A philosophy and research principle centered on shaping AI in alignment with human values. Its core principles include emphasizing design and ethics from the beginning, during and after development, and augmenting human potential rather than replacing or reducing it.
Augmenting humans rather than replacing them: A core principle of HCAI that advocates for designing AIs to be symbiotic with people. The goal is to let people do the pieces of a job they are best at and enjoy, while having machines handle the parts that are repetitive, tedious or better suited for machines.
Human-AI Collaboration (Teaming): The key to improved performance, where a joint human-AI system performs better than either the AI or the human alone. This collaboration needs to be personalized, meaning the AI adapts to the human's strengths, and the human adapts their usage to the AI partner.
Superintelligence (through collaboration): The idea that intelligence has always been a collective, socially distributed phenomenon. As Landay puts it, every apparent leap of “superintelligence,” like putting a man on the moon, has been an emergent property of human cooperation. Extending that logic, if artificial superintelligence ever does emerge, it is unlikely to appear as a sudden, independent breakthrough. Instead, it will arise through the evolving collaboration between humans and AI systems, as a product of our shared sociotechnical networks, not a replacement for them.
AI Time: A metaphor used to describe the speed of progress in the AI industry, suggesting that AI time moves at 10x: five or ten normal Earth years happen in only one year of AI time.
Episode 3, Part 2:
Agency: In the context of actor-network theory, agency is not limited to humans but is distributed across the entire network of people and things. The term "actant" is used instead of "actor" to acknowledge that non-human elements can also have agency.
Symmetry: A principle in actor-network theory that suggests treating modern or "Western" societies with the same frameworks used to study other societies, which are often labeled as "non-modern" or "non-Western". Bruno Latour was very keen on this concept, which challenges the idea of "purifying" the world into distinct categories like nature and culture or things versus people.
Ontological Multiplicity: The idea that things are not just one thing but can have multiple, different, and even ambiguous meanings or constitutions across different social situations, spaces, and times. For example, an incident report can be a record of a memory, a performance of professional competence, a legal artifact and a shield. This concept suggests that a system can contain "overlapping realities".
Abduction: A form of reasoning that is neither deductive nor purely inductive. It involves coming up with the "best understanding given the facts" and making constant, reciprocal adjustments. Abduction is a way to navigate a dynamic system that is constantly evolving and unpredictable.
Designing for Trust: Power, Policing and AI with Professor Martin Holbraad
August 26, 2025