A Motorola Solutions Podcast
About Podcast
Mahesh Saptharishi is the Chief Technology Officer at Motorola Solutions, and a person obsessively interested in all things tech.
A geek is one who is knowledgeable and obsessed, and this podcast is Mahesh’s attempt to seek knowledge. Mahesh the Geek will delve into the core of mission-critical AI – technology that safeguards lives, communities and essential daily services. In this series, we'll be exploring the science, the challenges and the incredible potential of AI when it matters most. During each episode, Mahesh will talk to the experts and ask them one crucial question: what is mission-critical AI, and how is it shaping our future?
About Mahesh
Mahesh Saptharishi is Executive Vice President and Chief Technology Officer of Motorola Solutions. He is responsible for the company’s public safety software and video security & access control solutions, and he also leads the company’s Chief Technology Office.
Saptharishi joined Motorola Solutions in 2018 through the acquisition of Avigilon, a video security solutions company, where he served as Senior Vice President and Chief Technology Officer. Prior to Avigilon, he founded VideoIQ, a video analytics company that was acquired by Avigilon, as well as Broad Reach Security, which was later acquired by GE.
Saptharishi earned a doctoral degree in artificial intelligence from Carnegie Mellon University.
Mahesh Saptharishi
Executive Vice President and Chief Technology Officer,
Motorola Solutions, Inc.
Featured guest:
Patrick Huston
Brigadier General (ret.)
Episode 1: Bold, Responsible AI with Brigadier General (ret.) Patrick Huston
Guest Bio
In the debut episode of Mahesh the Geek, Mahesh is joined by Brigadier General (ret.) Patrick Huston, who shares his unique journey from military service to becoming a legal expert in AI and cybersecurity. Mahesh and Patrick delve into topics such as human-machine teaming, regulatory challenges, AI's impact on intellectual property law and fair use, the need for AI risk management standards, cyber risks, open source LLM security and licensing concerns, AI's transformation of work through automation, and the potential for escalating errors with AI agents. General Huston’s insights are grounded in his message for those considering leveraging AI: be bold, be responsible and be flexible.
General Huston is an engineer, a soldier, a helicopter pilot, a lawyer, a technologist and a corporate board member. He’s a renowned strategist, speaker and author on AI, cybersecurity and quantum computing. He is a Certified Director with the National Association of Corporate Directors, and he serves on the FBI’s Scientific Working Group on AI, the American Bar Association's (ABA) AI Task Force and the Association of Corporate Counsel’s (ACC) Cybersecurity Board.
May 14, 2025. Episode time: 44:30 min
Add Patrick on LinkedIn
Subscribe to Mahesh the Geek
Add Mahesh on LinkedIn
Appearances
Motorola Solutions EVP & CTO Mahesh Saptharishi joins Bloomberg Intelligence tech analyst Woo Jin Ho on the Tech Disruptors podcast to talk about how the company is deploying AI. He shares how Motorola Solutions is building AI into its hardware and software to unlock new capabilities—while putting guardrails in place to ensure responsible use. Its AI for public safety, Assist, is designed to boost productivity and bring automation, situational awareness, and real-time insights to first responders—where every second matters.
How Motorola Solutions Is Building Smarter Public Safety With AI: Tech Disruptors
Bloomberg Intelligence - Tech Disruptors
June 3, 2025
AI and human-centered design
In this episode of the User Friendly podcast, Mahesh Saptharishi, Executive Vice President and Chief Technology Officer, Motorola Solutions, and Nitin Mittal, Global AI Business Leader, Deloitte Consulting LLP, join host Hanish Patel to explore the future of AI and the new role of human-centered design. They discuss the opportunities and risks presented by generative AI, why establishing guardrails is a business imperative, and the impact of AI on the future of work, trust, and human-machine interaction.
Deloitte - User Friendly
May 25, 2023
Show Notes
Human and Machine Teaming / Human Augmentation: This concept involves combining the respective strengths of humans and machines. Humans excel at leadership, common sense, empathy and humor. Machines outperform humans at ingesting mass data, performing rapid computations and handling repetitive tasks where human attention wanes. The goal is not choosing between humans or machines, but leveraging the best of both worlds.
Key AI Principles: Fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability. These principles are generally universal, but their application and implementation vary significantly by country or region, as seen with Europe's stricter data privacy rules versus the United States' patchwork of state and local laws.
General Huston’s Advice for Adopting AI Applications:
Be bold: Leverage AI to remain competitive.
Be responsible: Understand and actively mitigate risks; AI is not a magic solution.
Be flexible: Be ready to pivot, adapt, and fine-tune your approach as some things will work well and others won't.
Episode 2: Safer retail experiences through AI with Dr. Read Hayes
Guest Bio
In the second episode of Mahesh the Geek, Mahesh is joined by Dr. Read Hayes, executive director of the Loss Prevention Research Council (LPRC), as they explore the evolution of loss prevention, emphasizing the importance of prevention over response in public safety. They discuss the integration of technology, such as AI and body-worn cameras, in enhancing crime detection and prevention. The dialogue also highlights the significance of collaboration between retailers and law enforcement, the challenges of data sharing and the behavioral cues that can indicate potential criminal activity. Mahesh and Dr. Hayes also discuss insights into future trends in crime prevention and the role of technology in shaping these developments.
Read Hayes, PhD is a Research Scientist and Criminologist at the University of Florida, and Director of the LPRC. The LPRC includes 100 major retail corporations, multiple law enforcement agencies, trade associations and more than 170 protective solution/tech partners working together year-round in the field, in VR and in simulation labs with scientists and practitioners to increase people and place safety by reducing theft, fraud and violence. Dr. Hayes has authored four books and more than 320 journal and trade articles. Learn more about the LPRC and its extensive research:
https://lpresearch.org/research/
June 30, 2025. Episode time: 37:03 min
Add Read on LinkedIn
Guest Bio
“A police report isn't merely a record; it can be a memory aid, a legal artifact, a shield, or even a performance.”
Join us as Mahesh welcomes back Professor Martin Holbraad, a leading anthropologist and director of the Ethnographic Insights Lab at University College London. This episode delves into the idea that anthropology offers more than cultural interpretation; it provides a radically different way of thinking about systems. Using frameworks like actor-network theory, you’re invited to rethink agency, not as something humans possess, but as something co-produced in the relationships between tools, practices, people and policies. This fundamentally changes how AI is understood, not as a "ghost in the machine," but as an active participant in dynamic, shifting networks where meaning, power and responsibility are constantly negotiated.
As Mahesh and Martin discuss, a police report isn't merely a record; it can be a memory aid, a legal artifact, a shield, or even a performance. These overlapping realities that exist within the same system are never neutral; they are shaped by power, pressure and purpose.
For those who seek to be on the cutting edge of innovation, anthropology reminds us that imagination is a method. Every system encodes assumptions about the world, and at Motorola Solutions, we believe those assumptions can always be questioned, rethought and reimagined to better serve the mission-critical needs of public safety professionals around the world.
Guest Bio
"Will AGI (Artificial General Intelligence) be achieved faster by humans working with machines? I actually think that is true. And it's almost a mistake to think of machines as operating only by themselves."
In this episode, Mahesh speaks with Professor James Landay, a leading expert in human-computer interaction and co-founder of the Stanford Institute for Human-Centered AI (HAI). This episode explores the complexities of designing AI systems for high-stakes environments like public safety and enterprise security, a core focus for Motorola Solutions.
Professor Landay introduces the principles of human-centered AI, emphasizing its power to augment human capabilities rather than replace them. Discover how “human-AI collaboration can lead to superintelligence faster than AI acting alone.” The conversation also delves into the crucial shift from user-centered design to “community-centered and society-centered design,” acknowledging AI's broader impact beyond the immediate user.
Finally, Professor Landay shares invaluable advice for today's developers, underscoring the responsibility of “managing AI's benefits and harms through better design ethics and intentional collaboration,” particularly relevant for those building solutions for the safety and security of communities.
"Will AGI (Artificial General Intelligence) be achieved faster by humans working with machines? I actually think that is true..."
Professor of Computer Science at Stanford University,
Co-Director Stanford Institute for Human-Centered AI (HAI)
James Landay
Leading Anthropologist and Director of the Ethnographic Insights Lab at University College London
United Kingdom
Martin Holbraad
The AI podcast by Motorola Solutions
Show Notes
Professor Landay situates the discipline of Human-Computer Interaction (HCI) at the intersection of art and science. Seen in this way, the “art” of creative thinking and mobilizing the imagination to think through complex human problems meets the evaluative rigor of scientific inquiry.
User-Centered Design (UCD): The traditional design approach that focuses on the direct user. Although still needed, there is a growing imperative within the field to go beyond this individuating focus and think more deeply about the ripple effects of design beyond a single user into their broader communal and social realities.
Community-Centered Design: A shift from UCD, necessary because AI applications often impact a broader community beyond the direct user. This approach involves broadening the design lens to engage the community—whoever is impacted by the system—in the design process, including interviewing, observing and testing.
Society-Centered Design: The highest level of design consideration for systems that become ubiquitous and have societal level impacts. Achieving this level often requires involving disciplines from the social sciences, humanities and the arts in AI development teams.
Human Centered AI (HCAI): A philosophy and research principle centered on shaping AI in alignment with human values. Its core principles include emphasizing design and ethics from the beginning, during and after development, and augmenting human potential rather than replacing or reducing it.
Augmenting humans rather than replacing them: A core principle of HCAI that advocates for designing AIs to be symbiotic with people. The goal is to let people do the pieces of a job they are best at and enjoy, while having machines handle the parts that are repetitive, tedious or better suited for machines.
Human-AI Collaboration (Teaming): The key to improved performance, where a joint human-AI system performs better than either the AI or the human alone. This collaboration needs to be personalized, meaning the AI adapts to the human's strengths, and the human adapts their usage to the AI partner.
Superintelligence (through collaboration): The idea that intelligence has always been a collective, socially distributed phenomenon. As Landay puts it, every apparent leap of “super intelligence” - like putting a man on the moon - has been an emergent property of human cooperation. Extending that logic, if artificial superintelligence ever does emerge, it’s unlikely to appear as a sudden, independent breakthrough. Instead, it will arise through the evolving collaboration between humans and AI systems - as a product of our shared sociotechnical networks, not a replacement for them.
AI Time: A metaphor used to describe the speed of progress in the AI industry, suggesting that AI time moves at 5–10X normal speed—meaning five or ten normal Earth years of progress happen in only one year of AI time.
Episode 4: Empathy, AI and the Future of Design with Professor James Landay
October 21, 2025
Add James on LinkedIn
Show Notes
Agency: In the context of actor-network theory, agency is not limited to humans but is distributed across the entire network of people and things. The term "actant" is used instead of "actor" to acknowledge that non-human elements can also have agency.
Symmetry: A principle in actor-network theory that suggests treating modern or "Western" societies with the same frameworks used to study other societies, which are often labeled as "non-modern" or "non-Western". Bruno Latour was very keen on this concept, which challenges the idea of "purifying" the world into distinct categories like nature and culture or things versus people.
Ontological Multiplicity: The idea that things are not just one thing but can have multiple, different, and even ambiguous meanings or constitutions across different social situations, spaces, and times. For example, an incident report can be a record of a memory, a performance of professional competence, a legal artifact and a shield. This concept suggests that a system can contain "overlapping realities".
Abduction: A form of reasoning that is neither deductive nor purely inductive. It involves coming up with the "best understanding given the facts" and making constant, reciprocal adjustments. Abduction is a way to navigate a dynamic system that is constantly evolving and unpredictable.
Episode 3 Part 2: Designing for Trust: Power, Policing and AI with Professor Martin Holbraad
August 26, 2025
Add Martin on LinkedIn
Guest Bio
In this episode, Mahesh and leading anthropologist Professor Martin Holbraad of University College London’s Ethnographic Insights Lab unravel the transformative power of anthropology. Far more than just studying cultures, anthropology – through its deep ethnographic research – unveils a powerful truth: to build truly effective AI, you must first understand the very fabric of human knowledge. It’s about redefining what it means to design for a world where "AI changes the meaning of reports themselves" and the line between "where the machine ends and the person starts" blurs. This first part of a two-episode series plunges into the fundamental questions of human memory and how anthropological thinking can conquer the complex challenges of AI, as seen in Motorola Solutions and UCL’s groundbreaking work on police incident reporting. Discover why understanding "what kind of world you're building for" isn't just an essential first step, it’s the only step to crafting tools that truly serve humanity.
Martin Holbraad is Professor of Social Anthropology at University College London (UCL). He has conducted anthropological research in Cuba since the late 1990s, on the intersection of politics and ritual practices, producing works including Truth in Motion: The Recursive Anthropology of Cuban Divination (Chicago, 2012) and Shapes in Revolution: The Political Morphology of Cuban Life (Cambridge, 2025). He has made significant contributions to anthropological theory, including in his co-authored volume The Ontological Turn: An Anthropological Exposition (Cambridge, 2016). He is Director of the Ethnographic Insights Lab (EI-Lab), which he founded at UCL in 2020 as a consultancy dedicated to helping organizations better understand their customers and users, as well as themselves. EI-Lab’s tagline is “the problem is you don’t know what the problem is.”
Add Martin on LinkedIn
Leading Anthropologist and Director of the Ethnographic Insights Lab at University College London
United Kingdom
Martin Holbraad
Show Notes
Ethnographic Research/Fieldwork: This is the methodological approach highlighted as crucial for uncovering ontological multiplicity. By conducting in-depth fieldwork with actual users, researchers can understand how tools are truly conceived and used in their everyday lives, often revealing perspectives that differ significantly from designers' initial assumptions.
Actor Network Theory: Latour’s actor-network theory states that human and non-human actors form shifting networks of relationships that define situations and determine outcomes. It is a constructivist approach arguing that society, organizations, ideas and other key elements are shaped by the interactions between actors in diverse networks rather than having inherent fixed structures or meanings.
Camera conformity: When officers review body-worn camera footage before writing reports, they may unconsciously adjust their accounts to match what’s on video, omitting details they personally recall but aren’t visible in the footage.
Memory contamination: Exposure to AI-generated or external content can introduce errors into an officer’s memory, causing them to unintentionally overwrite or alter their own recollections with inaccurate information.
Cognitive offloading: Relying on AI to generate reports or recall details can reduce the need for officers to actively use their own memory, potentially weakening recall when they most need it.
Incident Reporting Tools and Police Officers: A prime example illustrating ontological multiplicity involves police incident reports. While developers might assume a report is solely a "record as faithful as possible of the officer's subjective recall," ethnographic research revealed that for officers, it is also a "performance of [their] professional competence," designed to convince a jury, promotions panel, or complaints panel of their effectiveness. This demonstrates how one "thing" (an incident report) can be ontologically multiple, serving different purposes simultaneously.
Episode 3 Part 1: Anthropology's Key to Robust AI with Professor Martin Holbraad
August 11, 2025. Episode time: 38:49 min
General Counsel / Army General (ret.) / Board Member
Brigadier General (ret.) Patrick Huston
Research Scientist
University of Florida & Loss Prevention Research Council (LPRC)
Read Hayes
Show Notes
Loss Prevention Research Council (LPRC): The Loss Prevention Research Council is an active community of researchers, retailers, solution partners, manufacturers, law enforcement professionals, and others who believe research and collaboration will lead to a safer world for shoppers and businesses.
Public and Private Collaboration/Partnerships: The critical need for law enforcement and private enterprises (like retailers) to work together to address crime, especially in real-time information sharing.
Real-time Crime Integrations/Pre-crime Interventions: The goal of achieving immediate data exchange and proactive measures before and during a crime event, contrasting with traditional forensic, after-the-fact investigations.
The "Affect, Connect, Detect" Model: This core framework leverages the scientific method to understand and counter criminal activity.
Affect: This involves understanding the "initiation and progression" of a crime, similar to a medical pathology, and figuring out how to impact that progression to make it harder, riskier, or less rewarding for offenders.
Detect: The goal is earlier detection of criminal intent or activity. This is achieved by arraying sensors (digital, aural, visual, textual) to pick up indicators before, during and after a crime, such as online bragging or coordinating activities.
Connect: This emphasizes information sharing and collaboration. It involves three levels: Connect1 (smart and connected places, enhancing a place manager's awareness), Connect2 (smart connected enterprises, sharing information between stores, e.g., "hot lists"), and Connect3 (smart connected communities, partnering with law enforcement and other organizations beyond the enterprise).
"Will AGI (Artificial General Intelligence) be achieved faster by humans working with machines? I actually think that is true..."
Read more
"In the first of a two-part conversation, Mahesh welcomes Professor Krzysztof Gajos, lead of the Intelligent Interactive Systems Group at Harvard, to challenge the..."
Read more
Guest Bio
In the first of a two-part conversation, Mahesh welcomes Professor Krzysztof Gajos, lead of the Intelligent Interactive Systems Group at Harvard, to challenge the common assumption that human + AI is always better than either alone.
Professor Gajos takes us deep into the fascinating, messy problem space of human-AI collaboration, revealing these configurations to be inherently fragile and contingent. The discussion dissects how specific design failures—including over-reliance on incorrect advice, increased cognitive load, poorly conceived delegation models and poor interface design—can degrade decision quality, de-skill users, and create perverse incentive structures that ultimately undermine the very goals of the systems themselves.
Across this wide-ranging conversation, Professor Gajos emphasizes the need for worker-centric AI systems that prioritize human competence, learning and autonomy over what are all too often superficial efficiency gains. Discover why thoughtful AI design must start with a deep understanding of the cognitive work people actually perform.
Add Krzysztof on LinkedIn
Show Notes
Software Bloat: A phenomenon (observed around the early 2000s in software like Microsoft Office) where consumer software becomes so complex and feature-rich that people have trouble navigating it and use only a small subset of its overall capability.
Need for Cognition: A psychological concept that refers to an individual’s tendency to enjoy, seek out, or feel motivated by effortful cognitive tasks. The cited study found that people with a high need for cognition were more likely to use AI-generated shortcuts than those with a lower need for cognition.
Intervention Generated Inequalities: An unintended consequence where an intelligent user interface, which appears to make people on average more efficient, may increase the gap between users by providing a greater benefit to those who are already more successful (e.g., people with high need for cognition).
Cognitive Forcing: An intervention technique, often used in medical decision-making literature, that interrupts a person's decision-making process to nudge them toward more analytical, less heuristic thinking. In AI, this was explored by confronting a person's decision with the AI's opposing view and reasons.
Worker-centric AIs: A proposed goal for AI design that focuses on supporting things important to the person doing the work, such as their sense of competence (supporting learning on the job) and autonomy, as opposed to solely focusing on decision accuracy or efficiency.
Delegation Model (vs. Partnership Model): The discussion points out that the primary goal of much AI assistance today is efficiency, which constitutes a delegation model. This model "comes with a lot of unintended consequences," including the risk of de-skilling, unlike a potential partnership model that incorporates more domain understanding and better cognitive engagement.
Episode 5 Part 1: The Fragile Science of Human-AI Teams with Professor Krzysztof Gajos
December 16, 2025
Lead of the Intelligent Interactive Systems Group at Harvard
Krzysztof Gajos
