Your Undivided Attention

Tristan Harris and Aza Raskin, The Center for Humane Technology

In our podcast, Your Undivided Attention, co-hosts Tristan Harris, Aza Raskin, and Daniel Barcay explore the unprecedented power of emerging technologies: how they fit into our lives, and how they fit into a humane future. Join us every other Thursday as we confront challenges and explore solutions with a wide range of thought leaders and change-makers, like Audrey Tang on digital democracy, Nita Farahany on neurotechnology, Yuval Noah Harari on getting beyond dystopia, and Esther Perel on Artificial Intimacy: the other AI. Your Undivided Attention is produced by Executive Editor Sasha Fegan and Senior Producer Julia Scott. Our Researcher/Producer is Joshua Lash. We are a top tech podcast worldwide, with more than 20 million downloads, and a member of the TED Audio Collective.
Technology

Episodes

Why Are Migrants Becoming AI Test Subjects? With Petra Molnar
20-06-2024
Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that's expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is "The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence."

RECOMMENDED MEDIA
- "The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence": Petra's newly published book on the rollout of high-risk tech at the border.
- "Bots at the Gate": A report co-authored by Petra about Canada's use of AI technology in its immigration process.
- "Technological Testing Grounds": A report authored by Petra about the use of experimental technology in EU border enforcement.
- "Startup Pitched Tasing Migrants from Drones, Video Reveals": An article from The Intercept, containing the demo for Brinc's taser drone pilot program.
- The UNHCR: Information about the global refugee crisis from the UN.

RECOMMENDED YUA EPISODES
- War is a Laboratory for AI with Paul Scharre
- No One is Immune to AI Harms with Dr. Joy Buolamwini
- Can We Govern AI? with Marietje Schaake

CLARIFICATION: The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
07-06-2024
This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers. The writers of the open letter argue that researchers have a "right to warn" the public about AI risks, and they laid out a series of principles that would protect that right.

In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

RECOMMENDED MEDIA
- The Right to Warn Open Letter
- "My Perspective On 'A Right to Warn about Advanced Artificial Intelligence'": A follow-up from William about the letter.
- "Leaked OpenAI documents reveal aggressive tactics toward former employees": An investigation by Vox into OpenAI's policy of non-disparagement.

RECOMMENDED YUA EPISODES
- A First Step Toward AI Regulation with Tom Wheeler
- Spotlight on AI: What Would It Take For This to Go Well?
- Big Food, Big Tech and Big AI with Michael Moss
- Can We Govern AI? with Marietje Schaake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
War is a Laboratory for AI with Paul Scharre
23-05-2024
Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path?

In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

RECOMMENDED MEDIA
- "Four Battlegrounds: Power in the Age of Artificial Intelligence": Paul's book on the future of AI in war, which came out in 2023.
- "Army of None: Autonomous Weapons and the Future of War": Paul's 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.
- "The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare": Paul's article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.
- "The night the world almost ended": A BBC documentary about Stanislav Petrov's decision not to start nuclear war.
- AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and a human pilot. The AI pilot swept, 5-0.
- "'Lavender': The AI machine directing Israel's bombing spree in Gaza": An investigation into the use of AI targeting systems by the IDF.

RECOMMENDED YUA EPISODES
- The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
- Can We Govern AI? with Marietje Schaake
- Big Food, Big Tech and Big AI with Michael Moss
- The Invisible Cyber-War with Nicole Perlroth
Chips Are the Future of AI. They’re Also Incredibly Vulnerable. With Chris Miller
29-03-2024
Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy.

Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors, but it won't ship until later this year.

RECOMMENDED MEDIA
- "Chip War: The Fight For the World's Most Critical Technology" by Chris Miller: To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips.
- Gordon Moore Biography & Facts: Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023.
- "AI's most popular chipmaker Nvidia is trying to use AI to design chips faster": Nvidia's GPUs are in high demand, and the company is using AI to accelerate chip production.

RECOMMENDED YUA EPISODES
- Future-proofing Democracy In the Age of AI with Audrey Tang
- How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
- The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
- Protecting Our Freedom of Thought with Nita Farahany
U.S. Senators Grilled Social Media CEOs. Will Anything Change?
13-02-2024
Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments, including Mark Zuckerberg's public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing and offers a look ahead. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

Clarification: Julie says that shortly after the hearing, Meta's stock price had the biggest increase of any company in the stock market's history. It was the biggest one-day gain by any company in Wall Street history.

Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

RECOMMENDED MEDIA
- Get Media Savvy: Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families.
- "The Power of One" by Frances Haugen: The inside story of Frances's quest to bring transparency and accountability to Big Tech.

RECOMMENDED YUA EPISODES
- Real Social Media Solutions, Now with Frances Haugen
- A Conversation with Facebook Whistleblower Frances Haugen
- Are the Kids Alright?
- Social Media Victims Lawyer Up with Laura Marquez-Garrett
Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei
18-01-2024
We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI (summoning an inanimate force with the powers of code) sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

Correction: Josh says the first telling of "The Sorcerer's Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

RECOMMENDED MEDIA
- The Emerald podcast: The Emerald explores the human experience through a vibrant lens of myth, story, and imagination.
- Embodied Ethics in The Age of AI: A five-part course with The Emerald podcast's Josh Schrei and the School of Wise Innovation's Andrew Dunn.
- "Nature Nurture: Children Can Become Stewards of Our Delicate Planet": A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals.
- "The New Fire": AI is revolutionizing the world; here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order.

RECOMMENDED YUA EPISODES
- How Will AI Affect the 2024 Elections?
- The AI Dilemma
- The Three Rules of Humane Tech
- AI Myths and Misconceptions
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
21-12-2023
2024 will be the biggest election year in world history. Forty countries will hold national elections, with over two billion voters heading to the polls. In this episode of Your Undivided Attention, two experts give us a situation report on how AI will increase the risks to our elections and our democracies.

Correction: Tristan says two billion people from 70 countries will be undergoing democratic elections in 2024. Forty countries will hold national elections; the number expands to 70 when non-national elections are factored in.

RECOMMENDED MEDIA
- "White House AI Executive Order Takes On Complexity of Content Integrity Issues": Renee DiResta's piece in Tech Policy Press about content integrity within President Biden's AI executive order.
- The Stanford Internet Observatory: A cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies, with a focus on social media.
- Demos: Britain's leading cross-party think tank.
- "Invisible Rulers: The People Who Turn Lies into Reality" by Renee DiResta: Pre-order Renee's upcoming book, landing on shelves June 11, 2024.

RECOMMENDED YUA EPISODES
- The Spin Doctors Are In with Renee DiResta
- From Russia with Likes Part 1 with Renee DiResta
- From Russia with Likes Part 2 with Renee DiResta
- Esther Perel on Artificial Intimacy
- The AI Dilemma
- A Conversation with Facebook Whistleblower Frances Haugen
2023 Ask Us Anything
30-11-2023
You asked, we answered. This has been a big year in the world of tech, with the rapid proliferation of artificial intelligence, the acceleration of neurotechnology, and continued ethical missteps of social media. Looking back on 2023, there are still so many questions on our minds, and we know you have a lot of questions too. So we created this episode to respond to listener questions and to reflect on what lies ahead.

Correction: Tristan mentions that 41 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42 Attorneys General.

Correction: Tristan refers to Casey Mock as the Center for Humane Technology's Chief Policy and Public Affairs Manager. His title is Chief Policy and Public Affairs Officer.

RECOMMENDED MEDIA
- Tech Policy Watch: Marietje Schaake curates this briefing on artificial intelligence and technology policy from around the world.
- The AI Executive Order: President Biden's executive order on the safe, secure, and trustworthy development and use of AI.
- "Meta sued by 42 AGs for addictive features targeting kids": A bipartisan group of 42 attorneys general is suing Meta, alleging features on Facebook and Instagram are addictive and are aimed at kids and teens.

RECOMMENDED YUA EPISODES
- The Three Rules of Humane Tech
- Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
- Inside the First AI Insight Forum in Washington
- Digital Democracy is Within Reach with Audrey Tang
- The Tech We Need for 21st Century Democracy with Divya Siddarth
- Mind the (Perception) Gap with Dan Vallone
- The AI Dilemma
- Can We Govern AI? with Marietje Schaake
- Ask Us Anything: You Asked, We Answered
The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish
21-11-2023
As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora's box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?

Correction: When discussing the large language model BLOOM, Elizabeth said it functions in 26 different languages. BLOOM is actually able to generate text in 46 natural languages and 13 programming languages, and more are in the works.

RECOMMENDED MEDIA
- "Open-Sourcing Highly Capable Foundation Models": This report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of the risks and benefits of open-sourcing AI.
- "BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B": This paper, co-authored by Jeffrey Ladish, demonstrates that it's possible to effectively undo the safety fine-tuning of Llama 2-Chat 13B for less than $200 while retaining its general capabilities.
- Centre for the Governance of AI: Supports governments, technology companies, and other key institutions by producing relevant research and guidance around how to respond to the challenges posed by AI.
- AI: Futures and Responsibility (AI:FAR): Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity.
- Palisade Research: Studies the offensive capabilities of today's AI systems to better understand the risk of losing control to AI systems forever.

RECOMMENDED YUA EPISODES
- A First Step Toward AI Regulation with Tom Wheeler
- No One is Immune to AI Harms with Dr. Joy Buolamwini
- Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
- The AI Dilemma
A First Step Toward AI Regulation with Tom Wheeler
02-11-2023
On Monday, Oct. 30, President Biden released a sweeping executive order that addresses many risks of artificial intelligence. Tom Wheeler, former chairman of the Federal Communications Commission, shares his insights on the order with Tristan and Aza and discusses what's next in the push toward AI regulation.

Clarification: When quoting Thomas Jefferson, Aza incorrectly says "regime" instead of "regimen." The correct quote is: "I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. And as that becomes more developed, more enlightened, as new discoveries are made, new truths discovered, and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors."

RECOMMENDED MEDIA
- The AI Executive Order: President Biden's executive order on the safe, secure, and trustworthy development and use of AI.
- UK AI Safety Summit: The summit brings together international governments, leading AI companies, civil society groups, and experts in research to consider the risks of AI and discuss how they can be mitigated through internationally coordinated action.
- aitreaty.org: An open letter calling for an international AI treaty.
- "Techlash: Who Makes the Rules in the Digital Gilded Age?": Praised by Kirkus Reviews as "a rock-solid plan for controlling the tech giants," readers will be energized by Tom Wheeler's vision of digital governance.

RECOMMENDED YUA EPISODES
- Inside the First AI Insight Forum in Washington
- Digital Democracy is Within Reach with Audrey Tang
- The AI Dilemma
No One is Immune to AI Harms with Dr. Joy Buolamwini
26-10-2023
In this interview, Dr. Joy Buolamwini argues that algorithmic bias in AI systems poses risks to marginalized people. She challenges the assumptions of tech leaders who advocate for AI "alignment" and explains why some tech companies are hypocritical when it comes to addressing bias. Dr. Joy Buolamwini is the founder of the Algorithmic Justice League and the author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines."

Correction: Aza says that Sam Altman, the CEO of OpenAI, predicts superintelligence in four years. Altman predicts superintelligence in ten years.

RECOMMENDED MEDIA
- "Unmasking AI" by Joy Buolamwini: "The conscience of the AI revolution" explains how we've arrived at an era of AI harms and oppression, and what we can do to avoid its pitfalls.
- "Coded Bias": Shalini Kantayya's film explores the fallout of Dr. Joy's discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
- "How I'm fighting bias in algorithms": Dr. Joy's 2016 TED Talk about her mission to fight bias in machine learning, a phenomenon she calls the "coded gaze."

RECOMMENDED YUA EPISODES
- Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
- Protecting Our Freedom of Thought with Nita Farahany
- The AI Dilemma
Inside the First AI Insight Forum in Washington
19-09-2023
Last week, Senator Chuck Schumer brought together Congress and many of the biggest names in AI for the first closed-door AI Insight Forum in Washington, D.C. Tristan and Aza were invited speakers at the event, along with Elon Musk, Satya Nadella, Sam Altman, and other leaders. In this update on Your Undivided Attention, Tristan and Aza recount how they felt the meeting went, what they communicated in their statements, and what it felt like to critique Meta's LLM in front of Mark Zuckerberg.

Correction: In this episode, Tristan says GPT-3 couldn't find vulnerabilities in code. GPT-3 could find security vulnerabilities, but GPT-4 is far better at it.

RECOMMENDED MEDIA
- "In Show of Force, Silicon Valley Titans Pledge 'Getting This Right' With A.I.": Elon Musk, Sam Altman, Mark Zuckerberg, Sundar Pichai, and others discussed artificial intelligence with lawmakers as tech companies strive to influence potential regulations.
- Majority Leader Schumer's Opening Remarks for the Senate's Inaugural AI Insight Forum: Senate Majority Leader Chuck Schumer (D-NY) opened the Senate's inaugural AI Insight Forum.
- The Wisdom Gap: As seen in Tristan's 2022 talk on this subject, the scope and speed of our world's issues are accelerating and growing more complex, and yet our ability to comprehend those challenges and respond accordingly is not keeping pace.

RECOMMENDED YUA EPISODES
- Spotlight on AI: What Would It Take For This to Go Well?
- The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
- Spotlight: Elon, Twitter and the Gladiator Arena
Spotlight on AI: What Would It Take For This to Go Well?
12-09-2023
Where do the top Silicon Valley AI researchers really think AI is headed? Do they have a plan if things go wrong? In this episode, Tristan Harris and Aza Raskin reflect on the last several months of highlighting AI risk, and share their insider takes on a high-level workshop run by CHT in Silicon Valley.

NOTE: Tristan refers to journalist Maria Ressa and mentions that she received 80 hate messages per hour at one point. She actually received more than 90 messages an hour.

RECOMMENDED MEDIA
- "Musk, Zuckerberg, Gates: The titans of tech will talk AI at private Capitol summit": This week will feature a series of public hearings on artificial intelligence, but all eyes will be on the closed-door gathering convened by Senate Majority Leader Chuck Schumer.
- "Takeaways from the roundtable with President Biden on artificial intelligence": Tristan Harris talks about his recent meeting with President Biden to discuss regulating artificial intelligence.
- "Biden, Harris meet with CEOs about AI risks": Vice President Kamala Harris met with the heads of Google, Microsoft, Anthropic, and OpenAI as the Biden administration rolled out initiatives meant to ensure that AI improves lives without putting people's rights and safety at risk.

RECOMMENDED YUA EPISODES
- The AI Dilemma
- The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
- The Dictator's Playbook with Maria Ressa