Thorn

Non-profit Organizations

Manhattan Beach, CA 34,383 followers

About us

Our children are growing up in a digital world. A dramatic shift in how and where young people connect, learn, and play has left parents and communities feeling helpless and uncertain about how to protect them. Predators are taking advantage of this new reality, exploiting children in digital spaces and creating new vulnerabilities for them. This global public health crisis has devastating impacts on children everywhere, while parents and caregivers struggle to navigate this new digital landscape and keep their kids safe. At Thorn, we envision a world where cutting-edge technology is proactively developed to defend children. We are an innovative nonprofit that transforms how children are protected from sexual abuse and exploitation in the digital age through cutting-edge technology, original research, and collaborative partnerships. We empower platforms and people who can protect children with the tools and resources to do so, using the power of technology to secure every kid’s right to childhood joy.

Website
http://www.thorn.org
Industry
Non-profit Organizations
Company size
51-200 employees
Headquarters
Manhattan Beach, CA
Type
Nonprofit
Founded
2012
Specialties
technology innovation and child sexual exploitation

Locations

Employees at Thorn

Updates

  • Thorn

    CBS Evening News aired a segment Friday on a difficult and important story out of Louisiana involving AI-generated nude images of a young teen. It’s a heartbreaking case that highlights how easily generative AI can be misused to sexually harm a child. Thorn’s VP of Data Science & AI, Dr. Rebecca Portnoff, joined the segment to share our expertise, and Thorn’s recent research about deepfake nudes was also featured to provide additional context. These stories are complex and hard to comprehend – but together, we can build the awareness, technology, and safeguards needed to protect every child in a rapidly changing digital world. You can watch the CBS segment here: https://lnkd.in/dXjVNzVp

  • Thorn

When Thorn launched in 2012, we believed technology could be used to defend children instead of endanger them. Every year since, we’ve made technical innovations that have advanced child safety. 2025 is no different. To celebrate our 13th birthday, we’re proud to announce our new AI-powered grooming detection for text conversations in English and Spanish. Grooming in today’s digital world often begins in conversations that seem friendly or caring, where trust is built only to be exploited. Thorn’s new grooming detection feature in Safer Predict helps platforms identify signs of grooming and take action, which can mean protecting children from further harm. This milestone was made possible by our donors, partners, and community, who believe technology should always be a force for good. 🎉 What a way to celebrate turning 13. 🎉 #Thorn #TechThatProtects

  • Thorn

Public awareness of online grooming is growing, but most people still underestimate how common and manipulative it can be. Thorn’s research shows that 40% of young people have been approached online by someone trying to “befriend and manipulate” them, and over half (54%) believe grooming is a somewhat common experience for kids their age. Today’s children are growing up in digital spaces where trust can be exploited: spaces where someone mirrors their interests, encourages secrecy, and shifts conversations to private chats. The best protection is informed adults. When we understand how grooming happens, we can recognize red flags early and create the kind of trust that keeps kids safe. Explore how grooming works, and what every parent, caregiver, and ally can do to stop it before it starts.

  • Thorn reposted this from Raven

    💥 🏛️ Emailing Congress TODAY can ensure the continued authorization of the national ICAC Task Force program and help protect kids from online predators. The Protect Our Children Act funds the ICAC Task Forces that identify predators, rescue children, and prevent exploitation online. Without this funding, investigators lose critical tools and staffing while children lose lifesaving protection. Chairman Jim Jordan has the ability to ensure this vital protection passes. Your voice can be the tipping point; negotiations in D.C. end today! Every contact matters. When constituents speak up, leaders listen. 👉 Email Chairman Jim Jordan's office and urge support for the Protect Our Children Act: Gregory Salavec, Legislative Correspondent – gregory.salavec@mail.house.gov Click here for sample email scripts: https://lnkd.in/efa6qynK Your action today could help save a child's life. We need your help now! #ProtectAct #ChildSafety #ICAC #ProtectKidsOnline #EndExploitation #PublicSafety #BipartisanAction

  • Thorn

    In this episode of Safe Space, Lauren Haber Jonas, Head of Youth Wellbeing at OpenAI, shares how her journey from entrepreneur to trust and safety leader has shaped the way she builds for impact. Lauren’s work requires her to balance fast-moving technology with a deep understanding of the people it serves. Her path reflects what it takes to lead in this space: curiosity, collaboration, and the courage to build solutions that protect children while empowering them to thrive online. 🎧 Watch the full conversation and hear how leaders like Lauren are shaping the future of youth wellbeing and digital safety. #SafeSpacePodcast #TechForGood

  • Thorn

    Michigan recently became the 48th state to pass laws criminalizing harmful deepfakes, specifically targeting the non-consensual creation of AI-generated sexual imagery. It’s now a misdemeanor in Michigan to create or share these kinds of deepfakes, escalating to a felony in cases of harassment, financial harm, or intent to profit. Why this matters: - Survivors of deepfake abuse have long been clear that sexual exploitation is an increasingly common use of this technology. - These laws acknowledge that creating such content is abuse, not a prank. - Every state but two (Missouri and New Mexico) now has legislation addressing deepfakes, showing how urgently lawmakers recognize the harm. Survivors deserve holistic protection from deepfake abuse, and this is progress worth spotlighting. Read more here: https://lnkd.in/ehnXpphn

  • Thorn

    New research from More in Common includes a chart that stopped us in our tracks. 📊 Parents say they’re more concerned about online safety than any other issue related to children’s safety, including mental health, road safety, and even climate change. That tells us something important: online safety is no longer a niche issue. It’s a critical parenting priority. If you’re feeling that same concern, you’re not alone; parents everywhere are navigating this same reality. Read the report to find out what actions parents want to see to better protect their kids: https://lnkd.in/gtmsZDs3 Together, we can make the internet safer for every child. #KeepKidsSafeOnline #ParentingInTheDigitalAge

    • Graph titled "Parents are more concerned about online safety than any other issue related to children's safety." It displays various issues on a scatter plot, ranging from 40% to 80% on the Y-axis for how serious the concern is. Notable points include "LGBTQ+ rights and equality" at around 55%, "Gender equality" at about 60%, "Refugee children's safety" near 60%, "Children's access to good education" slightly over 60%, "Children's mental health and wellbeing" near 65%, "Children's safety online and digital privacy" close to 93%, and "Prevention of child exploitation" around 75%. Data sourced from More in Common survey of 2,073 US parents conducted in June 2025.
  • Thorn

    The crisis of AI-generated child sexual abuse material (AIG-CSAM) is only becoming more urgent as the technology advances. The people trying to stop it need appropriate legal protections to stay ahead of perpetrators. Thorn was recently featured in Tech Policy Press on the legal roadblocks slowing efforts to make AI safer for kids. Many big tech companies have committed to following our Safety by Design principles, which include red teaming for AIG-CSAM. We believe red teaming is one of the most important tools developers have to prevent harm. But without clear legal protections, the teams doing this work are operating under a cloud of risk. Even the proposed solutions require careful legal navigation. Read the article to see what’s at stake: https://lnkd.in/dQBEa6FX

Affiliated pages

Similar pages

Browse jobs

Funding

Thorn 2 total rounds

Last Round

Grant

US$ 345.0K

See more info on Crunchbase