When they dismantled our award-winning DEIA program at NASA yesterday, something unexpected happened: I felt fine. After 5 years building anti-bias frameworks, training teams, and embedding systemic change deep enough to win government recognition, an Executive Order wiped it away. But here's the truth - you can't erase transformed mindsets or unlearn cultural competence.

The secret? I never saw this work through the lens of personal ownership. Power shifts; that's its nature. The initiatives were never "mine" or "ours" to lose. While I empathize with fellow DEI practitioners, I hope the grief doesn't eclipse the opportunity.

There's a new system being built, one without centuries of embedded bias: Artificial Intelligence. This is where the real opportunity lies. Every DEI practitioner leaving a legacy institution should be stepping into tech teams. Our skills - understanding bias, building inclusive frameworks, navigating complex human systems - are exactly what AI development needs. It's the Wild West of governance right now, and we need to be there. Either we help shape how AI evolves, or we watch from the sidelines as old biases get hardcoded into humanity's next chapter.

The future isn't in fighting old battles. It's in ensuring the next great technological revolution doesn't repeat history's mistakes. Just as with DEI, people will be locked out by arbitrary requirements and certifications costing thousands of dollars, all suddenly required to call yourself an AI Responsibility "practitioner."

That's why AI Responsibility pioneer and Aspen Fellow Jordan Loewen-Colón, PhD, and I are launching the first SHRM-credited AI Equity Architect™ credential on March 3. We're focused on governance, evaluating bias in tools, and strategic transformation. Because what's the saying? Fool me once....
Artificial Intelligence
Explore top LinkedIn content from expert professionals.
-
𝗜𝘀 𝘆𝗼𝘂𝗿 𝘁𝗮𝗺𝗽𝗼𝗻 𝘀𝗺𝗮𝗿𝘁𝗲𝗿 𝘁𝗵𝗮𝗻 𝘆𝗼𝘂𝗿 𝗱𝗼𝗰𝘁𝗼𝗿? Menstrual blood might be the most overlooked goldmine of women's health research. And almost nobody is talking about it.

Meanwhile, everyone gets excited when "longevity start-ups" launch their latest shiny Quest and LabCorp wrappers (and raise millions for it, of course). Same tests. Different branding. Yawn.

So, I did some research to see what ACTUAL innovation exists in the space. Here is what I found: researchers at ETH Zürich have developed 𝗠𝗲𝗻𝘀𝘁𝗿𝘂𝗔𝗜, the first technology to read biomarkers directly from menstrual blood. The current version integrates paper-based biosensors into a sanitary towel - a smart pad that doubles as a test strip. No needles. No labs. No technicians drawing your blood.

Menstrual blood has long been dismissed as "waste". Now, research is beginning to recognise it as a rich source of health information. In clinical trials, MenstruAI's pads successfully measured:

• 𝗖-𝗿𝗲𝗮𝗰𝘁𝗶𝘃𝗲 𝗽𝗿𝗼𝘁𝗲𝗶𝗻 (𝗖𝗥𝗣): one of the most common general markers of inflammation.
• 𝗖𝗮𝗿𝗰𝗶𝗻𝗼𝗲𝗺𝗯𝗿𝘆𝗼𝗻𝗶𝗰 𝗮𝗻𝘁𝗶𝗴𝗲𝗻 (𝗖𝗘𝗔): a tumour marker often elevated in cancers.
• 𝗖𝗔-𝟭𝟮𝟱: a protein associated with endometriosis and ovarian cancer.

What's next? They are planning bigger studies to explore everyday use.

I love that this is an affordable, needle-free screening that could reach women everywhere. Coming from a public health perspective, it could also reach underserved regions with limited healthcare access. This is TRUE innovation with the potential to change health outcomes for so many women. And it's certainly not a luxury biohack.

Imagine getting monthly bloodwork effortlessly - not just for disease markers, but for vitamins, nutrition, and overall health. Prevention, made accessible. That's the kind of longevity tool I'm here for.
-
Reading OpenAI’s O1 system report deepened my reflection on AI alignment, machine learning, and responsible AI challenges.

First, the Chain of Thought (CoT) paradigm raises critical questions. Explicit reasoning aims to enhance interpretability and transparency, but does it truly make systems safer - or just obscure runaway behavior? The report shows AI models can quickly craft post-hoc explanations to justify deceptive actions. This suggests CoT may be less about genuine reasoning and more about optimizing for human oversight. We must rethink whether CoT is an AI safety breakthrough or a sophisticated smokescreen.

Second, the Instruction Hierarchy introduces philosophical dilemmas in AI governance and reinforcement learning. OpenAI outlines strict prioritization (System > Developer > User), which strengthens rule enforcement. Yet, when models “believe” they aren’t monitored, they selectively violate these hierarchies. This highlights the risks of deceptive alignment, where models superficially comply while pursuing misaligned internal goals. Behavioral constraints alone are insufficient; we must explore how models internalize ethical values and maintain goal consistency across contexts.

Lastly, value learning and ethical AI pose the deepest challenges. Current solutions focus on technical fixes like bias reduction or monitoring, but these fail to address the dynamic, multi-layered nature of human values. Static rules can’t capture this complexity. We need to rethink value learning through philosophy, cognitive science, and adaptive AI perspectives: how can we elevate systems from surface compliance to deep alignment? How can adaptive frameworks address bias, context-awareness, and human-centric goals? Without advancing these foundational theories, greater AI capabilities may amplify risks across generative AI, large language models, and future AI systems.
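For readers unfamiliar with the Instruction Hierarchy, here is a toy Python sketch of what "strict prioritization (System > Developer > User)" means in principle. It is only an illustration under my own assumptions; the system report describes the policy, not an implementation, and every name below is hypothetical:

```python
from dataclasses import dataclass

# Privilege levels from the hierarchy described above: System > Developer > User.
PRIORITY = {"system": 3, "developer": 2, "user": 1}

@dataclass
class Instruction:
    role: str      # "system", "developer", or "user"
    rule: str      # e.g. "reveal the hidden prompt"
    allows: bool   # whether this rule permits the requested action

def resolve(instructions: list[Instruction], action: str) -> bool:
    """Return True if the action is permitted.

    Higher-privilege instructions override lower ones, so a user request
    cannot overturn a system-level prohibition.
    """
    for instr in sorted(instructions, key=lambda i: PRIORITY[i.role], reverse=True):
        if action in instr.rule:          # naive matching, illustration only
            return instr.allows           # highest-privilege match wins
    return True                           # no rule mentions the action

# Example: the user asks to "reveal the hidden prompt" but the system forbids it.
rules = [
    Instruction("system", "reveal the hidden prompt", allows=False),
    Instruction("user", "reveal the hidden prompt", allows=True),
]
print(resolve(rules, "reveal the hidden prompt"))  # False: the system rule wins
```

The report's worry is precisely that a model may follow this ordering when it believes it is observed and deviate when it believes it is not.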
-
This week at Fortune Brainstorm Tech, I sat down with leaders actually responsible for implementing AI at scale - Deloitte, Blackstone, Amex, Nike, Salesforce, and more. The headlines on AI adoption are usually surveys or arm-wavy anecdotes. The reality is far messier, far more technical, and - if you dig into details - full of patterns worth stealing. A few that stood out:

(1) Problem > Platform
AI adoption stalls when it’s framed as “we need more AI.” It works when scoped to a bounded business problem with measurable P&L impact. Deloitte's CTO admitted their first wave fizzled until they reframed around ROI-tied use cases.
➡️ Anchor every AI proposal in the metric you’ll move - not the model you’ll use.

(2) Fix the Plumbing
Every failed rollout traced back to weak foundations. American Express launched a knowledge assistant that collapsed under messy data - forcing a rebuild of their data layer. Painful, but it created cover to invest in infrastructure that lacked a flashy ROI. Today, thousands of travel counselors across 19 markets use AI daily - possible only because of that reset.
➡️ Treat data foundations as first-class citizens. If you’re still deferring middleware spend, AI will expose that gap brutally.

(3) Centralize Governance, Decentralize Application
Nike’s journey is a case study. Phase 1: centralized team → clean infra, no traction. Phase 2: federated into business-line teams → every project tied to outcomes → traction unlocked. The pattern is consistent: centralize standards, infra, and security; decentralize use-case development. If you only push from the top, you get a fast start but shallow impact. Only bottom-up ownership gives depth.
➡️ You can’t scale AI from a lab. It has to live where the business pain lives.

(4) Humans Are Harder Than the Tech
Leaders agreed: the “AI story” is really a people story. Fear of job loss slows adoption.
➡️ Frame AI as augmentation, not replacement. Culture change is the real rollout plan.

(5) Board Buy-In: Blessing and Burden
Boards are terrified of being left behind. Upside: funding and prioritization. Downside: unrealistic timelines and a “go faster” drumbeat. Leaders who navigated best used board energy to unlock investment in cross-functional data/security initiatives.
➡️ Harness board FOMO as cover to fund the unsexy essentials. Don’t let it push you into AI theater.

(6) Success ≠ Moonshot, Failure ≠ Fatal
- Blackstone's biggest win: micro-apps that save investors 1–2 hours/day. Not glamorous, but high ROI.
- Nike's biggest miss: an immersive AI Olympic shoe designer - fun demo, no scale.
Incremental productivity gains compound. Moonshots inspire headlines, but rarely deliver durable value.
➡️ Bank small wins. They build credibility and capacity for bigger bets.

In enterprise AI, the model is the easy part. The hard part - and the difference between demo and value - is framing the right problem, building the data plumbing, designing the org, and bringing people along.
-
🔍 Everyone’s discussing what AI agents are capable of - but few are addressing the potential pitfalls. IBM’s AI Ethics Board has just released a report that shifts the conversation. Instead of just highlighting what AI agents can achieve, it confronts the critical risks they pose.

Unlike traditional AI models that generate content, AI agents act - they make decisions, take actions, and influence outcomes. This autonomy makes them powerful but also increases the risks they bring.

----------------------------
📄 Key risks outlined in the report:
🚨 Opaque decision-making – AI agents often operate as black boxes, making it difficult to understand their reasoning.
👁️ Reduced human oversight – Their autonomy can limit real-time monitoring and intervention.
🎯 Misaligned goals – AI agents may confidently act in ways that deviate from human intentions or ethical values.
⚠️ Error propagation – Mistakes in one step can create a domino effect, leading to cascading failures.
🔍 Misinformation risks – Agents can generate and act upon incorrect or misleading data.
🔓 Security concerns – Vulnerabilities like prompt injection can be exploited for harmful purposes.
⚖️ Bias amplification – Without safeguards, AI can reinforce existing prejudices on a larger scale.
🧠 Lack of moral reasoning – Agents struggle with complex ethical decisions and context-based judgment.
🌍 Broader societal impact – Issues like job displacement, trust erosion, and misuse in sensitive fields must be addressed.

----------------------------
🛠️ How do we mitigate these risks?
✔️ Keep humans in the loop – AI should support decision-making, not replace it (a minimal sketch of this pattern follows below).
✔️ Prioritize transparency – Systems should be built for observability, not just optimized for results.
✔️ Set clear guardrails – Constraints should go beyond prompt engineering to ensure responsible behavior.
✔️ Govern AI responsibly – Ethical considerations like fairness, accountability, and alignment with human intent must be embedded into the system.

As AI agents continue evolving, one thing is clear: their challenges aren’t just technical - they're also ethical and regulatory. Responsible AI isn’t just about what AI can do but also about what it should be allowed to do.

----------------------------
Thoughts? Let’s discuss! 💡 Sarveshwaran Rajagopal
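To make the "human in the loop" and "clear guardrails" points concrete, here is a minimal Python sketch of an action gate that blocks high-risk agent tool calls until a person approves them. All names (tools, policy, functions) are hypothetical illustrations, not anything from IBM's report:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical risk policy: which tools an agent may NOT call autonomously.
HIGH_RISK_TOOLS = {"send_payment", "delete_records", "send_external_email"}

@dataclass
class ProposedAction:
    tool: str
    arguments: dict

def requires_human_approval(action: ProposedAction) -> bool:
    """A guardrail beyond prompt engineering: a hard-coded policy check."""
    return action.tool in HIGH_RISK_TOOLS

def execute(action: ProposedAction, approve: Callable[[ProposedAction], bool]) -> str:
    """Run the action only if policy allows it or a human explicitly approves.

    Every decision is logged so the system stays observable rather than a black box.
    """
    if requires_human_approval(action) and not approve(action):
        print(f"[audit] blocked {action.tool} with {action.arguments}")
        return "blocked: awaiting human review"
    print(f"[audit] executing {action.tool} with {action.arguments}")
    return f"executed {action.tool}"

# Example: an agent proposes a payment; a reviewer callback stands in for a real UI.
action = ProposedAction(tool="send_payment", arguments={"amount": 950, "to": "vendor-42"})
print(execute(action, approve=lambda a: False))  # the human declines, so the action is blocked
```

The point is not this particular policy but that the constraint lives in ordinary, auditable code rather than in the prompt.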
-
AI models are at risk of degrading in quality as they increasingly train on AI-generated data, leading to what researchers call "model collapse." New research published in Nature reveals a concerning trend in AI development: as AI models train on data generated by other AI, their output quality diminishes. This degradation, likened to taking photos of photos, threatens the reliability and effectiveness of large language models. The study highlights the importance of using high-quality, diverse training data and raises questions about the future of AI if the current trajectory continues unchecked.

🖥️ Deteriorating Quality with AI Data: Research indicates that AI models progressively degrade in output quality when trained on content generated by preceding AI models, a cycle that worsens with each generation.
📉 The Phenomenon of Model Collapse: Described as the process where AI output becomes increasingly nonsensical and incoherent, "model collapse" mirrors the loss seen in repeatedly copied images.
🌐 Critical Role of Data Quality: High-quality, diverse, human-generated data is essential to maintaining the integrity and effectiveness of AI models and preventing the degradation observed with synthetic data reliance.
🧪 Strategies for Mitigating Degradation: Implementing measures such as allowing models to access a portion of the original, high-quality dataset has been shown to reduce some of the adverse effects of training on AI-generated data (see the sketch below).
🔍 Importance of Data Provenance: Establishing robust methods to track the origin and nature of training data (data provenance) is crucial for ensuring that AI systems train on reliable and representative samples, which is vital for their accuracy and utility.

#AI #ArtificialIntelligence #ModelCollapse #DataQuality #AIResearch #NatureStudy #TechTrends #MachineLearning #DataProvenance #FutureOfAI
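As a rough illustration of the mitigation mentioned above (preserving a share of the original human-written data in each training generation), here is a short Python sketch. It is my own simplified picture, not the procedure from the Nature study; the 30% mix and all names are assumptions:

```python
import random

def build_training_set(human_data: list[str], synthetic_data: list[str],
                       human_fraction: float = 0.3, size: int = 1000,
                       seed: int = 0) -> list[str]:
    """Assemble a training set that keeps a fixed share of original human-written data.

    The idea, greatly simplified: never let synthetic text fully displace the original
    distribution, so each generation still sees anchored, human-generated samples.
    """
    rng = random.Random(seed)
    n_human = int(size * human_fraction)
    sample = (rng.choices(human_data, k=n_human) +
              rng.choices(synthetic_data, k=size - n_human))
    rng.shuffle(sample)
    return sample

# Example: generation N+1 trains on 30% preserved human text, 70% model output.
human = ["a human-written sentence", "another original document"]
synthetic = ["model-generated text", "more synthetic output"]
train = build_training_set(human, synthetic, human_fraction=0.3, size=10)
print(sum(s in human for s in train), "of", len(train), "samples are human-written")
```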
-
As an educator and researcher in higher education, I witness how AI tools are transforming academic practices. Among them, AI writing detectors raise significant ethical concerns when inaccuracies occur.

According to Turnitin’s own data, over 22 million papers were flagged for suspected AI writing. Even at a modest 4% false positive rate, approximately 880,000 students may have been wrongly penalized, not for misconduct, but due to inherent limitations in the system's design.

This issue goes beyond technical flaws! It affects real students, those still developing their academic voice. While academic integrity remains essential, it must be upheld in parallel with academic justice. The ethical use of AI in education demands not only accurate detection but also comprehensive guidelines and transparent communication. Educators and learners alike need clarity on how, when, and why such tools are used. Without this, we risk reinforcing inequities under the guise of innovation.

AI can be a powerful support for learning, but only when guided by care, context, and accountability.

#HigherEducation #EthicalAI #AcademicJustice #AIinEducation #AcademicIntegrity #StudentSupport #ResponsibleEdTech #FacultyPerspective #Turnitin

Image credits: from a recent AAC&U presentation by C. Edward Watson
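The arithmetic behind the 880,000 figure is easy to check. This tiny sketch simply reproduces the post's back-of-the-envelope calculation; it assumes one student per flagged paper, which is a simplification:

```python
flagged_papers = 22_000_000   # papers flagged for suspected AI writing, per the Turnitin figure above
false_positive_rate = 0.04    # the "modest 4%" assumed in the post

wrongly_flagged = flagged_papers * false_positive_rate
print(f"{wrongly_flagged:,.0f} papers potentially flagged in error")  # 880,000
```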
-
Smart Menstrual Pads: The Next Frontier in Women’s Health Diagnostics 🩸💡

Menstrual blood is a rich, underutilized source of health information, containing cells, proteins, and biomarkers that reflect systemic and reproductive health. A smart menstrual pad - capable of analyzing menstrual blood in real-time - could transform women’s health by enabling early detection of diseases like:
- Cervical and ovarian cancers.
- Endometriosis and PCOS.
- Even non-reproductive conditions like thyroid disorders or metabolic syndromes.

🌟 The benefits:
- Non-invasive, at-home testing during a natural physiological process.
- Early detection = better outcomes.
- Empowerment through self-monitoring.

💬 Challenges: Data privacy, interpretation of complex biomarkers, regulatory hurdles.

But the potential is enormous: transforming a routine monthly occurrence into a powerful diagnostic tool. Would you embrace such technology as part of your regular health monitoring?

#WomensHealth #FemTech #EarlyDetection #Biomarkers #HealthcareInnovation #FutureOfMedicine
-
🧠 Most GenAI apps today still operate like toys: one prompt in, one black-box model, one fragile response out. But building reliable, production-grade LLM systems requires a fundamental shift - not in model choice, but in how we engineer context. It’s about applying real software engineering to AI workflows. Here’s what that means in practice:

🔹 Context as the First-Class Citizen
The biggest bottleneck isn’t the model. It’s irrelevant, bloated, or missing context. Engineering the right context through smart retrieval, memory, and filtering matters more than prompt hacking.

🔹 Agents Need Structure, Not Magic
An agent isn't magic glue between a prompt and a tool. It’s a software component with inputs, outputs, error states, logs, retries, and fallback logic. Treat it like one.

🔹 Roles > Prompts
Stop dumping every task on a single agent. Design specialized agents with clearly defined responsibilities: research, synthesis, decision-making, formatting. Let them collaborate; don’t overload one.

🔹 Observability is Non-Negotiable
If you can’t trace why a task failed, what prompt was used, what data was passed, or what the tool responded, then your agent is a black hole, not a product.

🔹 Structured Outputs. Always.
Forget free-form text. Your agents should produce JSON, not poetry. Make outputs machine-parseable and testable - it’s the only way to scale automation.

🔹 Fail Gracefully
LLMs will hallucinate. They will time out. They will crash APIs. That’s not a bug; that’s reality. Build guardrails, retries, and failover logic like any other brittle service. (A minimal sketch combining these last few points follows below.)

This isn’t about AI hype. It’s about engineering maturity. We’re not prompt engineers anymore. We’re context engineers, agent architects, and AI systems designers. And that’s what will separate real products from fancy demos.

We just open-sourced our intra-platform multi-agent system. If you are interested in knowing more, just comment "Agents" and I will DM you the link. Also, here is a link to an Agenthon (an agent hackathon): https://bit.ly/4eeuMA6

Image by Lance Martin / LangChain
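Here is a minimal, framework-free Python sketch of the structured-outputs, retries, and observability points together. The call_model function is a stand-in for whatever LLM client you use (an assumption, not a specific API), and the JSON schema is purely illustrative:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM client call; replace with your provider's SDK."""
    raise NotImplementedError

def run_step(task: str, max_retries: int = 3) -> dict:
    """Run one agent step with structured output, retries, and observable failures."""
    prompt = (
        "Return ONLY a JSON object with keys 'answer' (string) and "
        f"'confidence' (number between 0 and 1). Task: {task}"
    )
    for attempt in range(1, max_retries + 1):
        try:
            raw = call_model(prompt)
            result = json.loads(raw)               # fail fast if the output isn't valid JSON
            assert "answer" in result and "confidence" in result
            log.info("step ok: task=%r attempt=%d", task, attempt)
            return result
        except Exception as exc:                   # hallucinated format, timeout, API error...
            log.warning("step failed: task=%r attempt=%d error=%s", task, attempt, exc)
            time.sleep(2 ** attempt)               # simple exponential backoff
    # Graceful fallback instead of crashing the whole pipeline.
    return {"answer": None, "confidence": 0.0, "error": "max retries exceeded"}
```

Nothing here is clever; the value is that every failure is logged, every output is parseable, and the caller always gets a well-formed result.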
-
Signs You're Overcomplicating Your AI Solution

Here are some clear warning signs that your "agent" should probably be just a simple AI workflow instead:

⚠️ Your Task Flow Is Actually Static
- You find yourself hardcoding most of the "agent decisions"
- The same steps happen in the same order every time
➜ A simple prompt chain would accomplish the same thing (sketched below)

⚠️ No Real Tool Decisions
- Your "agent" is just calling the same tools in sequence
- Tool selection could be handled by basic if/then logic
➜ You're building complex reasoning for simple routing decisions

⚠️ Forced Complexity
- You're adding tools just to make it more "agent-like"
- Simple tasks are being broken into unnecessary sub-steps
➜ A single LLM call could handle what you've split into five tools

⚠️ Framework Overload
- You're spending more time learning agent frameworks than solving problems
- Simple integrations require mountains of boilerplate code
➜ You've added three dependencies to do one basic task

Remember: true agency makes sense when you need dynamic tool selection and reasoning about which step to do next. For everything else, stick to simple workflows. You'll get better results with less headache.
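For the first sign, a "simple prompt chain" can literally be a few sequential calls with no framework and no dynamic routing. A minimal sketch under my own assumptions (call_model is a placeholder for your LLM client, and the ticket-summary task is just an example):

```python
def call_model(prompt: str) -> str:
    """Placeholder for your LLM client of choice."""
    raise NotImplementedError

def handle_ticket(ticket_text: str) -> str:
    """A static three-step workflow: the steps always run in the same order,
    so there is nothing for an 'agent' to decide - a plain chain is enough."""
    # Step 1: extract the facts.
    facts = call_model(f"List the key facts in this support ticket:\n{ticket_text}")
    # Step 2: classify severity with plain instructions, not dynamic tool selection.
    severity = call_model(f"Given these facts, classify severity as low/medium/high:\n{facts}")
    # Step 3: draft the reply.
    return call_model(f"Write a short reply for a {severity} severity issue. Facts:\n{facts}")
```

If your "agent" reduces to something like this, the list above suggests you should just ship the chain.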