Brandeis Marshall is Founder and CEO of DataedX Group, a data ethics learning and development agency that helps educators, scholars and practitioners counteract automated oppression with culturally responsive instruction and strategies.

A trained computer scientist and former college professor, Brandeis teaches, speaks and writes about the racial, gender, socioeconomic and socio-technical impact of data operations on technology and society. She wrote Data Conscience: Algorithmic Siege on our Humanity (Wiley, 2022) as a counter-argument reference to tech’s “move fast and break things” philosophy. She pinpoints problems and recommends paths toward moving slower and building more responsible, human-centered AI approaches.

Brandeis centers her work and impact on making data and AI concepts snackable for practical implementation, from the classroom to the boardroom. As co-lead of the Atlanta Interdisciplinary AI Network, Brandeis develops data citizens through humanities-centered critical data literacy community workshops and supports new AI researchers who are grounded in humanistic inquiry. Brandeis also provides data equity scholarship, professional development and resources as a team member of the NSF Institute for Trustworthy AI in Law & Society (TRAILS), a partnership between the University of Maryland, George Washington University and Morgan State University.

Brandeis’s thought leadership has appeared in Heinrich-Böll-Stiftung, Medium, OneZero, The Moguldom Nation, and on CNN. Brandeis has spoken to audiences across the AI and justice sectors, including the ACLU, Harvard, the Kapor Center, Stanford, Truist, the Urban League and Visa.

Brandeis holds a Ph.D. and Master of Science in Computer Science from Rensselaer Polytechnic Institute and a Bachelor of Science in Computer Science from the University of Rochester. She has been a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University, a Stanford PACS Practitioner Fellow and a Partner Research Fellow at Siegel Family Endowment. She has served as an assistant professor at Purdue University and as an associate and then full professor at Spelman College.

Topics:

  • After months of intense ‘what can AI do for you’ discussions, op-ed think pieces and podcasts, fear of job replacement and of being unable to secure a living-wage job is running high. People are scared that they will be automated right out of a career, a job and their means of supporting themselves. Data scientist Brandeis Marshall urges you to ask the opposite question: what can’t AI do? In this human-centered talk, Brandeis counters the narrative around the most disruptive and transformative advancement in technology with a focus on what is UnAI-able—actions, tasks and skills that can’t be digitized or automated. Brandeis will outline the three categories of human-driven decision-making competencies that every sector and industry currently needs and will require for the foreseeable future. Audiences will learn why contextual awareness, conflict resolution and critical thinking cannot be replicated with 0s and 1s, and how AI may pivot, not eliminate, your daily duties at work. You will walk away understanding the nuances of our digitally-based future’s strengths and weaknesses and how to remain positive about AI-proofing your career.

  • AI has infiltrated society, fueling growing concerns over its future implications within the education space. AI-assisted tools and platforms, like ChatGPT, are disrupting how instructors assess how much a learner knows and can apply without advanced technological assistance. Whether concerns stem from trying to understand what these tools are capable of or from tackling the ethics of students cheating with generative AI assistance, educators and the education community at large (board members, administrators and parents) must understand the nuances of these tools. In this talk, Brandeis Marshall discusses the tensions of contending with AI-assisted tools’ consequences, the rapid pace of new AI releases, and learner engagement at all levels of education. Brandeis will share small, non-disruptive changes to integrate in the classroom and throughout a course that will hold learners accountable. Audiences will learn how to navigate the new era of AI-assisted learning and how best to integrate these tools for better learning outcomes.

  • There’s much buzz about AI, but have you ever considered the societal impacts of these transformative technologies? Bias is typically categorized as either a fixable error or a structural inequity. This either-or language should be reframed as a both-and situation. As knowledge construction workers and insight architects, we struggle to see and address a spectrum of biases. The power and ease with which inequities scale in our digital systems affect how effectively we achieve business goals and maintain client loyalty. In this talk, data equity expert Brandeis Marshall introduces the “bias wheel” as a more practical guardrail for navigating this spectrum. She discusses the disparate impacts of oppressive algorithms, including questioning the trust in, and trustworthiness of, our data, systems and platforms. Audiences will learn why we should all care about mitigating biases, even those of us not in technical roles, and how to continue conversations around solutions (policies and procedures) that help mitigate bias and discrimination.

  • The ‘move fast and break things’ standard operating practices are in hyperdrive. Organizations across all sectors are trying their best to implement large language models (think Siri, Alexa, GPT) somewhere in their business operations. There’s a frenzy and serious FOMO (fear of missing out) driving this push to maintain a competitive advantage. But instead of building bigger and faster, organizations must shift strategies: slow down and fix things.

    In this talk, Brandeis Marshall identifies the urgent need for more robust transparency, accountability and governance guardrails in the data pipeline. Brandeis highlights ways that discrimination shows up in the digital data space and offers suggestions to address it. Audiences will learn the perils of tech’s dominant philosophy of “move fast and break things” and why it should be replaced with a renewed focus on data equity-based strategies and frameworks.

  • Humanizing our data operations is possible when the data/tech community collectively pushes for data transparency, holding data/AI systems and knowledge workers accountable while exercising our data citizenship and agency. In this talk, data equity expert Brandeis Marshall describes the ethical gaps in the data pipeline and shares practical interventions for making data/AI projects more equitable without disrupting existing data operations. Finding common overlaps among data policies, practices and products is key to building a holistic approach across multiple departments for sustainable, equitable data practices. Brandeis will show your data practitioners how to operate from an ethically strong position and expand your data equity deployment options for organizational improvement.


Twitter: @csdoctorsister

Instagram: @csdoctorsister