In today’s fast-paced world, shaping the future is essential if you want to stay ahead of technological change. Building new technologies while examining their ethical implications is no small feat. We have to shape the future intentionally, because AI, along with a host of other new technologies, is going to reshape business. My job on this podcast is to introduce you to the people who are not just talking about these changes but actively driving them. Today, I’m thrilled to have Dr. Emily Springer with us, who embodies this idea of thinking, feeling, seeing, and most importantly, doing. Dr. Springer deeply understands AI, ethics, and how these shape our lives, but she goes beyond that to help us understand how to harness AI for good.
Shaping the Future
As Dr. Springer emphasized, learning about AI is not enough. You have to take action, and that’s where the power of change comes in. Change is challenging. As a corporate anthropologist, I’ve seen firsthand how difficult it is for organizations and individuals to embrace it. But the future is moving quickly, and those unwilling to act will be left behind. What excites me most about our conversation today is Dr. Springer’s focus on inclusive AI: an approach that prioritizes equity, social justice, and improving lives. This philosophy goes hand in hand with my work helping organizations navigate transformation. Emily, too, helps people understand the often-intimidating world of AI, and she does so in ways that are accessible, not shrouded in technical jargon.
To watch this episode with Dr. Springer on YouTube, click here:
Who Is Emily Springer, PhD?
Let me tell you a little more about Dr. Springer. She was recently named one of the 100 Brilliant Women in AI Ethics for 2024, a well-deserved recognition highlighting her commitment to making AI smarter, kinder, fairer, and more inclusive. She serves as an expert on UNESCO’s Women4Ethical AI platform, bringing her expertise to help build a more equitable future. Emily is also the founder of TechnoSocio Advisory, an organization dedicated to inclusive AI consulting, working to develop policies and programs that prioritize social good and improve livelihoods.
Making the Future an Understandable Reality
What makes her approach so compelling? It’s grounded in reality. While many focus on the technological marvels AI can create, Emily’s mission is to understand what it means for the people who will live with these innovations daily. Her company, TechnoSocio Advisory, isn’t about pushing AI forward for its own sake; it’s about making sure AI serves the people it’s meant to help, especially regarding diversity, equity, and inclusion. As AI touches more and more parts of our lives, we cannot afford to ignore this.
One of the most exciting things about Dr. Springer’s work is her Inclusive AI Lab. This lab teaches algorithmic literacy, making AI understandable for everyday professionals without the need to become coders or mathematicians. For those of us who use AI regularly, this is incredibly refreshing! Most of us are more interested in how AI can help us solve real-world problems, not in the technical intricacies behind it. Emily’s focus is on making sure everyone, from seasoned professionals to those just entering the workforce, can use AI for good in a practical and relatable way—and that means understanding both its strengths and its limitations.
The Rapid Rise of ChatGPT
During our conversation, we touched on the rapid rise of tools like ChatGPT and how they are reshaping communication, work, and even our sense of belonging. As humans, we’re wired to want connection. We’re herd animals by nature, seeking others to form communities and relationships. Dr. Springer’s work explores how AI can be used not just to automate tasks or make processes faster, but to enhance these human connections in meaningful ways. This is particularly exciting: the potential for AI to contribute to a world where people feel more connected, included, and empowered. You can see how AI and the Fourth Industrial Revolution are shaping the future.
And it’s not just about the technology. Emily’s experience spans continents, cultures, and communities. She spent over six years working in Ethiopia, developing cultural and linguistic fluency while challenging conventional approaches to international development. Her philosophy, one that has carried over into her AI work, is that solutions must be tailored to the local context. It’s not about finding a one-size-fits-all answer, but about understanding what people in specific places truly need and helping them find ways to achieve it. This kind of thinking is crucial when we consider the global impact of AI: technology cannot be implemented in the same way everywhere. It needs to be flexible, adaptable, and, most importantly, inclusive, and Dr. Springer’s work is a testament to this. She is helping others shape the future.
Applying Socio-Technology to Social Justice
Emily’s passion for social justice and inclusion began while she was growing up in Minneapolis in a diverse, multicultural environment. This early exposure to different communities and perspectives has been a driving force in her career, helping her see the big picture: that technology, when used ethically and inclusively, can bring about real social change. This background shaped her views and her work today, ensuring that AI is a tool for equity and social good rather than just another product of the Fourth Industrial Revolution.
Our conversation with Dr. Springer today is both timely and essential. As AI becomes more integrated into our daily lives, it’s critical that we ask the right questions. How do we ensure that AI enhances our sense of belonging rather than alienating us? How can we ensure these technologies benefit everyone, not just a select few? And perhaps most importantly, how do we use AI to create a more just, equitable, and inclusive world?
These are the challenges Dr. Springer is tackling head-on, and I’m so glad she could join us today to share her insights. So, as you listen, I encourage you to think about the role AI is playing in your own life, organization, and community. Are you ready to embrace it for good? As Emily reminds us, seeing, feeling, and thinking are only part of the journey—it’s time to take action.
For more information about Dr. Emily Springer, check out these links:
On LinkedIn: Emily Springer PhD
Her website: https://technosocioadvisory.com/
Some other podcasts to listen to:
Lisen Stromberg: How Intentional Power Takes You from Control to Significance
Laura Grondin: Navigate Tradition and Innovation in Business With Laura Grondin
Additional resources for you
- My two award-winning books: Rethink: Smashing the Myths of Women in Business and On the Brink: A Fresh Lens to Take Your Business to New Heights
- Our new book, Women Mean Business: Over 500 Insights from Extraordinary Leaders to Spark Your Success, coauthored with Edie Fraser and Robyn Freedman Spizman
- Our website: Simon Associates Management Consultants
- A Business Change Management Blog: Why Blue Ocean Strategy® is Essential for Businesses to Grow in 2024?
From Observation to Innovation,
Andi Simon, PhD
CEO | Corporate Anthropologist | Author
Simonassociates.net
Info@simonassociates.net
@simonandi
LinkedIn
The entire transcript is below:
Andi Simon: Welcome to On the Brink with Andi Simon. Welcome to our audiences, whether you’re viewing us or listening. I’m Andi Simon. As you know, I’m your guide and your host. My job is to bring you people who are going to help you see, feel, and think in new ways, and then do. And today’s guest, Dr. Emily Springer, is going to help you do just that. As she said to me, it’s more than just seeing and feeling and thinking. You also have to do something. And I always leave out the do because that’s your job as the audience. Let me tell you a little about Emily. This is going to be a rich and delicious conversation today because Dr. Springer, Emily, is a sociotechnical person, and I’m an anthropologist who specializes in helping organizations and the people inside them change. That’s what a corporate anthropologist does. And after 22 years in business, I will tell you, change is painful, but it’s coming in fast. Who’s Dr. Springer? Emily is recognized as one of the 100 Brilliant Women in AI Ethics for 2024. That’s why I’m so excited to have her here. I’m just excited to have her here. She serves as an expert on UNESCO’s Women4Ethical AI platform, and she founded TechnoSocio Advisory, an inclusive AI advisory company. She’s a responsible AI advisor focused on building AI policies, processes, and programming with the aim of improved livelihoods and social justice. It isn’t just about the Fourth Industrial Revolution; what does this mean for all of us who are going to be living through it? This is really interesting. Her AI advising goes to many clients, Xe Athena Informatics among them. But she also has something wonderful: the Inclusive AI Lab that she runs, an algorithmic literacy training for the everyday professional, folks like us, that builds an understanding of AI using concepts and tools instead of math or coding. Which is cool, because I use AI all the time and I’m not terribly interested in how they do it. I’m just excited that I can use it, and I don’t know whether it’s true or false, but the whole world is becoming a ChatGPT world. This is wonderful because her premise is AI for good: social impact and diversity, equity, and inclusion are very important. And I’m doing some research on how belonging and inclusion are being impacted by AI, because humans are herd animals; they want to belong. How does this new technology begin to enhance that, not simply threaten it? I want to look at the positive. There’s always a challenge. She focuses on tech and social good, but most interestingly, she’s trained and mentored graduate students in development practice and social justice programs and undergraduates in technology, society, and globalization. Emily, thank you for joining me today. It’s truly a privilege and a pleasure, and I’m so glad we’re together.
Dr. Emily Springer: Thanks so much, Andi. It’s an honor to be here with you today.
Andi Simon: Dr. Springer brings you something so rich. But I’m going to ask her to talk about her journey, so that you can put into context what she’s doing now and why it’s so meaningful. Who is Emily?
Dr. Emily Springer: Great. Thanks so much, Andi. So, yeah, a confluence of factors has allowed me to arrive at where I am today. For those of you who are listening, as Andi mentioned, I’m an inclusive AI consultant, and this is really something that I’m trying to get off the ground. It’s not as if this is something that’s been going on for years. I almost need to help companies and organizations and teams understand that they may need inclusive AI advising. And so that is really interesting: you have to help people realize that they might need a service that they don’t know they need. How I came to be in this spot of building and setting up this business: essentially, I was trained as an interdisciplinary sociologist, and I have always focused on international development. I have spent a large portion of my life, over six years, living and working in Ethiopia, which is a country that I hold very near and dear to my heart. It’s a place where I have worked at building cultural and linguistic fluency. If you have any contact with international development, you probably know that it’s really interested in taking something that works in a single location and scaling it, and I have always kind of gone against the grain on that. I believe that especially if we want to work on improving livelihoods, we need to be thinking about how people in a single location define their own problems, what they see as a problem, what they want to solve, what they actually want. So for me, development is something that is very locally situated, and this question of scale is something that I have always grappled with. And it comes back up in the AI world. So really, there’s a strong focus on people, organizations, and how we can create and instill sustainable change in everyday people’s lives around the world. Now I’m going to go way back in time. As a kid, I actually grew up in Minneapolis. I went to Minneapolis public schools, and at the time I was growing up, there was actually a busing program going on.
And Minneapolis is the recipient of different waves of immigrant and refugee populations, and I really got to benefit from that. As a young child, my classmates were Hmong and Vietnamese, and in high school, my classmates were Oromo, an ethnic group that lives in southern Ethiopia, and Somali. So I have had this multicultural experience ever since I was a little kid, and I think that has really informed how I think about AI and really what’s at stake. So I’ll pause there.
Andi Simon: Well, you know, I’ve been to Africa three times. I was going to go for my fourth time when Covid hit, and we’ve been to South Africa, Botswana, Zambia, Zimbabwe, and Kenya. I’ve not been to Ethiopia, but as an anthropologist, I am endlessly fascinated by the creativity of humans. And to your point, what works in one place doesn’t work at all in another. And if you don’t build it from the mindset and the stories of the people you’re trying to help in some fashion, you miss the whole point. It isn’t about what you bring, it’s what they need. As a corporate anthropologist, I learned that right out of the starting gate. You know, when I said to people, “I’m a corporate anthropologist,” they rolled their eyes. I asked, “What do you need?” “I need to change.” “Oh, I can help you.” And they didn’t have any idea what I was going to do, but it wasn’t what I did that they needed. So, let’s go back, though. After you worked in Ethiopia, you had this moment about how it wasn’t about scaling, it was about meaning and meaningfulness. Was that with AI, or with something else you were working on?
Dr. Emily Springer: Yeah, that’s a great question, actually. I got really interested in numbers. In international development, large donors generally set up performance metrics to measure their investment in a particular project or program, and so you end up with monitoring and evaluation systems. I wanted to understand what’s actually happening as a result of those systems. So I started studying essentially how people react and respond to metrics. If you’re a student at university, you might want a good grade, and so that idea of a quantitative metric is actually going to change your behavior. I really started studying the social processes of quantification. I looked at that from rural projects in the field and followed those numbers up through sites of aggregation, asking what’s happening at each site where people are touching these numbers, thinking about these numbers, creating documentation. And that actually is an amazing lead into AI, because it helps you think really critically and thoughtfully about what these numbers mean, what’s not in the numbers, what’s behind the data set, what the social context of the data set is, and how that then might influence what shows up later on down the line if an AI model uses that data. So there was this interim period where I was really interested in numbers, and in my dissertation research I was looking at that with respect to women’s empowerment, which is something that I think is very localized and socially embedded. And yet when you quantify it, you have to define it. So we have this circumstance where very well-intentioned people in the global metropole, so Washington DC, London, places of historical power, basically get to define what women’s empowerment is and then push that indicator out around the world. And there are lots of really good reasons why you need to do that. So I’m not saying we shouldn’t measure. We do need data. But I think we need to be a lot more thoughtful about the social processes that happen around those numbers, take that into account, and perhaps identify moments where local communities could play a role in determining what metrics they’re measured by.
Andi Simon: Yes, it’s so interesting. In my first anthropology course many years ago, as an undergraduate student, I heard that out of context, data has no meaning. And what you’re saying is that the metrics were necessary for the funders and for those in power who want to demonstrate that their activities are doing good, what they expect. But for the people who are being done to, they may have no meaning at all. They may be out of context. They may be defined differently. I recently did some research for a senior living community. I had two of them out in Oregon, and they asked me to come in and evaluate what quality really meant to the people who were living in their centers, because the quality data that they were using basically said they were all threes. They couldn’t figure out why. They really were just plain old average. It didn’t matter whether it was a high-end one or a low-end one. So what did quality really mean? The woman who hired me was the CEO of 20 of these facilities, and she said, why don’t you come and spend some time with me and listen to what they think is quality? Oh, my goodness. As I recorded them and transcribed them, she said, well, this has nothing to do with what we are capturing. I said, no, but it is what they’re telling you, and it is defined in the eyes of the beholder. Her whole world was being defined by one lens, and they’re living in a whole different one; the two have nothing to do with each other. You and I have such interesting pasts, which is why this is so much fun. So, when you left Ethiopia with this real understanding of the data, how did you migrate into the next stage of your life? I’m not going to say where you are now, but how did you move along?
Dr. Emily Springer: Yeah, absolutely. I want to respond to that, but really quickly, I just want to say something about the meaning of data. If the meaning of data is set at one location, it means a lot there. I wouldn’t say it doesn’t hold meaning for other people; it holds a different meaning. And in the case of numbers and metrics in international development, what I found was that to the people on the ground who have to work with these numbers and deliver against them, it felt ridiculous. It felt silly. It felt like it was pulling them away from the really good local work that they wanted to do. It meant that their local knowledge didn’t have a place. It silenced them. It changed their entire orientation to the work. So there are ways in which it takes on entirely different meanings, and that’s why people talk about metrics in international development as colonialism. It all has heavy meaning. It’s just not the same meaning.
Andi Simon: That’s beautiful. I’m so glad you made that point, because it leads us into the use of data by everybody, right?
Dr. Emily Springer: Absolutely, absolutely. Yeah. So certainly, you know, I’ve been “in the field,” like in the field. I put that in quotation marks; it’s very colonial language in and of itself. So, I had been living and working with Ethiopians and loving life, and I went to a conference in Washington, D.C., where everybody was talking about blockchain technologies, and I felt so disconnected from what people were thinking and feeling and experiencing in Ethiopia. And to that point, just a couple of weeks ago, I was at the United Nations for a conference, and I had the privilege to be chatting with an Egyptian Parliament member. And I said, oh, what are Egyptians thinking about AI? She grabbed my arm and she said, we’re thinking about eating.
Dr. Emily Springer: That was a great moment for me, because it helped me realize that I had once come from Ethiopia to that blockchain conference feeling this massive disconnect, and here I am years later, and even though I have all this rich experience and I’m constantly trying to represent and think through what farmers would think about this AI project or that AI project, here I was becoming part of the apparatus. I think we all need to be thinking about that every day as we get up and go to our workplaces. What do we want to be contributing to? How are we becoming increasingly socialized by the people, the processes, the policies around us? We can never, ever forget, especially when I think about AI in a global context, that many people who listen to this podcast have access to technology. You are in the global minority. Our shared, hyper-technological experience is the minority experience. This is not what people around the world are thinking about or feeling. This is not their reality, and we need to keep that in mind if we want to figure out ways to make it work for everybody.
Andi Simon: Well, I can visualize that Egyptian woman grabbing your arm and quietly saying to you, we’re thinking about eating. There has always been that disconnect. It’s like being in rural America and having access to the internet so you can do e-business, or not having it and being cut off. We are not exactly equitable in our access to all the things that technology can provide within our own country, you know? And I love blockchain, but it isn’t as if it is equitably available, and how it’s used or abused is going to be a serious conversation. What are you doing as a leader in this field? You’re training people. Your lab is doing all kinds of things. What are you doing, and what are the priorities that you and the folks you work with are thinking about? Because you are an early agent of transformation and information. I don’t know what you said to the Egyptian woman, but I’m sure you didn’t say, oh, but you need to eat.
Dr. Emily Springer: No, I didn’t, which is interesting, because I’m also kicking off research projects around what the right uses are, what the appropriate use cases are, for AI in the agricultural development space. I’ve worked a lot with smallholder farmers, looking at women’s empowerment and livelihood improvements in agricultural development.
That’s also why that comment hit me so hard. I think we all need to really pause and think a lot more critically about AI. AI is such a power game, and we have an opportunity. We are alive at the moment in history when humanity adopts AI. We don’t have superintelligence yet. We don’t have general AI. But you know, we’ve been watching these sci-fi movies for years, and they’ll reference things like, oh, you know, 500 years ago when the robot wars happened, stuff like that. I don’t want to focus on what could happen someday; there are very real risks happening today, and they are not evenly distributed. That’s what I focus on. But I say that to highlight that industrial ages are moments when everything is changing. New norms are getting created, new policies, regulations, laws. We are deciding right now how, as humanity, we want to relate to AI. And I really think this is a key moment to make sure that you understand what’s happening.
I think literacy is very low, and I think something dangerous is happening right now, which is that there are a lot of people who don’t know what AI is, but they’re talking about it like they do. We are at the top, top, top of the hype cycle. And as any anthropologist or sociologist knows, once you create a culture, once you create an understanding, you can’t easily go back. If something is incorrect, you then have to deal with that. So I would really encourage people to work on building up AI literacy, not only as individuals, but as citizens around the world. We’re going into democratic elections, and we have misinformation and disinformation, deepfakes. We need to understand what those are so that we can be thoughtful voting citizens. In our workplaces, a lot of the workplaces I’m aware of are really interested in using AI in different ways. You’ve got CEOs pushing: how are we going to use AI? And I would say we can push back in teams and in organizations and say, hey, let’s try to get this right. This is a key moment. It’s not an opportunity to just jump on the bandwagon and go. What we’re seeing, and what I think is a huge problem, is that corporations are deploying models that haven’t been well tested, that haven’t been well thought about. In the tech sector, you talk a lot about minimum viable products, and I think the bar has been set so low for AI builds and for what constitutes a minimum viable product. They’re just releasing things that have very disproportionate outcomes by race and by gender. We need to raise that bar. And I loved that, I think it was just last week, Lisen Stromberg on your podcast was talking about how we’re switching from shareholder value in corporations to stakeholder value, thinking about that relational piece and what companies are delivering to society. Where is that conversation in AI? That whole shift that Lisen so beautifully identified is lost in the AI world. It’s regressing all the way back to a total shareholder focus. I think we all need to be saying: but we want a stakeholder model. We know that profit isn’t everything. So we have this moment where capitalism is going to pull this forward, and it’s going to regress back to shareholder value. And I think it’s a great moment where people can seize the day. In every interaction and every team meeting, in every way we can, we need to build the norms. We need to build the world we want to see with AI, and that takes our involvement.
Andi Simon: You know, I’m thinking about AI and Lisen Stromberg’s podcast. For the listeners, it just came out; today is the 12th, and I just posted it. That’s the second time I’ve interviewed her. She has a model in there on leadership that I want to refer to because it’s so timely. It’s for this new generation of leaders; I hate to say it, but the world is in your hands. Think about the initial conversation we had about Ethiopia and the metrics that were coming from colonial powers, from the big guys who had the money. Today, if we can move from shareholders to stakeholders, what kind of leaders do we need? Her research and her book, Intentional Power, are just terrific. These leaders want to be humble. That’s so interesting. What a great word. They want to be curious: we don’t really know what we don’t know, but we make up lots of stories about it to make us feel really smart. And they want to be empathetic, which is a powerful word, so that I can feel your pain because I’m with you. Your brain hates to change, but you’re going to have to. And then they are accountable to the team, not just to some shareholders. There’s something different going on. What Emily’s talking about is being accountable to society, taking a responsible role, so that I’m not just going to do this because it’s good for my bottom line, but because it’s good for us. And then there’s resilience. If you look back over a lifetime, how many changes have we adopted? And if you look over the last couple of hundred years, you know, if we’re in the Fourth Industrial Revolution, there’s only a couple of hundred years of all of this happening. And then there’s transparency and inclusivity. I do think the HEARTI model their research describes is a new kind of leader, and in some ways, Emily, you’re one of those new leaders. You don’t know what you don’t know, and you know that what came through in your learning in Ethiopia was out of sync with “I’ve got to eat.” There’s this kind of humility that’s coming. But as you look at your own role as a leader, my hope is that you don’t get frustrated and just go for the bottom line. There’s something that’s inspiring you, isn’t there?
Dr. Emily Springer: Yeah, I believe very much that we need to be having a lot more conversations about inclusive AI and how to get this right. For those who are listening, I just want to give a simple example. I often say AI has the potential to upend a lot of the social progress that has been made around the world over the last 5,000 years. We’ve been working on getting stronger women’s rights, legislation around race, around religious protections, around ableism and discrimination. So much time and energy has gone into those efforts, and suddenly power is changing. Power is shifting. We see more women in the workplace, still not where I’d like it to be, but there’s progress there. AI does have the technological capacity to upend that. And not on my watch, not on my watch, not when I can see this coming. One of the things in sociology that I love is that you take an idea and follow it through to its logical extension, like, where does it actually go? That enables you to think really critically about whether we want this thing that we’re dealing with right now. So here’s an example, and it’s an old one. Several years ago, Amazon tried to build a machine learning model that would review applicant résumés and decide who would get interviews. They thought, you know, we’ll train it on the résumés of the people who already work here. They created the model, they deployed it, and it was already out there deciding who should get interviews and who shouldn’t when they found that it was disproportionately not recommending women for roles. It was making statistical associations: it’s common in American society to put “women’s” as an adjective on a sport or a chess club or whatever it is. So it was essentially analyzing the résumés of those who already worked there and saying, oh, we don’t have a lot of people like the captain of the women’s baseball team. And even the words we use, like softball and baseball, are gendered in America. So essentially they deployed this, then realized it was discriminating against women, and they tried to fix it. Pause for a second. Amazon tried to fix it, and they could not. So if they couldn’t, what does that imply for all the other algorithms out there, all the other models that are deployed and making highly consequential decisions that allocate resources to people? And if you look at the AI Act, almost all the use cases international development is interested in are there: things around education, access to finance and banking, access to agricultural and fertilizer inputs. There are efforts to use models to disburse those to the right farmers in the right amount at the right time. Efficiency, effectiveness, especially with low resources: I see the reason why you’d want to deploy that. And yet there are all sorts of ways that inequalities are not only remade through that but actually become amplified, because when an AI model makes a decision, that decision becomes a new data point in reality. That’s how we get the amplification effect. So I think in the social impact sector, folks are really excited about doing good, as am I, and I want to figure that out. But we need to really pause and recognize that the EU AI Act has categorized almost every one of these use cases as high risk, because they allocate resources that influence people’s lives.
If you haven’t yet played around with one, I highly encourage you to go into an AI image generator. I think they’re amazing tools for visualizing bias and inequality in AI. It’s harder to see with ChatGPT’s text-based outputs, but you can actually visualize it with image generators. I turned 40 this year, I cannot believe that, but I put “a 40-year-old woman” into an image generator, and what came out was a woman with loads of wrinkles and a ton of gray hair. I had not specified race, and so of course it produced a white woman, a very wealthy-looking woman with big boobs and a thin waist. That’s not me. You can see all these ways in which inequalities are embedded into these models, and I really want us to take it seriously.
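To make the mechanism in Dr. Springer’s Amazon example concrete, here is a minimal, hypothetical sketch in Python. It is not Amazon’s system or any real screening product; the résumé data and the token_score and screen functions are all invented for illustration. The toy model scores each résumé word by how often it co-occurred with a past hire, so a single token like “women’s” drags down an otherwise identical résumé, purely because the historical data it mimics was skewed.

```python
from collections import Counter

# Toy "historical" hiring data: (resume tokens, was the candidate hired?)
# The skew is built in on purpose -- it stands in for a biased past.
history = [
    ({"captain", "chess", "club", "engineer"}, True),
    ({"captain", "baseball", "engineer"}, True),
    ({"captain", "women's", "chess", "club"}, False),
    ({"women's", "softball", "engineer"}, False),
]

hired, rejected = Counter(), Counter()
for tokens, was_hired in history:
    (hired if was_hired else rejected).update(tokens)

def token_score(token):
    """Naive hire-association score in [-1, 1]; 0 for unseen tokens."""
    h, r = hired[token], rejected[token]
    return 0.0 if h + r == 0 else (h - r) / (h + r)

def screen(resume):
    """Average token score; higher means 'recommend for interview'."""
    return sum(token_score(t) for t in resume) / len(resume)

# Two resumes identical except for one word:
print(screen({"captain", "chess", "club"}))             # ~0.11, recommended
print(screen({"captain", "women's", "chess", "club"}))  # ~-0.17, penalized
```

Nothing in the sketch uses gender as a feature; the penalty emerges entirely from correlations in the training data, which is exactly why removing such bias after the fact is so hard.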
Andi Simon: Oh, but it is serious. That isn’t casual. We now have a whole world of technology that’s even more biased than humans are. You said something at the beginning: you learned to diminish your biases by being a child in schools with people who were different. Without our even remembering it, we have a story in our mind, and it’s that story that we live.
When you were describing that, your story was built in a diverse world where you learned to trust people who weren’t looking like you or acting like you. Now we’ve got a new technology world emerging, built on the data that biased world has created, to enhance and create more bias. And I mean, if Amazon couldn’t fix it, we have a problem. If the big guys can’t solve it, and I don’t know how you do fix it, it’s based upon an analysis of data that out of context has no meaning. The technology is giving that data its own meaning, and therefore it will come up with new stories, new science fiction stories.
Dr. Emily Springer: Absolutely. I mean, when it comes to companies wanting to decrease their risk profiles: the creator of any AI model determines what the definition of success is, and if you want to decrease your risk, you can absolutely use an AI model to do that. However, if you are interested in some sort of relationship with society where you don’t want to be disproportionately denying African American individuals or people of color or minority communities in whatever society you’re operating in, if you don’t want to disproportionately deny them loans, then you need to go back and work with your team to change how that’s going to operate. A lot of people tend to say garbage in, garbage out, and they’re often talking about the quality of the data. If we abstract away a little bit from that, I encourage all of you to start thinking, instead of garbage in, garbage out: inequalities in, inequalities out. Because the world is an unequal place, and all AI is doing is taking that inequality and predicting it forward in time. Now, if you’re decreasing your risk profile, you could say that qualifies as a minimum viable product. I disagree with that decision personally, because I’m interested in stakeholder value and company relationships with society. But we need ways to really understand this, because in the social impact sector, inequalities in, inequalities out is unacceptable. That doesn’t work. We need to be thinking very deeply about what we want coming out of what might be our business imperative or use case for using a model, and what our desired social relationship is with our different consumer groups. This is also about consumer trust. People are going to wake up.
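Her “inequalities in, inequalities out” point, together with the amplification loop she described earlier (each decision becomes a new data point), can also be sketched as a toy simulation. Again, this is a hypothetical illustration, not any real lending system; the groups, starting rates, noise level, and approval budget are all made up. Two groups of applicants are equally creditworthy, but the model’s only feature is each group’s historical approval rate, and with a fixed number of approvals per round, the initial gap feeds on itself.

```python
import random

random.seed(42)
approvals = {"A": 60, "B": 40}     # skewed starting "history"
decisions = {"A": 100, "B": 100}

for rnd in range(5):
    rate = {g: approvals[g] / decisions[g] for g in ("A", "B")}
    # 100 applicants per group; the model's score is just the group's
    # historical approval rate plus a little noise -- no individual merit.
    applicants = [(g, rate[g] + random.gauss(0, 0.05))
                  for g in ("A", "B") for _ in range(100)]
    applicants.sort(key=lambda a: a[1], reverse=True)
    for g, _ in applicants:
        decisions[g] += 1
    for g, _ in applicants[:100]:  # fixed budget: approve the top 100
        approvals[g] += 1          # each decision becomes new data
    print(f"round {rnd}: A={approvals['A']/decisions['A']:.2f}, "
          f"B={approvals['B']/decisions['B']:.2f}")
```

Run it and the A-B gap widens every round, even though no applicant in group B is any less creditworthy: the model keeps re-learning its own decisions, which is the amplification effect in miniature.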
Andi Simon: That’s just what I was going to say. The biggest problem, in some ways, is that if you trust it to be unbiased and equal and it’s fundamentally not, then we’ve built a whole additional world that’s going to accelerate and amplify the bad things, and not necessarily create a better world at all. And that leaves lots of work for all of us to do. Because if you don’t understand what you don’t understand, and you think of this as truth as opposed to just big data coming back and doing stuff, you’re going to have a hard time living in a world that’s being influenced by bad data. This has been a wonderful conversation. I’m unhappy about stopping it, so we will have to continue it in six months or so. But it is time to wrap up. Emily, give us a couple of things you want the listener or the viewer not to forget, because they often remember the end better than the beginning. This has been a rich conversation, and I have a hunch they’re going to go back and watch it again to hear the threads that took you from where you were to where we are now, because it is a very exciting, scary, emerging time. Some last thoughts?
Dr. Emily Springer: Yeah, absolutely. Well, I really love your see, feel, and think model, and I was thinking about that with respect to AI. I really want to encourage people to see AI differently than the way mainstream media is presenting it, or maybe the way your CEO is presenting it. We need to go beyond the hype. We need to look at who’s doing the labor. Where are they located in the world? How are they being paid for that work? Who’s benefiting from these models? How are those benefits distributed? Really dive into seeing what AI is in a larger way. Not only is an AI model a sociotechnical construction, but we need to think about AI within society much more broadly. I want you to feel your own power in relationship to AI. Give yourself a moment and take stock. What is your starting relationship to technology? How do you feel about it? What informed that in your life? Are you starting a relationship with AI where you trust it? Why or why not? And how confident do you feel talking about AI? There are so many people, I hate to say it, but predominantly men, out there talking about AI, and I’m sorry, excuse my language, but they are bullshitting. They do not know what AI is about, but they talk as if they do. That’s not everybody; it’s a large generalization, absolutely. But I think in offices and workplaces we need to find ways to engage. I think a lot of the listenership here is women, and I work predominantly with women and women’s empowerment, and I think we need to really build confidence around AI for those discussions. In an interpersonal conversation with a coworker, it’s hard to engage with someone who’s presenting very confidently when you might not feel super confident yourself. So I think we need to build our confidence that we belong in AI discussions. This is a huge moment in society, and our lives and the way we feel about AI matter. Your opinion, your thoughts, your experiences, they all matter.
And they should give you confidence to have an opinion about AI. I want people to really think about what kind of society they want to live in. How do you feel about surveillance? Where do you get your news from? Why do you trust that news? How do you feel about the role social media plays in your life? I want you to think and reflect on all of that. But lastly, I think we need to start doing when it comes to AI. Everybody: take time, upskill in AI, and experiment. Nobody knows what they’re doing. You need to go in and interact with ChatGPT as if you’re learning how to ride a bicycle. You don’t expect yourself to already know how to ride a bicycle, so don’t expect that of yourself in your relationship with AI models. Experiment. Try different things. There are all sorts of opportunities out there to interact with AI models. And lastly, take on a volunteer role. Schools are interested in adopting AI; join a parent-teacher association subcommittee about AI in the schools. Volunteer at your company if there’s an AI policy that needs to get written. Get in the game. So really start doing when it comes to AI.
Andi Simon: Thank you for expanding my see, feel, think, and do. Just to put it in context, I’ll give you two last thoughts. Many years ago, I was an executive at a savings bank, and I bought their first computer. You can only imagine the hatred in the eyes of the secretaries who had electric typewriters. It was IBM country, and they were perfectly happy with white-out. What will I do with my white-out? How will this work? It was a whole new way of seeing, feeling, thinking, and doing, and they initially rejected it, but of course there was no going back. You know, I installed ATMs. ATMs were going to kill all of the tellers. They didn’t do that at all. But the interesting part is how humans resist the new and the unfamiliar. It protects them. They don’t want a lion coming around the bend to eat them for dinner. The human mind has evolved to be a resistor to the new. If it’s not familiar, you flee it, you fear it, you appease it, you fight it, but you don’t love it. And when you do love it, you don’t know what you’re loving because you haven’t taken the time to really understand it. In some ways, I feel like it’s white-out all over again: the world I know today I won’t be able to use tomorrow, but I don’t really know how to use tomorrow’s world either. And I hate to be betwixt and between; I’m going to dig a hole and be an ostrich and not come out. But you have to come out, because this is the way forward, even for that lovely lady who needed to eat. What we really need to do is apply our creative curiosity and smarts to help humans live better lives. All humans.
Dr. Emily Springer: Absolutely right. Absolutely.
Andi Simon: And this has been an absolutely delicious, wonderful conversation with Dr. Emily Springer. If listeners want to reach you, what’s the best place, your website or LinkedIn?
Dr. Emily Springer: Yeah, absolutely. Well, if you want to reach me, you can check out my consulting company website, which is technosocioadvisory.com. There I lay out the different opportunities and roles that I could play for different companies or projects. And if you’re interested in upskilling in AI, I take a concepts approach and really help people build from the ground up. I don’t run tips-and-tricks AI training, which is a lot of what I’m seeing out there. I want to teach you how to think really critically about this and build your capacity so you can go forward and think critically about what’s coming up in your life, whether that’s in your personal life or in the office. So if you’re interested in that, I offer on-demand AI training at theinclusiveailab.com, and I also do custom training for businesses if folks are interested.
Andi Simon: We’re going to push this out and celebrate the opportunity to share Dr. Emily Springer’s story with all of you, and you should share it too, because these are times that are changing fast. Humans don’t like to change, so it’s time to make change your friend. I preach that often so that you can see, feel, and think in new ways. The times they are changing. I always like to end with my books: you know, go read more. My book On the Brink: A Fresh Lens to Take Your Business to New Heights, Amazon loves you, is all about how a little anthropology can help your business see what’s right there when it gets stuck. In Rethink: Smashing the Myths of Women in Business, you’ll meet women who smashed those myths. And Women Mean Business just came out in September 2023. It’s going gangbusters, and it is just more fun to share the successes of women, women with businesses of purpose and women who just want to make the world a better place. And I do too. This has been wonderful. Thank you for joining me today. I truly appreciated it.
Dr. Emily Springer: Thank you so much.
Andi Simon: Saying goodbye now. Remember, our job is to take your observations and turn them into innovations. That’s what a little anthropology, or sociology, can do. But the times they are changing, so it’s time to see, and then make it happen. Bye now.