
On Tech Ethics Podcast – Why Ethical Tech Is a Competitive Advantage

Season 1 – Episode 42 – Why Ethical Tech Is a Competitive Advantage

Discusses how developing and deploying ethical technology products gives companies a competitive advantage.

 

Podcast Chapters


To easily navigate through our podcast, simply click on the ☰ icon on the player. This will take you straight to the chapter timestamps, allowing you to jump to specific segments and enjoy the parts you’re most interested in.

  1. Podcast Introduction and Disclaimer (00:00:03) Host welcomes listeners to On Tech Ethics with CITI Program, introduces the topic, and provides an educational and legal disclaimer.
  2. Introducing Jennie Baird, Robert Levitan, and the Ethical Tech Project (00:00:42) Jennie Baird and Robert Levitan introduce themselves and outline the mission of the Ethical Tech Project.
  3. From Early Internet Optimism to Ethical Tech Today (01:57) Reflection on early internet ideals, unintended consequences, and why ethical technology matters now.
  4. Mission and Work of the Ethical Tech Project (02:39) Overview of educating and equipping technology and AI builders to make better decisions.
  5. Entrepreneurial Lessons and the AI Inflection Point (03:18) Robert Levitan shares lessons from past technology waves and why AI represents a more powerful moment.
  6. Regulatory Shifts and the Business Case for Ethics (04:21) Discussion of the current deregulatory environment and why ethics must stand on business value.
  7. Why Ethical Tech Is Critical in the AI Era (05:42) Why AI’s power, speed, and scale raise the stakes for responsible design and deployment.
  8. Ethical Tech as a Competitive Advantage (08:28) Framing ethics as a practical, accessible competitive advantage rather than an abstract ideal.
  9. How Ethics Drive Better Business Outcomes (09:01) Examples of how trust, transparency, and guardrails influence customer and enterprise decisions.
  10. Values-Driven Companies and Consumer Trust (11:33) Why consumers and partners prefer companies that clearly articulate and live their values.
  11. Ethics vs. Constraints in a Competitive AI Market (13:37) Reframing ethics as smart risk management rather than a brake on innovation.
  12. Agency, Transparency, and Human Responsibility in AI (18:33) Discussion of transparency, agency, and the role humans play in shaping AI’s societal impact.
  13. Embedding Ethics into Product and Engineering Culture (27:02) Why ethical thinking must be built into teams, workflows, and decision-making processes.
  14. One Practical Takeaway for Builders (31:54) Advice for product and engineering teams to broaden perspective beyond immediate deliverables.
  15. Final Reflections and Closing Remarks (36:28) Guests reflect on responsibility, agency, and shaping the future intentionally as the episode concludes.

 


Episode Transcript


Daniel Smith: Welcome to On Tech Ethics with CITI Program. Today I’m going to speak with Jennie Baird and Robert Levitan, who are the co-chairs of the Ethical Tech Project. We are going to discuss the importance of ethical technology development and implementation. Before we get started, I want to quickly note that this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws, regulations, and guidance that may be discussed in this podcast. In addition, the views expressed in this podcast are solely those of our guests. And on that note, welcome to the podcast, Jennie and Robert.

Jennie Baird: Thank you, great to be here.

Robert Levitan: Hi, great to be here.

Daniel Smith: It’s wonderful to have you. To kick it off, can you each briefly introduce yourselves and share a bit about the Ethical Tech Project and the work you’re focused on today?

Jennie Baird: I’ll lead off since I came to the Ethical Tech Project first. I’m Jennie Baird. I am a longtime technology and media executive. I was most recently the Chief Product Officer at BBC Studios, the commercial arm of the BBC. I joined the Ethical Tech Project, gosh, five years ago, I think, when the organization was founded. And the idea was that many of us who were old school, who started building technology at the dawn of the internet and the beginning of this digital era, we thought information wants to be free and this was going to be a utopian vision of the world because of the internet, and things didn’t really pan out exactly the way we thought. However, I think all of us who are involved with the Ethical Tech Project consider ourselves techno-optimists. We still believe there’s a lot more value creation and value coming from this connected world and that things can be better.

So we started from an ethos of being a group of technologists who believe great technology should make the world better, and we wanted to do something about that. We initially were focused on a lot of issues around data privacy and data dignity. And then obviously, as generative AI has sort of eaten the technology landscape, we are focused much more now on AI. When we talk about what our organization does: we educate, convene, and equip technology and AI builders. This could be product managers, engineers, designers, company builders, founders.

We have a curriculum we use to train and educate the people who are creating technology to make better design, go-to-market, and engineering decisions when they’re bringing their products to market. And I was very, very, very lucky a few months ago to meet Robert Levitan, who is going to introduce himself, but he is a legend in this digital world. He has been a thought leader talking about ethical tech for some time now, and he recently agreed to join me and join our board as Co-Chair, so I’ll hand it over to Robert.

Robert Levitan: I don’t know if there’s much to say, but thank you, Jennie. Very briefly, I am an entrepreneur, I’ve started six companies. I’ve raised hundreds of millions of dollars of venture capital. I’ve had good outcomes, bad outcomes. I’m also a student of history. And during the last year, I started posting on LinkedIn about what’s happening in this moment, from my perspective as an experienced entrepreneur from the early days of the internet and as a student of history. And lo and behold, yes, I talked about lessons learned and things we should be thinking about, unintended consequences.

And Jennie saw it and Jennie reached out to me and said, “Hey, would you like to join me and do this?” So I’m so glad she did. And we are co-chairs of the Ethical Tech Project. It is a nonprofit 501(c)(3). It is underfunded and needs to grow to meet the moment, but I’m really excited to be doing the work because I can’t think of anything more important right now.

Jennie Baird: If I can just build on that a little bit, I think in a different administration, and especially when we were focused on data issues, data privacy, data dignity, there was a sense that we had regulatory tailwinds, that there was going to be a lot of support from government that would drive companies to do the right thing and to be pursuing the right thing. And what we’ve discovered in the current regulatory climate, where it seems like we’re in an anti-regulatory moment, is that we are really having to talk to businesses and business leaders about the reasons why, even without regulation and regulatory pressure, ethical tech is the right thing to do because it’s good for business. That’s really a lot of the message that Robert has been out in the market talking about at this moment, because he comes from that really commercial standpoint of building companies.

Daniel Smith: You both talked about how you were there for the dawn of the internet, and there were ideals at that time that maybe didn’t quite come to fruition as people hoped they would. So now with the rise of AI and the advancement of AI, why is ethical tech such a critical issue for organizations right now?

Robert Levitan: Yeah, I mean, I’ll jump in. I think there’s several reasons. Number one, we don’t want to repeat our past experience, which is, we get all excited about the potential of the technology and we forget that there will be unintended consequences, and we must think about them now, not later. That’s number one. And by the way, I think we’re less naive now than we were, obviously, in 1995 when I launched my first internet company. But still, we’re just in this competitive race and people are not thinking enough ahead and thinking about guardrails. But number two, the second reason is this technology is much more powerful.

The internet was built at a time when it had impact, and over time it had a lot of impact, but this, now built on top of the internet and on top of the connected world, is so much more powerful. And the last thing I’ll say is three, I think this is a moment where we as humans have, in some ways, more agency than we’ve ever had. We have access to more knowledge, and now we have access to AI. So it’s not just more knowledge, it’s also tools to implement the history of human knowledge on our society. And if we don’t realize that this is the moment we are literally designing the future of humanity, well, there’s something wrong with us.

Jennie Baird: We sometimes have conversations about being graybeards and sort of the older generation, and we definitely have the experience of building something that is going to be transformative for society, but also where things can go off the rails. And I think that’s sort of why what we’re doing is so important now. And I agree with Robert, the power that AI brings us is so much. My colleague, Vivienne Ming, has a book coming out called Robot-Proof, and I think the fundamental premise there is that AI can make us much better, but AI can also make us much worse. And that’s a very challenging line to walk: if you want to capture the benefits of AI, which I think we all do, especially as techno-optimists, you’ve got to be very mindful and thoughtful in how you are deploying this technology. It doesn’t just happen like this.

Daniel Smith: I really like your grounding of how ethical technology is a competitive advantage. I think that that’s an approachable way for people to understand the importance of ethics as sometimes they can get lost in some of the existential risks and things like that. So on that note, can you both just talk a little bit about a few examples where ethical design or responsible decision-making has led companies to better outcomes for a product or the company as a whole or its users?

Robert Levitan: Sure, I’ll start. And yeah, I think this is so important. It’s not just the right thing to do, ethics are good business, period. Why is that? And you mentioned it. There’s this sense of risk, there’s this sense of possible doom. There’s fear, but there’s also excitement. So when you have equal parts fear and excitement, what are you going to do? Because you can’t understand this technology. You’re going to work with companies that have some ethics. And what do I mean by that? You’re going to work with companies that state their framework on what they will and won’t do, their level of transparency, their level of safety guardrails. Why? Because you as an individual or you as a business do not want to just play roulette with the risks. You want to decrease the risks while getting the benefit.

So to me, an example, and we’re seeing it in the marketplace, is business customers that are licensing LLMs are moving quickly from one company to another. They’re moving quickly from OpenAI API calls to Anthropic API calls. Why is that? Is there really a difference between the two? I think so, but maybe not as much as in the PR space. But clearly, Anthropic has done a much better job having a constitution. “These are our principles, this is what we believe. Number two, we’re going to give our business customers more transparency into how the system’s working. We’re going to let them build guardrails on top of it.”

They are doing a much better job of saying, “You work with us and you’re going to have less risk and the same or more upside.” Now, there’s some movement happening. Just this morning it was announced that they’re getting rid of some of that because the competitive pressures are so strong. But I do think in general that if a company has, I’m going to say, two things that are key, an ethical framework that they share, “This is what we will and won’t do,” or, “This is what we believe about AI,” and number two, guardrails, “These are the kind of safety and transparency procedures we put in place.” If you have an ethical framework and you have guardrails, you’re more likely to attract consumers and businesses to do business with you. Jennie?

Jennie Baird: Well, I was going to say we’ve done a ton of research showing that consumers prefer to do business with companies that they can trust. So I think that companies that are mindful and thoughtful and put their values first have a competitive advantage. I think there’s also a whole series of companies today that are putting a stake in the ground about what matters and building businesses around that. So I’m thinking of companies like ProRata AI. Where the LLM companies say there’s no way to measure what their models were trained on, ProRata is building a marketplace, models, and processes whereby content creators can get remunerated.

That’s a great business where doing the right thing is great for the business. Another business, I think, is called Scope3. They’re building their business around using AI to figure out what the carbon impact of your ad tech stack is. So there are a lot of really interesting companies who are saying, “We’re values forward, there are things we care about.” And I always talk about this when we talk about ethical tech. It sounds like something philosophical and esoteric. Ethical tech, to Robert’s point, is about knowing what your values are, being able to articulate those, and being able to do the things that reflect those values in your business.

Daniel Smith: Jennie, going back to what you were saying earlier in the conversation, you mentioned that we’re currently in a deregulatory environment. And a common, I think, feeling among people is that ethics might be seen as a cost or a constraint on progress, especially in this hyper-competitive AI market that we all have been seeing going on for the past few years. So in that case, what would you say to somebody that might feel that way and how would you convince them that ethical tech actually is a competitive advantage and it’s not the constraint that they might be feeling that it is?

Jennie Baird: First of all, in the US, we’re in a deregulatory environment. I think in Europe, it’s different and it’s still a very highly regulated market. I’ve worked in both markets. It’s hard to work in a highly regulated market. It’s hard to innovate, those things are true. However, I think what we are trying to do, and where we see the value for companies, is to think first. So there’s always that sense that if you’re in a regulatory environment, or whatever regulatory regime, you have a risk, right? There’s risk. The reason why people do the right thing or the ethical thing is because there’s a risk of violating regulation or policy and being fined, and that’s the risk. But I think that we have regulation to help us avoid actual, real risks, not just to avoid getting a fine.

But I mean, I don’t think any of us really wants to be in a situation where we built a machine that destroyed someone’s life, livelihood, society, institutions, et cetera. So there’s literally the thinking about, “What are the potential outcomes or the potential unintended consequences of your work?” We do an activity with the Ethical Tech Project where we analyze the Facebook algorithm over time. We lay out all the changes in that algorithm over 15 years and what Facebook was optimizing for. And I think that many times when you look at the algorithm, Facebook was optimizing for what they thought was good for business. And I think a lot of it was around this engagement metric.

And one of the things we see is that when you optimize for a single metric, you wind up with blinders on and you don’t see all the potential unintended consequences. And we’re seeing that now, for example, every time you put a prompt into ChatGPT or Claude, your conversation ends with another question. “Can I help you with that?” And the metric there is clearly engagement, it’s attention. It’s the same thing, these machines and the way that we are designing products is to keep us in the product, not to keep us in the world or to solve our problems. So I think when you talk about what is good for business, is it really good for business if we’re all just sitting here in the machine, or is it good for business if we all have active economic lives outside of the machine?

Robert Levitan: Let me jump back to the question about constraints, because I like to use a couple of examples that are very simple for people to understand. There are easy constraints that we can implement that do not slow down innovation. Nobody wants to slow down innovation, that’s not what we’re asking for, and we’re not counting on regulation. But let me give you two examples that I think are just really easy to understand. We all saw what Grok did with making it easy to nudify images. We all agree a 10-year-old kid should not have their face lifted, nudified, and then have their image spread across the internet. We all agree on that. That’s easy, there’s no question. So that’s what we’re talking about.

Does that inhibit innovation, to say that’s not allowed? I don’t think so. It’s a really clear example. So why don’t they do something about it? Because we as individuals need to say at this moment in time, “That’s not acceptable, it shouldn’t happen.” Should the government do it? They should. But anyway, that’s a whole other question. But let me give you another example, because again, we are moving to a world where we obviously don’t know what’s real and what’s not. I wrote a post about this, where Channel Four in the UK did a story about the impact of AI on jobs. They had a reporter talking about this, and they had interview segments with different people across different sectors of the British economy.

And at the end, the reporter says, “So clearly, AI is going to have an impact on all of our jobs. In fact, I am AI.” Now, hello. Can we all agree that we want to live in a world where, if somebody looks, sounds, and acts like a human and is reporting the news, that maybe it’s stamped in some way to let us know it’s AI? That doesn’t impede innovation, that’s not a constraint on innovation. It’s simply, that’s the world I want to live in. And I think most of us do. So why don’t we make-

Jennie Baird: Robert, you touched on two of our key principles at the Ethical Tech Project, agency and transparency. Transparency is exactly that. We’ve worked with the folks who’ve worked on the C2PA Initiative, which is the Coalition for Content Provenance and Authenticity. So what are the methodologies we can use to show whether something is AI generated or human generated? But also, what are the ways that we’re transparent about our models, how they’re working, what they’re built on, et cetera? Transparency is very important. Agency, I wanted to talk about this moment in history, both this anti-regulatory moment and this moment in time, and you gave the Grok example. There’s a weird thing happening, and Robert and I were talking about this the other day, it’s almost like the bigness of the machines and the bigness of the platforms has made us feel that we don’t have agency.

That if the big tech platforms are doing it, we are powerless against that. Because we are seeing, over the past two years, a lot of breakdown of norms. Forget about laws and regulation, what are social norms? Because that’s a lot of what Robert is talking about. You can get through with your legal team and make a case for just about any crazy thing. But as human beings, what is the world we want to live in? As creators, creators of technology and businesses and so forth, what is the world that we want to create? And it’s almost like we’ve been trained to just say, “Oh, we’re never going to change that,” and let it go. And that’s a little bit of what I’m worried is happening in this AI moment.

Robert Levitan: Yeah, we must not resign ourselves to that. We must not accept things that are not what we want.

Jennie Baird: And things don’t have to be like that. I mean…

Robert Levitan: Yeah, I recently wrote, and I love this example, “We must not be deer caught in the headlights of a car.” Instead, we should think of ourselves as early human beings who just discovered fire, and we must make a decision: “What are we going to do with that fire?” Are we going to accept somebody taking the fire and burning down their neighbor’s house, or are we going to say, “No, no, no, you can’t do that. You can use the fire to cook food. You can use the fire to heat your home, but you’re not allowed to do that other thing.” That’s our choice.

Alexa McClellan: I hope you’re enjoying this episode of On Tech Ethics. If you’re interested in hearing conversations about the research industry, join me, Alexa McClellan, for CITI’s other podcast called On Research with CITI Program. You can subscribe wherever you listen to podcasts. Now back to the episode.

Daniel Smith: When it comes to conversations that you’re having with companies about some of these ethical principles like transparency and autonomy and things like that, how do you convey to them the long-term impact? Because something that strikes me is that there’s kind of a tension between the short-term gains of trying to remain competitive in the marketplace and doing the work that’s going to make you successful in the long term.

And I think going back to what you were talking about earlier with Anthropic, that’s an example of where we can see that playing out currently, where maybe Anthropic started off with an ethos of being the more safety-minded AI company, but they were possibly lagging behind in the market a bit, but we’re now really seeing them come to the forefront of being a leader in this space. So I would just like to hear your thoughts more on the conversations or how you’re thinking about the long-term versus the short-term when it comes to this.

Robert Levitan: I’ll share one real example without naming a company, but this, of course, is the struggle. Can we convince companies, and can companies convince their stakeholders, meaning their internal and external company stakeholders, that this is important and will have positive impact? Let’s just say this company is a big data company that works with a lot of public institutions, including government agencies across different states and the federal government. And we said to them, “You’re already using AI, but we get it, you’re busy. You care about AI ethics, you care about being a responsible AI leader, but we get it, you don’t have a department internally to do that and maybe you can’t keep up with everything, but we can be your partner.

“So let us come in, let us partner with you. Let us help your engineers and product managers think about when they’re building products, where things could go wrong, things to take into consideration to make sure that you’re not building something that has risks. Let us help you think about what your ethical framework is as a company, et cetera, et cetera. And then we will promote you as a responsible AI company.” And they said, “Oh yeah, that’s a good idea. We’d love that, but you know what else? We bid on a lot of government contracts and they always ask us about safety.”

And we will say that if we’re working with you, we are kind of certified as an Ethical Tech Project partner, or we are making every effort to minimize risk. And so it’s an example of the long term: you’re going to get big contracts from big partners for big money only if you can convince them that working with you has less risk than maybe working with somebody else. So that’s kind of the best we can do in some ways about what’s going to happen, because we really don’t know. But yes, the competitive pressures out there are really strong and may make people do the wrong thing. Jennie, anything to add to that?

Jennie Baird: No, I was just thinking as you were talking, I mean, one of the challenges of working in this space is that if doing the right thing were easy and clear and we knew what to do, we would all be doing the right thing all the time. It would be a non-issue. The concern around these ethical issues is that it’s not one-size-fits-all and there are often trade-offs. So sometimes we talk about privacy. Privacy can be really in tension with child safety, when you’re protecting people’s privacy and maybe they are producing CSAM or whatever. But a lot of the issues that we work on with companies, they’re not really clear cut, and you are making trade-offs and you are sort of managing tensions between one area of your business and another area of your business.

And I think for people like me who grew up in product and worked on product and engineering teams, we are often kind of order takers: a stakeholder will say, “Oh, can you build X, Y, Z for me?” And we get off on building stuff, so our answer is always, “Yes, we can build that, we can figure out how to build that.” And we always think that it’s about building the thing that our stakeholder wants. One of the things we’re doing is trying to train the people who have a lot of leverage, the people who build things, to open their aperture and see, “Oh, yes, I could build that thing, but maybe there’s another way to build that thing,” or, “Maybe I shouldn’t build that thing because it has [inaudible 00:26:43] or maybe we don’t own the rights to that IP that we’re using to build that thing.”

And so I think that when we talk about competitive advantage for companies, we’re also talking about training their staffs to see more broadly and to have better communication across teams, since we’re all on cross-functional teams. So can you communicate better? Can you develop better processes that create efficiency? I have been on many product teams, I hate to say it, where we built a product and we got to the end and it was like, “Ooh, this has a problem once the legal team has vetted it.” Or, “Ooh, we didn’t realize that we couldn’t go to market with that because it would cost us millions just to get it into the market.”

I mean, that is so typical. So one of the things that we’re trying to do is create a measure-twice, cut-once mentality for folks working with technology, so that they’re thinking about a lot of their choices upfront and making decisions upfront as a cross-disciplinary team, to avoid costs in product development on products they can never bring to market, or products that they bring to market fast but that are buggy or that there’s no product-market fit for. There are so many reasons why doing the right thing can benefit your business.

Daniel Smith: Another thing that strikes me, based on what you were just saying, is that really, in order for ethical tech efforts to succeed, the ideals and the principles need to permeate throughout the organization, almost from a top-down level.

Robert Levitan: Yes.

Jennie Baird: Top-down and bottom-up.

Daniel Smith: Indeed.

Jennie Baird: Because I think one of the challenges of being a leader is that leaders can say words like, “We stand for this.” I mean, one of the exercises we do is we look at the value statements of companies, which often have things that go unstated alongside the “We believe whatever,” like making a profit. But a leader, I think, often thinks that they’re saying the right thing, but that’s not necessarily how things are being done seven levels down the chain where stuff is being built.

And I think it’s really important, going back to agency, that people within a company feel that they have agency to do the right thing, and that they have open channels of communication and processes. One of my dreams would be to reinvent the agile process, so that just like we have retrospectives at the end of a sprint, there is an ethics moment where we explore the ethical tensions in what we’re doing. So I think it’s both top-down and bottom-up, and many times the people at the top really don’t have great visibility into what’s happening in every step of the process. That’s my opinion.

Daniel Smith: You mentioned the kind of rethinking of some processes like the agile process. Are there other strategies that leaders or product and engineering teams can implement to make sure that ethical tech efforts succeed?

Jennie Baird: We actually have a ton of frameworks, checklists, different things, activities that you can do as a team. One of the things we talk about is building your own personal code of ethics. So, to understand your company’s code of ethics and your code of ethics, do those line up with what you’re doing? That is one activity that you could do. We also have one on data strategy. Every company will say they have a data strategy, but if you ask people on the team, “What is your data strategy and how do you live it every day in your job and in the products you produce,” no one can answer it.

They’re like, “But we have a team that does our data strategy.” So we say we have a data strategy. It is often like that. Now, one of the things we’re seeing is that some companies are developing an ethics board or they have a company ethicist, and that person, and that organization, generally sit outside of what’s actually happening on product teams. And there’s a problem with that kind of check-the-box approach, like, “Oh, privacy and compliance. We do that at the end.” Those things need to be built in upfront, so there are a number of things that companies can be doing.

Daniel Smith: And when it comes to implementing those efforts, what are some of the most common mistakes or blind spots you see teams fall into when trying to build or scale ethical tech?

Jennie Baird: We see very few teams that are actually making this a priority.

Robert Levitan: My mantra in the early days of the internet was very simple, “Launch, listen and learn.” I even trademarked that for a little bit, thinking that was cool. It later became, “Fail fast.” They changed it, made it shorter. The reality is no business can afford to just launch, listen and learn right now. I mean, with AI, you can’t just launch something without thinking upfront about two basic things. Again, the first is an ethical framework. “What will we do and not do with AI? Where’s our line here?” That involves a lot of things, it involves privacy. “How will we use our customer information? How will we pull data from other sources? And then how will we deploy it? And how autonomous are we making our AI agents,” et cetera, et cetera.

There’s a lot of ethics right there. But the second is, “Have we identified the points of risk, and are there any guardrails we can build that don’t inhibit our innovation?” Every company should simply be thinking about that. You still want strategic prototyping over endless strategic planning, meaning you still want to launch, listen and learn. But before you launch, listen and learn from the real market, you’ve got to look inward. “What’s our ethical framework? Where can it go wrong, what are the guardrails?” I mean, it’s the right thing to do, but more importantly, it’s good business.

Daniel Smith: And as we look to wrap up our conversation here, I do want to ask just two final questions. And the first one is, if I’m somebody on a product or an engineering or other team, what is one practical thing that I can do at this point in time to help further ethical tech within my organization?

Robert Levitan: Well, Jennie trains people on this stuff every day. We have an Ethical Tech Project fellowship program where we actually teach this stuff. So Jennie, how do you synthesize it? You have hours of this. How do you synthesize it? What’s one thing or a couple of things that product managers and engineers should be doing?

Jennie Baird: I would say just one thing is open your aperture. I think very much as product and engineering people, we are very focused on delivery, that we have a thing we’re trying to complete in front of us and we’re very motivated by that. And what we’re really asking is for people to take their blinders off and understand that their thing that they’re building lives within the context of something greater, and that there are things beyond what’s just in their deliverables roadmap.

Daniel Smith: I think that’s a great piece of advice. And I know, Robert, you mentioned the fellowship and I’m sure there’s other resources out there. So do you both have any suggestions where our listeners can learn more about ethical tech and also the work at the Ethical Tech Project?

Robert Levitan: Sure. I mean, first of all, we’re easy to reach. I’m robert@ethicaltechproject.org, and she’s Jennie, J-E-N-N-I-E, but that’s easy. You can go to our website, reach out to us. But really, we’re looking for companies that want to be leaders in responsible AI and ethical tech. So reach out, we’ll come in, we’ll help train, educate. We’ll set up meetings, bring in experts, share research. We’ll help you be leaders. We’re also looking for product people, product managers, engineers who are building these AI and technology products, because we will train them, and our Ethical Tech Project fellowship is free. I mean, it’s free: 10 one-hour sessions in the evening after work.

We’re doing one this spring in New York City. We did a couple of them in the past year. We’re going to try to do one maybe in Boston or on the West Coast. We do have limited resources, but you can come, you can sign up as an individual, you can reach out to us and partner with us. We want to create a movement of people who talk about these things and let people know that it’s important and now’s the time to do it. And we have more impact than we think. We shouldn’t resign ourselves to, “The world’s going to be what the world’s going to be and it’s going to be dictated by the largest tech companies who are building AI systems.” No, no, no, we have to be involved in what world gets built.

Jennie Baird: I love that.

Daniel Smith: I think that’s a wonderful place to leave our conversation for today, so thank you again, Robert and Jennie.

Robert Levitan: Thank you.

Jennie Baird: Thank you, again.

Daniel Smith: If you enjoyed today’s conversation, I encourage you to check out CITI Program’s other podcasts, courses, and webinars. As technology evolves, so does the need for professionals who understand the ethical responsibilities of its development and use. That is why we developed our new Tech Ethics training solution. This new offering brings together practical, thoughtfully designed courses to help professionals navigate ethical and regulatory challenges with competence.

The courses cover responsible AI, software as a medical device and clinical decision support systems, big data and data science, data management, software development, and more. Check out the link in this episode’s description to learn more. And I just want to give a last special thanks to our line producer, Evelyn Fornell, and to Raymond Longaray and Megan Stuart for production and distribution support. And with that, I look forward to bringing you all more conversations on all things tech ethics.

 


How to Listen and Subscribe to the Podcast

You can find On Tech Ethics with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2120643.rss” into your podcast app.





Meet the Guests


Jennie Baird – Ethical Tech Project

Jennie Baird, Co-Chair of the Ethical Tech Project, is a digital media trailblazer. Throughout her 30+ year career, she has driven growth and innovation for some of media’s most successful digital brands and legacy-to-digital crossovers. Today, she dedicates her work to supporting and advising tech-for-good businesses and organizations.


Robert Levitan, BA – The Ethical Tech Project

Robert Levitan is the Co-Chair of The Ethical Tech Project. He is an entrepreneur and Internet pioneer who has started six companies including iVillage, Flooz.com, and Pando Networks. Robert is a collaborative business leader and an expert in building strategic partnerships that rapidly accelerate business value.

 


Meet the Host


Daniel Smith, Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program

As Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.